May 17 00:12:02.970538 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
May 17 00:12:02.970569 kernel: Linux version 6.6.90-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri May 16 22:39:35 -00 2025
May 17 00:12:02.970583 kernel: KASLR enabled
May 17 00:12:02.970590 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II
May 17 00:12:02.970597 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x1390c1018 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b43d18
May 17 00:12:02.970605 kernel: random: crng init done
May 17 00:12:02.970613 kernel: ACPI: Early table checksum verification disabled
May 17 00:12:02.970621 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS )
May 17 00:12:02.970628 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013)
May 17 00:12:02.970637 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
May 17 00:12:02.970645 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 17 00:12:02.970653 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001)
May 17 00:12:02.970660 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 17 00:12:02.970668 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 17 00:12:02.970677 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 17 00:12:02.970686 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 17 00:12:02.970693 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
May 17 00:12:02.970699 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 17 00:12:02.970706 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013)
May 17 00:12:02.970731 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600
May 17 00:12:02.970739 kernel: NUMA: Failed to initialise from firmware
May 17 00:12:02.970746 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff]
May 17 00:12:02.970752 kernel: NUMA: NODE_DATA [mem 0x13966f800-0x139674fff]
May 17 00:12:02.970758 kernel: Zone ranges:
May 17 00:12:02.970765 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
May 17 00:12:02.970774 kernel: DMA32 empty
May 17 00:12:02.970781 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff]
May 17 00:12:02.970787 kernel: Movable zone start for each node
May 17 00:12:02.970793 kernel: Early memory node ranges
May 17 00:12:02.970800 kernel: node 0: [mem 0x0000000040000000-0x000000013676ffff]
May 17 00:12:02.970807 kernel: node 0: [mem 0x0000000136770000-0x0000000136b3ffff]
May 17 00:12:02.970813 kernel: node 0: [mem 0x0000000136b40000-0x0000000139e1ffff]
May 17 00:12:02.970820 kernel: node 0: [mem 0x0000000139e20000-0x0000000139eaffff]
May 17 00:12:02.970827 kernel: node 0: [mem 0x0000000139eb0000-0x0000000139ebffff]
May 17 00:12:02.970833 kernel: node 0: [mem 0x0000000139ec0000-0x0000000139fdffff]
May 17 00:12:02.970839 kernel: node 0: [mem 0x0000000139fe0000-0x0000000139ffffff]
May 17 00:12:02.970846 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff]
May 17 00:12:02.970856 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
May 17 00:12:02.970862 kernel: psci: probing for conduit method from ACPI.
May 17 00:12:02.970868 kernel: psci: PSCIv1.1 detected in firmware.
May 17 00:12:02.970877 kernel: psci: Using standard PSCI v0.2 function IDs
May 17 00:12:02.970884 kernel: psci: Trusted OS migration not required
May 17 00:12:02.970892 kernel: psci: SMC Calling Convention v1.1
May 17 00:12:02.970900 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
May 17 00:12:02.970907 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
May 17 00:12:02.970914 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
May 17 00:12:02.970921 kernel: pcpu-alloc: [0] 0 [0] 1
May 17 00:12:02.970928 kernel: Detected PIPT I-cache on CPU0
May 17 00:12:02.970935 kernel: CPU features: detected: GIC system register CPU interface
May 17 00:12:02.970942 kernel: CPU features: detected: Hardware dirty bit management
May 17 00:12:02.970948 kernel: CPU features: detected: Spectre-v4
May 17 00:12:02.970955 kernel: CPU features: detected: Spectre-BHB
May 17 00:12:02.970963 kernel: CPU features: kernel page table isolation forced ON by KASLR
May 17 00:12:02.970971 kernel: CPU features: detected: Kernel page table isolation (KPTI)
May 17 00:12:02.970978 kernel: CPU features: detected: ARM erratum 1418040
May 17 00:12:02.970985 kernel: CPU features: detected: SSBS not fully self-synchronizing
May 17 00:12:02.970991 kernel: alternatives: applying boot alternatives
May 17 00:12:02.971000 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=3554ca41327a0c5ba7e4ac1b3147487d73f35805806dcb20264133a9c301eb5d
May 17 00:12:02.971007 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 17 00:12:02.971014 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 17 00:12:02.971021 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 17 00:12:02.971027 kernel: Fallback order for Node 0: 0
May 17 00:12:02.971034 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1008000
May 17 00:12:02.971041 kernel: Policy zone: Normal
May 17 00:12:02.971050 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 17 00:12:02.971057 kernel: software IO TLB: area num 2.
May 17 00:12:02.971063 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB)
May 17 00:12:02.971071 kernel: Memory: 3882872K/4096000K available (10240K kernel code, 2186K rwdata, 8104K rodata, 39424K init, 897K bss, 213128K reserved, 0K cma-reserved)
May 17 00:12:02.971078 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
May 17 00:12:02.971085 kernel: rcu: Preemptible hierarchical RCU implementation.
May 17 00:12:02.971093 kernel: rcu: RCU event tracing is enabled.
May 17 00:12:02.971100 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
May 17 00:12:02.971106 kernel: Trampoline variant of Tasks RCU enabled.
May 17 00:12:02.971113 kernel: Tracing variant of Tasks RCU enabled.
May 17 00:12:02.971120 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 17 00:12:02.971129 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
May 17 00:12:02.971135 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
May 17 00:12:02.971142 kernel: GICv3: 256 SPIs implemented
May 17 00:12:02.971149 kernel: GICv3: 0 Extended SPIs implemented
May 17 00:12:02.971156 kernel: Root IRQ handler: gic_handle_irq
May 17 00:12:02.971162 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
May 17 00:12:02.971169 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
May 17 00:12:02.971176 kernel: ITS [mem 0x08080000-0x0809ffff]
May 17 00:12:02.971183 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1)
May 17 00:12:02.971190 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1)
May 17 00:12:02.971197 kernel: GICv3: using LPI property table @0x00000001000e0000
May 17 00:12:02.971204 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000
May 17 00:12:02.971212 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 17 00:12:02.971219 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 00:12:02.971226 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
May 17 00:12:02.971233 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
May 17 00:12:02.971240 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
May 17 00:12:02.971247 kernel: Console: colour dummy device 80x25
May 17 00:12:02.971254 kernel: ACPI: Core revision 20230628
May 17 00:12:02.971272 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
May 17 00:12:02.971279 kernel: pid_max: default: 32768 minimum: 301
May 17 00:12:02.971286 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 17 00:12:02.971295 kernel: landlock: Up and running.
May 17 00:12:02.971302 kernel: SELinux: Initializing.
May 17 00:12:02.971310 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 17 00:12:02.971317 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 17 00:12:02.971324 kernel: ACPI PPTT: PPTT table found, but unable to locate core 1 (1)
May 17 00:12:02.971331 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 17 00:12:02.971339 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 17 00:12:02.971346 kernel: rcu: Hierarchical SRCU implementation.
May 17 00:12:02.973427 kernel: rcu: Max phase no-delay instances is 400.
May 17 00:12:02.973476 kernel: Platform MSI: ITS@0x8080000 domain created
May 17 00:12:02.973484 kernel: PCI/MSI: ITS@0x8080000 domain created
May 17 00:12:02.973492 kernel: Remapping and enabling EFI services.
May 17 00:12:02.973499 kernel: smp: Bringing up secondary CPUs ...
May 17 00:12:02.973506 kernel: Detected PIPT I-cache on CPU1
May 17 00:12:02.973514 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
May 17 00:12:02.973521 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000
May 17 00:12:02.973528 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 00:12:02.973536 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
May 17 00:12:02.973546 kernel: smp: Brought up 1 node, 2 CPUs
May 17 00:12:02.973553 kernel: SMP: Total of 2 processors activated.
May 17 00:12:02.973560 kernel: CPU features: detected: 32-bit EL0 Support
May 17 00:12:02.973573 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
May 17 00:12:02.973582 kernel: CPU features: detected: Common not Private translations
May 17 00:12:02.973589 kernel: CPU features: detected: CRC32 instructions
May 17 00:12:02.973597 kernel: CPU features: detected: Enhanced Virtualization Traps
May 17 00:12:02.973605 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
May 17 00:12:02.973613 kernel: CPU features: detected: LSE atomic instructions
May 17 00:12:02.973620 kernel: CPU features: detected: Privileged Access Never
May 17 00:12:02.973628 kernel: CPU features: detected: RAS Extension Support
May 17 00:12:02.973637 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
May 17 00:12:02.973645 kernel: CPU: All CPU(s) started at EL1
May 17 00:12:02.973653 kernel: alternatives: applying system-wide alternatives
May 17 00:12:02.973661 kernel: devtmpfs: initialized
May 17 00:12:02.973669 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 17 00:12:02.973676 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
May 17 00:12:02.973686 kernel: pinctrl core: initialized pinctrl subsystem
May 17 00:12:02.973694 kernel: SMBIOS 3.0.0 present.
May 17 00:12:02.973702 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017
May 17 00:12:02.973710 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 17 00:12:02.973718 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
May 17 00:12:02.973728 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
May 17 00:12:02.973735 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
May 17 00:12:02.973744 kernel: audit: initializing netlink subsys (disabled)
May 17 00:12:02.973751 kernel: audit: type=2000 audit(0.013:1): state=initialized audit_enabled=0 res=1
May 17 00:12:02.973761 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 17 00:12:02.973769 kernel: cpuidle: using governor menu
May 17 00:12:02.973777 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
May 17 00:12:02.973784 kernel: ASID allocator initialised with 32768 entries
May 17 00:12:02.973792 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 17 00:12:02.973799 kernel: Serial: AMBA PL011 UART driver
May 17 00:12:02.973807 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
May 17 00:12:02.973815 kernel: Modules: 0 pages in range for non-PLT usage
May 17 00:12:02.973823 kernel: Modules: 509024 pages in range for PLT usage
May 17 00:12:02.973832 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 17 00:12:02.973840 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
May 17 00:12:02.973848 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
May 17 00:12:02.973856 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
May 17 00:12:02.973863 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 17 00:12:02.973870 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
May 17 00:12:02.973878 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
May 17 00:12:02.973886 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
May 17 00:12:02.973893 kernel: ACPI: Added _OSI(Module Device)
May 17 00:12:02.973902 kernel: ACPI: Added _OSI(Processor Device)
May 17 00:12:02.973909 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 17 00:12:02.973917 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 17 00:12:02.973924 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 17 00:12:02.973932 kernel: ACPI: Interpreter enabled
May 17 00:12:02.973940 kernel: ACPI: Using GIC for interrupt routing
May 17 00:12:02.973947 kernel: ACPI: MCFG table detected, 1 entries
May 17 00:12:02.973955 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
May 17 00:12:02.973962 kernel: printk: console [ttyAMA0] enabled
May 17 00:12:02.973972 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 17 00:12:02.974155 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 17 00:12:02.974235 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
May 17 00:12:02.974330 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
May 17 00:12:02.975528 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
May 17 00:12:02.975620 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
May 17 00:12:02.975630 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
May 17 00:12:02.975646 kernel: PCI host bridge to bus 0000:00
May 17 00:12:02.975727 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
May 17 00:12:02.975790 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
May 17 00:12:02.975870 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
May 17 00:12:02.975933 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 17 00:12:02.976039 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
May 17 00:12:02.976735 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000
May 17 00:12:02.976841 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff]
May 17 00:12:02.976914 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref]
May 17 00:12:02.977000 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
May 17 00:12:02.977072 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff]
May 17 00:12:02.977150 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
May 17 00:12:02.977221 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff]
May 17 00:12:02.977323 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
May 17 00:12:02.977453 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff]
May 17 00:12:02.977549 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
May 17 00:12:02.977623 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff]
May 17 00:12:02.977700 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
May 17 00:12:02.977770 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff]
May 17 00:12:02.977854 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
May 17 00:12:02.977921 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff]
May 17 00:12:02.977997 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
May 17 00:12:02.978067 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff]
May 17 00:12:02.978157 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
May 17 00:12:02.978232 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff]
May 17 00:12:02.979482 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
May 17 00:12:02.979596 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff]
May 17 00:12:02.979674 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002
May 17 00:12:02.979740 kernel: pci 0000:00:04.0: reg 0x10: [io 0x0000-0x0007]
May 17 00:12:02.979820 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
May 17 00:12:02.979892 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff]
May 17 00:12:02.979970 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
May 17 00:12:02.980043 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
May 17 00:12:02.980125 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
May 17 00:12:02.980195 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit]
May 17 00:12:02.980296 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
May 17 00:12:02.981464 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff]
May 17 00:12:02.981566 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref]
May 17 00:12:02.981655 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
May 17 00:12:02.981726 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref]
May 17 00:12:02.981802 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
May 17 00:12:02.981873 kernel: pci 0000:05:00.0: reg 0x14: [mem 0x10800000-0x10800fff]
May 17 00:12:02.981948 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref]
May 17 00:12:02.982027 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
May 17 00:12:02.982096 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff]
May 17 00:12:02.982179 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref]
May 17 00:12:02.982280 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
May 17 00:12:02.982393 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff]
May 17 00:12:02.982465 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref]
May 17 00:12:02.982535 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
May 17 00:12:02.982612 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
May 17 00:12:02.982685 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000
May 17 00:12:02.982751 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000
May 17 00:12:02.982821 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
May 17 00:12:02.982887 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
May 17 00:12:02.982954 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000
May 17 00:12:02.983024 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
May 17 00:12:02.983092 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000
May 17 00:12:02.983161 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
May 17 00:12:02.983231 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
May 17 00:12:02.983315 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000
May 17 00:12:02.985466 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
May 17 00:12:02.985568 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
May 17 00:12:02.985640 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000
May 17 00:12:02.985710 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff] to [bus 05] add_size 100000 add_align 100000
May 17 00:12:02.985784 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
May 17 00:12:02.985862 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000
May 17 00:12:02.985928 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000
May 17 00:12:02.986001 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
May 17 00:12:02.986069 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000
May 17 00:12:02.986135 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000
May 17 00:12:02.986205 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
May 17 00:12:02.986316 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000
May 17 00:12:02.986797 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000
May 17 00:12:02.986895 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
May 17 00:12:02.986965 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000
May 17 00:12:02.987031 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000
May 17 00:12:02.987101 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x10000000-0x101fffff]
May 17 00:12:02.987168 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x8000000000-0x80001fffff 64bit pref]
May 17 00:12:02.987242 kernel: pci 0000:00:02.1: BAR 14: assigned [mem 0x10200000-0x103fffff]
May 17 00:12:02.987330 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref]
May 17 00:12:02.987669 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff]
May 17 00:12:02.987754 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref]
May 17 00:12:02.987825 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff]
May 17 00:12:02.987921 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref]
May 17 00:12:02.988000 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff]
May 17 00:12:02.988813 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref]
May 17 00:12:02.988932 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff]
May 17 00:12:02.989000 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref]
May 17 00:12:02.989078 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff]
May 17 00:12:02.989150 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref]
May 17 00:12:02.989219 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff]
May 17 00:12:02.989351 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref]
May 17 00:12:02.989568 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff]
May 17 00:12:02.989641 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref]
May 17 00:12:02.989711 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref]
May 17 00:12:02.989777 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff]
May 17 00:12:02.989847 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff]
May 17 00:12:02.989913 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
May 17 00:12:02.989982 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff]
May 17 00:12:02.990048 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
May 17 00:12:02.990121 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff]
May 17 00:12:02.990186 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
May 17 00:12:02.990280 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff]
May 17 00:12:02.990365 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
May 17 00:12:02.990442 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff]
May 17 00:12:02.990507 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
May 17 00:12:02.990575 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff]
May 17 00:12:02.990642 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
May 17 00:12:02.990716 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff]
May 17 00:12:02.990781 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
May 17 00:12:02.990847 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff]
May 17 00:12:02.990914 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
May 17 00:12:02.990990 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff]
May 17 00:12:02.991057 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x9000-0x9fff]
May 17 00:12:02.991130 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007]
May 17 00:12:02.991204 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x10000000-0x1007ffff pref]
May 17 00:12:02.991292 kernel: pci 0000:01:00.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
May 17 00:12:02.991374 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff]
May 17 00:12:02.991444 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
May 17 00:12:02.991510 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
May 17 00:12:02.991577 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]
May 17 00:12:02.991645 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]
May 17 00:12:02.991724 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit]
May 17 00:12:02.991832 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
May 17 00:12:02.991914 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
May 17 00:12:02.991981 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]
May 17 00:12:02.992050 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]
May 17 00:12:02.992124 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref]
May 17 00:12:02.992196 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff]
May 17 00:12:02.992314 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
May 17 00:12:02.992443 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
May 17 00:12:02.992513 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]
May 17 00:12:02.992578 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]
May 17 00:12:02.992652 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref]
May 17 00:12:02.992719 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
May 17 00:12:02.992787 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
May 17 00:12:02.992862 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]
May 17 00:12:02.992929 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]
May 17 00:12:02.993004 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref]
May 17 00:12:02.993074 kernel: pci 0000:05:00.0: BAR 1: assigned [mem 0x10800000-0x10800fff]
May 17 00:12:02.993142 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
May 17 00:12:02.993209 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
May 17 00:12:02.993298 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]
May 17 00:12:02.993381 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]
May 17 00:12:02.993464 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref]
May 17 00:12:02.993536 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff]
May 17 00:12:02.993604 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
May 17 00:12:02.993673 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
May 17 00:12:02.993738 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]
May 17 00:12:02.993805 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]
May 17 00:12:02.993905 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref]
May 17 00:12:02.993978 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref]
May 17 00:12:02.994052 kernel: pci 0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff]
May 17 00:12:02.994124 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
May 17 00:12:02.994192 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
May 17 00:12:02.994270 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]
May 17 00:12:02.994343 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]
May 17 00:12:02.994514 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
May 17 00:12:02.994588 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
May 17 00:12:02.994652 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]
May 17 00:12:02.994724 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]
May 17 00:12:02.994790 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
May 17 00:12:02.994856 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]
May 17 00:12:02.994921 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff]
May 17 00:12:02.994986 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]
May 17 00:12:02.995056 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
May 17 00:12:02.995119 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
May 17 00:12:02.995186 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
May 17 00:12:02.995336 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
May 17 00:12:02.995518 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff]
May 17 00:12:02.995589 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref]
May 17 00:12:02.995669 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff]
May 17 00:12:02.995740 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff]
May 17 00:12:02.995803 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref]
May 17 00:12:02.995883 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff]
May 17 00:12:02.995955 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff]
May 17 00:12:02.996032 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref]
May 17 00:12:02.996103 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff]
May 17 00:12:02.996170 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff]
May 17 00:12:02.996242 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref]
May 17 00:12:02.996332 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff]
May 17 00:12:02.996444 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff]
May 17 00:12:02.996509 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref]
May 17 00:12:02.996580 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff]
May 17 00:12:02.996641 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff]
May 17 00:12:02.996705 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref]
May 17 00:12:02.996776 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff]
May 17 00:12:02.996841 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff]
May 17 00:12:02.996901 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref]
May 17 00:12:02.996970 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff]
May 17 00:12:02.997030 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff]
May 17 00:12:02.997091 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref]
May 17 00:12:02.997170 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff]
May 17 00:12:02.997232 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff]
May 17 00:12:02.997308 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref]
May 17 00:12:02.997319 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
May 17 00:12:02.997327 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
May 17 00:12:02.997335 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
May 17 00:12:02.997343 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
May 17 00:12:02.997363 kernel: iommu: Default domain type: Translated
May 17 00:12:02.997372 kernel: iommu: DMA domain TLB invalidation policy: strict mode
May 17 00:12:02.997380 kernel: efivars: Registered efivars operations
May 17 00:12:02.997388 kernel: vgaarb: loaded
May 17 00:12:02.997396 kernel: clocksource: Switched to clocksource arch_sys_counter
May 17 00:12:02.997404 kernel: VFS: Disk quotas dquot_6.6.0
May 17 00:12:02.997412 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 17 00:12:02.997420 kernel: pnp: PnP ACPI init
May 17 00:12:02.997499 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
May 17 00:12:02.997514 kernel: pnp: PnP ACPI: found 1 devices
May 17 00:12:02.997522 kernel: NET: Registered PF_INET protocol family
May 17 00:12:02.997530 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 17 00:12:02.997538 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 17 00:12:02.997546 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 17 00:12:02.997554 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 17 00:12:02.997562 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 17 00:12:02.997570 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 17 00:12:02.997578 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 17 00:12:02.997587 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 17 00:12:02.997595 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 17 00:12:02.997669 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002)
May 17 00:12:02.997695 kernel: PCI: CLS 0 bytes, default 64
May 17 00:12:02.997703 kernel: kvm [1]: HYP mode not available
May 17 00:12:02.997711 kernel: Initialise system trusted keyrings
May 17 00:12:02.997719 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 17 00:12:02.997727 kernel: Key type asymmetric registered
May 17 00:12:02.997735 kernel: Asymmetric key parser 'x509' registered
May 17 00:12:02.997746 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
May 17 00:12:02.997754 kernel: io scheduler mq-deadline registered
May 17 00:12:02.997762 kernel: io scheduler kyber registered
May 17 00:12:02.997770 kernel: io scheduler bfq registered
May 17 00:12:02.997778 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
May 17 00:12:02.997856 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50
May 17 00:12:02.997925 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50
May 17 00:12:02.997991 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
May 17 00:12:02.998063 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51
May 17 00:12:02.998130 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51
May 17 00:12:02.998197 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
May 17 00:12:02.998318 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52
May 17 00:12:02.998463 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52
May 17 00:12:02.998538 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
May 17 00:12:02.998616 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53
May 17 00:12:02.998686 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53
May 17 00:12:02.998751 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
May 17 00:12:02.998818 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54
May 17 00:12:02.998883 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54
May 17 00:12:02.998948 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
May 17 00:12:02.999021 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55
May 17 00:12:02.999087 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55
May 17 00:12:02.999153 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
May 17 00:12:02.999220 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56
May 17 00:12:02.999310 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56
May 17 00:12:02.999412 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
May 17 00:12:02.999485 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57
May 17 00:12:02.999553 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57
May 17 00:12:02.999620 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
May 17 00:12:02.999631 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38
May 17 00:12:02.999697 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58
May 17 00:12:02.999765 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58
May 17 00:12:02.999834 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
May 17 00:12:02.999845 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
May 17 00:12:02.999853 kernel: ACPI: button: Power Button [PWRB]
May 17 00:12:02.999861 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
May 17 00:12:02.999931 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002)
May 17 00:12:03.000004 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002)
May 17 00:12:03.000015 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 17 00:12:03.000023 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
May 17 00:12:03.000093 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001)
May 17 00:12:03.000104 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A
May 17 00:12:03.000112 kernel: thunder_xcv, ver 1.0
May 17 00:12:03.000120 kernel: thunder_bgx, ver 1.0
May 17 00:12:03.000128 kernel: nicpf, ver 1.0
May 17 00:12:03.000135 kernel: nicvf, ver 1.0
May 17 00:12:03.000215 kernel: rtc-efi rtc-efi.0: registered as rtc0
May 17 00:12:03.000291 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-17T00:12:02 UTC (1747440722)
May 17 00:12:03.000305 kernel: hid: raw HID events driver (C) Jiri Kosina
May 17 00:12:03.000313 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
May 17 00:12:03.000321 kernel: watchdog: Delayed init of the lockup detector failed: -19
May 17 00:12:03.000329 kernel: watchdog: Hard watchdog permanently disabled
May 17 00:12:03.000337 kernel: NET: Registered PF_INET6 protocol family
May 17 00:12:03.000345 kernel: Segment Routing with IPv6
May 17 00:12:03.000352 kernel: In-situ OAM (IOAM) with IPv6
May 17 00:12:03.000444 kernel: NET: Registered PF_PACKET protocol family
May 17 00:12:03.000462 kernel: Key type dns_resolver registered
May 17 00:12:03.000473 kernel: registered taskstats version 1
May 17 00:12:03.000482 kernel: Loading compiled-in X.509 certificates
May 17 00:12:03.000490 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.90-flatcar: 02f7129968574a1ae76b1ee42e7674ea1c42071b'
May 17 00:12:03.000498 kernel: Key type .fscrypt registered
May 17 00:12:03.000505 kernel: Key type fscrypt-provisioning registered
May 17 00:12:03.000513 kernel: ima: No TPM chip found, activating TPM-bypass!
May 17 00:12:03.000521 kernel: ima: Allocated hash algorithm: sha1
May 17 00:12:03.000529 kernel: ima: No architecture policies found
May 17 00:12:03.000537 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
May 17 00:12:03.000546 kernel: clk: Disabling unused clocks
May 17 00:12:03.000554 kernel: Freeing unused kernel memory: 39424K
May 17 00:12:03.000562 kernel: Run /init as init process
May 17 00:12:03.000570 kernel: with arguments:
May 17 00:12:03.000577 kernel: /init
May 17 00:12:03.000585 kernel: with environment:
May 17 00:12:03.000592 kernel: HOME=/
May 17 00:12:03.000600 kernel: TERM=linux
May 17 00:12:03.000607 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 17 00:12:03.000619 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 17 00:12:03.000630 systemd[1]: Detected virtualization kvm.
May 17 00:12:03.000638 systemd[1]: Detected architecture arm64.
May 17 00:12:03.000646 systemd[1]: Running in initrd.
May 17 00:12:03.000655 systemd[1]: No hostname configured, using default hostname.
May 17 00:12:03.000663 systemd[1]: Hostname set to .
May 17 00:12:03.000671 systemd[1]: Initializing machine ID from VM UUID.
May 17 00:12:03.000682 systemd[1]: Queued start job for default target initrd.target.
May 17 00:12:03.000690 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 17 00:12:03.000699 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 17 00:12:03.000708 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 17 00:12:03.000716 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 17 00:12:03.000727 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 17 00:12:03.000736 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 17 00:12:03.000748 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 17 00:12:03.000757 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 17 00:12:03.000765 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 17 00:12:03.000774 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 17 00:12:03.000783 systemd[1]: Reached target paths.target - Path Units.
May 17 00:12:03.000791 systemd[1]: Reached target slices.target - Slice Units.
May 17 00:12:03.000800 systemd[1]: Reached target swap.target - Swaps.
May 17 00:12:03.000808 systemd[1]: Reached target timers.target - Timer Units.
May 17 00:12:03.000818 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 17 00:12:03.000827 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 17 00:12:03.000835 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 17 00:12:03.000843 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
May 17 00:12:03.000852 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 17 00:12:03.000861 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 17 00:12:03.000869 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 17 00:12:03.000877 systemd[1]: Reached target sockets.target - Socket Units.
May 17 00:12:03.000886 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 17 00:12:03.000896 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 17 00:12:03.000904 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 17 00:12:03.000913 systemd[1]: Starting systemd-fsck-usr.service...
May 17 00:12:03.000922 systemd[1]: Starting systemd-journald.service - Journal Service...
May 17 00:12:03.000930 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 17 00:12:03.000939 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 17 00:12:03.000947 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 17 00:12:03.000981 systemd-journald[235]: Collecting audit messages is disabled.
May 17 00:12:03.001004 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 17 00:12:03.001013 systemd[1]: Finished systemd-fsck-usr.service.
May 17 00:12:03.001024 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 17 00:12:03.001032 kernel: Bridge firewalling registered
May 17 00:12:03.001040 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 17 00:12:03.001049 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 17 00:12:03.001057 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 17 00:12:03.001066 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 17 00:12:03.001075 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 17 00:12:03.001086 systemd-journald[235]: Journal started
May 17 00:12:03.001105 systemd-journald[235]: Runtime Journal (/run/log/journal/2e35d559ccb845539889e374fb97bafa) is 8.0M, max 76.6M, 68.6M free.
May 17 00:12:02.958017 systemd-modules-load[236]: Inserted module 'overlay'
May 17 00:12:02.978408 systemd-modules-load[236]: Inserted module 'br_netfilter'
May 17 00:12:03.004481 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 17 00:12:03.021611 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 17 00:12:03.024566 systemd[1]: Started systemd-journald.service - Journal Service.
May 17 00:12:03.025198 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 17 00:12:03.036713 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 17 00:12:03.040412 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 17 00:12:03.041539 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 17 00:12:03.047763 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 17 00:12:03.057719 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 17 00:12:03.060837 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 17 00:12:03.081954 dracut-cmdline[270]: dracut-dracut-053
May 17 00:12:03.089373 dracut-cmdline[270]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=3554ca41327a0c5ba7e4ac1b3147487d73f35805806dcb20264133a9c301eb5d
May 17 00:12:03.110305 systemd-resolved[272]: Positive Trust Anchors:
May 17 00:12:03.110962 systemd-resolved[272]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 17 00:12:03.110999 systemd-resolved[272]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 17 00:12:03.122330 systemd-resolved[272]: Defaulting to hostname 'linux'.
May 17 00:12:03.123723 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 17 00:12:03.125067 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 17 00:12:03.204433 kernel: SCSI subsystem initialized
May 17 00:12:03.208460 kernel: Loading iSCSI transport class v2.0-870.
May 17 00:12:03.216453 kernel: iscsi: registered transport (tcp)
May 17 00:12:03.231430 kernel: iscsi: registered transport (qla4xxx)
May 17 00:12:03.231510 kernel: QLogic iSCSI HBA Driver
May 17 00:12:03.295198 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 17 00:12:03.301637 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 17 00:12:03.323400 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 17 00:12:03.323468 kernel: device-mapper: uevent: version 1.0.3
May 17 00:12:03.323480 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
May 17 00:12:03.377397 kernel: raid6: neonx8 gen() 15549 MB/s
May 17 00:12:03.393443 kernel: raid6: neonx4 gen() 10167 MB/s
May 17 00:12:03.411510 kernel: raid6: neonx2 gen() 12826 MB/s
May 17 00:12:03.427429 kernel: raid6: neonx1 gen() 10356 MB/s
May 17 00:12:03.444453 kernel: raid6: int64x8 gen() 6868 MB/s
May 17 00:12:03.461431 kernel: raid6: int64x4 gen() 7265 MB/s
May 17 00:12:03.478469 kernel: raid6: int64x2 gen() 6077 MB/s
May 17 00:12:03.495453 kernel: raid6: int64x1 gen() 5014 MB/s
May 17 00:12:03.495560 kernel: raid6: using algorithm neonx8 gen() 15549 MB/s
May 17 00:12:03.512401 kernel: raid6: .... xor() 11865 MB/s, rmw enabled
May 17 00:12:03.512487 kernel: raid6: using neon recovery algorithm
May 17 00:12:03.517557 kernel: xor: measuring software checksum speed
May 17 00:12:03.517614 kernel: 8regs : 19802 MB/sec
May 17 00:12:03.517638 kernel: 32regs : 19636 MB/sec
May 17 00:12:03.517671 kernel: arm64_neon : 26927 MB/sec
May 17 00:12:03.518390 kernel: xor: using function: arm64_neon (26927 MB/sec)
May 17 00:12:03.571427 kernel: Btrfs loaded, zoned=no, fsverity=no
May 17 00:12:03.586234 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 17 00:12:03.592591 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 17 00:12:03.619954 systemd-udevd[454]: Using default interface naming scheme 'v255'.
May 17 00:12:03.623883 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 17 00:12:03.634994 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 17 00:12:03.657216 dracut-pre-trigger[460]: rd.md=0: removing MD RAID activation
May 17 00:12:03.696371 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 17 00:12:03.701593 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 17 00:12:03.773753 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 17 00:12:03.781048 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 17 00:12:03.807509 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 17 00:12:03.810785 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 17 00:12:03.812829 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 17 00:12:03.814733 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 17 00:12:03.819590 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 17 00:12:03.848168 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 17 00:12:03.895387 kernel: ACPI: bus type USB registered
May 17 00:12:03.895452 kernel: usbcore: registered new interface driver usbfs
May 17 00:12:03.911428 kernel: usbcore: registered new interface driver hub
May 17 00:12:03.911498 kernel: usbcore: registered new device driver usb
May 17 00:12:03.921720 kernel: scsi host0: Virtio SCSI HBA
May 17 00:12:03.920736 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 17 00:12:03.920881 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 17 00:12:03.923745 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 17 00:12:03.924304 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 17 00:12:03.924510 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 17 00:12:03.925130 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 17 00:12:03.931458 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5
May 17 00:12:03.931577 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
May 17 00:12:03.940461 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 17 00:12:03.952199 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 17 00:12:03.959622 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 17 00:12:03.962939 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
May 17 00:12:03.963179 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1
May 17 00:12:03.963294 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
May 17 00:12:03.967944 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
May 17 00:12:03.968166 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2
May 17 00:12:03.968292 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed
May 17 00:12:03.969403 kernel: sr 0:0:0:0: Power-on or device reset occurred
May 17 00:12:03.969624 kernel: hub 1-0:1.0: USB hub found
May 17 00:12:03.971041 kernel: hub 1-0:1.0: 4 ports detected
May 17 00:12:03.971224 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray
May 17 00:12:03.971374 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
May 17 00:12:03.972403 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
May 17 00:12:03.973373 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
May 17 00:12:03.976380 kernel: hub 2-0:1.0: USB hub found
May 17 00:12:03.976597 kernel: hub 2-0:1.0: 4 ports detected
May 17 00:12:03.989567 kernel: sd 0:0:0:1: Power-on or device reset occurred
May 17 00:12:03.992742 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB)
May 17 00:12:03.992992 kernel: sd 0:0:0:1: [sda] Write Protect is off
May 17 00:12:03.993085 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08
May 17 00:12:03.994391 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
May 17 00:12:04.003570 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 17 00:12:04.003649 kernel: GPT:17805311 != 80003071
May 17 00:12:04.003662 kernel: GPT:Alternate GPT header not at the end of the disk.
May 17 00:12:04.003674 kernel: GPT:17805311 != 80003071
May 17 00:12:04.003684 kernel: GPT: Use GNU Parted to correct GPT errors.
May 17 00:12:04.003695 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 17 00:12:04.005398 kernel: sd 0:0:0:1: [sda] Attached SCSI disk
May 17 00:12:04.004218 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 17 00:12:04.049391 kernel: BTRFS: device fsid 4797bc80-d55e-4b4a-8ede-cb88964b0162 devid 1 transid 43 /dev/sda3 scanned by (udev-worker) (498) May 17 00:12:04.054383 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (500) May 17 00:12:04.063698 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. May 17 00:12:04.072058 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. May 17 00:12:04.080047 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. May 17 00:12:04.089496 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. May 17 00:12:04.090170 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. May 17 00:12:04.103987 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 17 00:12:04.112088 disk-uuid[570]: Primary Header is updated. May 17 00:12:04.112088 disk-uuid[570]: Secondary Entries is updated. May 17 00:12:04.112088 disk-uuid[570]: Secondary Header is updated. May 17 00:12:04.116533 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 17 00:12:04.214393 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd May 17 00:12:04.349898 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1 May 17 00:12:04.349980 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 May 17 00:12:04.350333 kernel: usbcore: registered new interface driver usbhid May 17 00:12:04.350396 kernel: usbhid: USB HID core driver May 17 00:12:04.458449 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd May 17 00:12:04.588422 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2 May 17 00:12:04.641414 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0 May 17 00:12:05.137380 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 17 00:12:05.138592 disk-uuid[571]: The operation has completed successfully. May 17 00:12:05.199583 systemd[1]: disk-uuid.service: Deactivated successfully. May 17 00:12:05.200473 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 17 00:12:05.210611 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 17 00:12:05.226114 sh[590]: Success May 17 00:12:05.239407 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" May 17 00:12:05.301852 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 17 00:12:05.314794 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 17 00:12:05.319196 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
May 17 00:12:05.332933 kernel: BTRFS info (device dm-0): first mount of filesystem 4797bc80-d55e-4b4a-8ede-cb88964b0162 May 17 00:12:05.333061 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm May 17 00:12:05.333102 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead May 17 00:12:05.333592 kernel: BTRFS info (device dm-0): disabling log replay at mount time May 17 00:12:05.334393 kernel: BTRFS info (device dm-0): using free space tree May 17 00:12:05.340978 kernel: BTRFS info (device dm-0): enabling ssd optimizations May 17 00:12:05.342584 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 17 00:12:05.343769 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 17 00:12:05.349765 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 17 00:12:05.360563 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... May 17 00:12:05.370498 kernel: BTRFS info (device sda6): first mount of filesystem 28a3b64b-9ec4-4fbe-928b-f7ea14288ccf May 17 00:12:05.370561 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm May 17 00:12:05.370585 kernel: BTRFS info (device sda6): using free space tree May 17 00:12:05.376626 kernel: BTRFS info (device sda6): enabling ssd optimizations May 17 00:12:05.376687 kernel: BTRFS info (device sda6): auto enabling async discard May 17 00:12:05.395097 systemd[1]: mnt-oem.mount: Deactivated successfully. May 17 00:12:05.396501 kernel: BTRFS info (device sda6): last unmount of filesystem 28a3b64b-9ec4-4fbe-928b-f7ea14288ccf May 17 00:12:05.404286 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 17 00:12:05.412677 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 17 00:12:05.503857 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 17 00:12:05.513863 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 17 00:12:05.524631 ignition[674]: Ignition 2.19.0 May 17 00:12:05.525342 ignition[674]: Stage: fetch-offline May 17 00:12:05.525517 ignition[674]: no configs at "/usr/lib/ignition/base.d" May 17 00:12:05.525529 ignition[674]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" May 17 00:12:05.525718 ignition[674]: parsed url from cmdline: "" May 17 00:12:05.525721 ignition[674]: no config URL provided May 17 00:12:05.525726 ignition[674]: reading system config file "/usr/lib/ignition/user.ign" May 17 00:12:05.525735 ignition[674]: no config at "/usr/lib/ignition/user.ign" May 17 00:12:05.525740 ignition[674]: failed to fetch config: resource requires networking May 17 00:12:05.529953 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 17 00:12:05.526023 ignition[674]: Ignition finished successfully May 17 00:12:05.540118 systemd-networkd[777]: lo: Link UP May 17 00:12:05.540133 systemd-networkd[777]: lo: Gained carrier May 17 00:12:05.541940 systemd-networkd[777]: Enumeration completed May 17 00:12:05.542080 systemd[1]: Started systemd-networkd.service - Network Configuration. May 17 00:12:05.543084 systemd-networkd[777]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
May 17 00:12:05.543087 systemd-networkd[777]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 17 00:12:05.543931 systemd-networkd[777]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 17 00:12:05.543934 systemd-networkd[777]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. May 17 00:12:05.544073 systemd[1]: Reached target network.target - Network. May 17 00:12:05.546970 systemd-networkd[777]: eth0: Link UP May 17 00:12:05.546973 systemd-networkd[777]: eth0: Gained carrier May 17 00:12:05.546983 systemd-networkd[777]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 17 00:12:05.551200 systemd-networkd[777]: eth1: Link UP May 17 00:12:05.551259 systemd-networkd[777]: eth1: Gained carrier May 17 00:12:05.551269 systemd-networkd[777]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 17 00:12:05.552648 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... May 17 00:12:05.566638 ignition[781]: Ignition 2.19.0 May 17 00:12:05.566647 ignition[781]: Stage: fetch May 17 00:12:05.566849 ignition[781]: no configs at "/usr/lib/ignition/base.d" May 17 00:12:05.566858 ignition[781]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" May 17 00:12:05.566969 ignition[781]: parsed url from cmdline: "" May 17 00:12:05.566973 ignition[781]: no config URL provided May 17 00:12:05.566978 ignition[781]: reading system config file "/usr/lib/ignition/user.ign" May 17 00:12:05.566986 ignition[781]: no config at "/usr/lib/ignition/user.ign" May 17 00:12:05.567005 ignition[781]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 May 17 00:12:05.567801 ignition[781]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable May 17 00:12:05.581483 systemd-networkd[777]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 May 17 00:12:05.616495 systemd-networkd[777]: eth0: DHCPv4 address 142.132.181.146/32, gateway 172.31.1.1 acquired from 172.31.1.1 May 17 00:12:05.768713 ignition[781]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2 May 17 00:12:05.776207 ignition[781]: GET result: OK May 17 00:12:05.776410 ignition[781]: parsing config with SHA512: aec6a6da0c07a0d80638fe1053b230ba7cdee3a2dba3ebb64a3ebaca99228d0e0501024c5293d98c1a994bebcbf39786c636c3c49565b6f666986ebbb604e744 May 17 00:12:05.782508 unknown[781]: fetched base config from "system" May 17 00:12:05.782517 unknown[781]: fetched base config from "system" May 17 00:12:05.782942 ignition[781]: fetch: fetch complete May 17 00:12:05.782522 unknown[781]: fetched user config from "hetzner" May 17 00:12:05.782948 ignition[781]: fetch: fetch passed May 17 00:12:05.784757 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). May 17 00:12:05.783000 ignition[781]: Ignition finished successfully May 17 00:12:05.791567 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
May 17 00:12:05.810321 ignition[789]: Ignition 2.19.0 May 17 00:12:05.810333 ignition[789]: Stage: kargs May 17 00:12:05.810555 ignition[789]: no configs at "/usr/lib/ignition/base.d" May 17 00:12:05.810565 ignition[789]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" May 17 00:12:05.813688 ignition[789]: kargs: kargs passed May 17 00:12:05.814175 ignition[789]: Ignition finished successfully May 17 00:12:05.816630 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 17 00:12:05.824611 systemd[1]: Starting ignition-disks.service - Ignition (disks)... May 17 00:12:05.838127 ignition[796]: Ignition 2.19.0 May 17 00:12:05.838136 ignition[796]: Stage: disks May 17 00:12:05.838427 ignition[796]: no configs at "/usr/lib/ignition/base.d" May 17 00:12:05.838439 ignition[796]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" May 17 00:12:05.839505 ignition[796]: disks: disks passed May 17 00:12:05.842697 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 17 00:12:05.839566 ignition[796]: Ignition finished successfully May 17 00:12:05.844078 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 17 00:12:05.846112 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 17 00:12:05.846948 systemd[1]: Reached target local-fs.target - Local File Systems. May 17 00:12:05.848106 systemd[1]: Reached target sysinit.target - System Initialization. May 17 00:12:05.849125 systemd[1]: Reached target basic.target - Basic System. May 17 00:12:05.855553 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 17 00:12:05.872889 systemd-fsck[804]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks May 17 00:12:05.878047 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 17 00:12:05.885634 systemd[1]: Mounting sysroot.mount - /sysroot... May 17 00:12:05.948395 kernel: EXT4-fs (sda9): mounted filesystem 50a777b7-c00f-4923-84ce-1c186fc0fd3b r/w with ordered data mode. Quota mode: none. May 17 00:12:05.949707 systemd[1]: Mounted sysroot.mount - /sysroot. May 17 00:12:05.951539 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 17 00:12:05.961569 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 17 00:12:05.964986 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 17 00:12:05.977382 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (812) May 17 00:12:05.978769 kernel: BTRFS info (device sda6): first mount of filesystem 28a3b64b-9ec4-4fbe-928b-f7ea14288ccf May 17 00:12:05.978822 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm May 17 00:12:05.979380 kernel: BTRFS info (device sda6): using free space tree May 17 00:12:05.979598 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... May 17 00:12:05.983394 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 17 00:12:05.984341 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 17 00:12:05.988301 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. 
May 17 00:12:05.992929 kernel: BTRFS info (device sda6): enabling ssd optimizations May 17 00:12:05.992957 kernel: BTRFS info (device sda6): auto enabling async discard May 17 00:12:05.994138 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 17 00:12:06.003806 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 17 00:12:06.038598 coreos-metadata[814]: May 17 00:12:06.038 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 May 17 00:12:06.040491 coreos-metadata[814]: May 17 00:12:06.040 INFO Fetch successful May 17 00:12:06.042537 coreos-metadata[814]: May 17 00:12:06.042 INFO wrote hostname ci-4081-3-3-n-16326e39d6 to /sysroot/etc/hostname May 17 00:12:06.048821 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. May 17 00:12:06.059173 initrd-setup-root[840]: cut: /sysroot/etc/passwd: No such file or directory May 17 00:12:06.065417 initrd-setup-root[847]: cut: /sysroot/etc/group: No such file or directory May 17 00:12:06.070624 initrd-setup-root[854]: cut: /sysroot/etc/shadow: No such file or directory May 17 00:12:06.076390 initrd-setup-root[861]: cut: /sysroot/etc/gshadow: No such file or directory May 17 00:12:06.180542 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 17 00:12:06.185495 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 17 00:12:06.188595 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 17 00:12:06.198439 kernel: BTRFS info (device sda6): last unmount of filesystem 28a3b64b-9ec4-4fbe-928b-f7ea14288ccf May 17 00:12:06.224633 ignition[929]: INFO : Ignition 2.19.0 May 17 00:12:06.226003 ignition[929]: INFO : Stage: mount May 17 00:12:06.226003 ignition[929]: INFO : no configs at "/usr/lib/ignition/base.d" May 17 00:12:06.226003 ignition[929]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" May 17 00:12:06.228554 ignition[929]: INFO : mount: mount passed May 17 00:12:06.228554 ignition[929]: INFO : Ignition finished successfully May 17 00:12:06.230427 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 17 00:12:06.231407 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 17 00:12:06.245591 systemd[1]: Starting ignition-files.service - Ignition (files)... May 17 00:12:06.333725 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 17 00:12:06.341699 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 17 00:12:06.352414 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (941) May 17 00:12:06.354400 kernel: BTRFS info (device sda6): first mount of filesystem 28a3b64b-9ec4-4fbe-928b-f7ea14288ccf May 17 00:12:06.354466 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm May 17 00:12:06.354488 kernel: BTRFS info (device sda6): using free space tree May 17 00:12:06.358453 kernel: BTRFS info (device sda6): enabling ssd optimizations May 17 00:12:06.358526 kernel: BTRFS info (device sda6): auto enabling async discard May 17 00:12:06.360979 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
May 17 00:12:06.388397 ignition[958]: INFO : Ignition 2.19.0 May 17 00:12:06.390493 ignition[958]: INFO : Stage: files May 17 00:12:06.390493 ignition[958]: INFO : no configs at "/usr/lib/ignition/base.d" May 17 00:12:06.390493 ignition[958]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" May 17 00:12:06.392461 ignition[958]: DEBUG : files: compiled without relabeling support, skipping May 17 00:12:06.396483 ignition[958]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 17 00:12:06.397525 ignition[958]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 17 00:12:06.400798 ignition[958]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 17 00:12:06.402418 ignition[958]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 17 00:12:06.403432 ignition[958]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 17 00:12:06.402766 unknown[958]: wrote ssh authorized keys file for user: core May 17 00:12:06.405341 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" May 17 00:12:06.405341 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 May 17 00:12:06.517806 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 17 00:12:07.121652 systemd-networkd[777]: eth1: Gained IPv6LL May 17 00:12:07.318479 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" May 17 00:12:07.318479 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" May 17 00:12:07.322921 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" May 17 00:12:07.322921 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" May 17 00:12:07.322921 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" May 17 00:12:07.322921 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 17 00:12:07.322921 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 17 00:12:07.322921 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 17 00:12:07.322921 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 17 00:12:07.322921 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" May 17 00:12:07.322921 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 17 00:12:07.322921 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" May 17 00:12:07.322921 ignition[958]: INFO : files: 
createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" May 17 00:12:07.322921 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" May 17 00:12:07.322921 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1 May 17 00:12:07.505713 systemd-networkd[777]: eth0: Gained IPv6LL May 17 00:12:07.980463 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK May 17 00:12:08.222370 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" May 17 00:12:08.222370 ignition[958]: INFO : files: op(b): [started] processing unit "prepare-helm.service" May 17 00:12:08.225662 ignition[958]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 17 00:12:08.225662 ignition[958]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 17 00:12:08.225662 ignition[958]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" May 17 00:12:08.225662 ignition[958]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" May 17 00:12:08.225662 ignition[958]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" May 17 00:12:08.225662 ignition[958]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" May 17 00:12:08.225662 ignition[958]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" May 17 00:12:08.225662 ignition[958]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service" May 17 00:12:08.225662 ignition[958]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service" May 17 00:12:08.236832 ignition[958]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json" May 17 00:12:08.236832 ignition[958]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json" May 17 00:12:08.236832 ignition[958]: INFO : files: files passed May 17 00:12:08.236832 ignition[958]: INFO : Ignition finished successfully May 17 00:12:08.229680 systemd[1]: Finished ignition-files.service - Ignition (files). May 17 00:12:08.240166 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 17 00:12:08.243629 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 17 00:12:08.245732 systemd[1]: ignition-quench.service: Deactivated successfully. May 17 00:12:08.247447 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
May 17 00:12:08.260567 initrd-setup-root-after-ignition[986]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 17 00:12:08.260567 initrd-setup-root-after-ignition[986]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 17 00:12:08.263457 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 17 00:12:08.265940 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 17 00:12:08.266855 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 17 00:12:08.274625 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 17 00:12:08.311286 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 17 00:12:08.311451 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 17 00:12:08.313099 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 17 00:12:08.314036 systemd[1]: Reached target initrd.target - Initrd Default Target. May 17 00:12:08.315116 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 17 00:12:08.321669 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 17 00:12:08.342416 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 17 00:12:08.348632 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 17 00:12:08.362958 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 17 00:12:08.364479 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 17 00:12:08.365829 systemd[1]: Stopped target timers.target - Timer Units. May 17 00:12:08.366399 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 17 00:12:08.366525 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 17 00:12:08.368408 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 17 00:12:08.369025 systemd[1]: Stopped target basic.target - Basic System. May 17 00:12:08.370670 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 17 00:12:08.371798 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 17 00:12:08.374528 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 17 00:12:08.375614 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 17 00:12:08.377822 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 17 00:12:08.379879 systemd[1]: Stopped target sysinit.target - System Initialization. May 17 00:12:08.380757 systemd[1]: Stopped target local-fs.target - Local File Systems. May 17 00:12:08.381930 systemd[1]: Stopped target swap.target - Swaps. May 17 00:12:08.383128 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 17 00:12:08.383334 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 17 00:12:08.384653 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 17 00:12:08.385468 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 17 00:12:08.386872 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 17 00:12:08.386985 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
May 17 00:12:08.388337 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 17 00:12:08.388535 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 17 00:12:08.390783 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 17 00:12:08.390963 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 17 00:12:08.392137 systemd[1]: ignition-files.service: Deactivated successfully. May 17 00:12:08.392256 systemd[1]: Stopped ignition-files.service - Ignition (files). May 17 00:12:08.393010 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. May 17 00:12:08.393106 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. May 17 00:12:08.403700 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 17 00:12:08.404628 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 17 00:12:08.404855 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 17 00:12:08.409606 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 17 00:12:08.410611 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 17 00:12:08.410762 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 17 00:12:08.414427 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 17 00:12:08.414775 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 17 00:12:08.426066 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 17 00:12:08.426914 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 17 00:12:08.432689 ignition[1010]: INFO : Ignition 2.19.0 May 17 00:12:08.432689 ignition[1010]: INFO : Stage: umount May 17 00:12:08.432689 ignition[1010]: INFO : no configs at "/usr/lib/ignition/base.d" May 17 00:12:08.432689 ignition[1010]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" May 17 00:12:08.437744 ignition[1010]: INFO : umount: umount passed May 17 00:12:08.437744 ignition[1010]: INFO : Ignition finished successfully May 17 00:12:08.434591 systemd[1]: ignition-mount.service: Deactivated successfully. May 17 00:12:08.434716 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 17 00:12:08.435931 systemd[1]: ignition-disks.service: Deactivated successfully. May 17 00:12:08.435986 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 17 00:12:08.438298 systemd[1]: ignition-kargs.service: Deactivated successfully. May 17 00:12:08.438417 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 17 00:12:08.439108 systemd[1]: ignition-fetch.service: Deactivated successfully. May 17 00:12:08.439152 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). May 17 00:12:08.439986 systemd[1]: Stopped target network.target - Network. May 17 00:12:08.441105 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 17 00:12:08.441169 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 17 00:12:08.443771 systemd[1]: Stopped target paths.target - Path Units. May 17 00:12:08.444261 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 17 00:12:08.447478 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 17 00:12:08.448162 systemd[1]: Stopped target slices.target - Slice Units. 
May 17 00:12:08.449196 systemd[1]: Stopped target sockets.target - Socket Units. May 17 00:12:08.450128 systemd[1]: iscsid.socket: Deactivated successfully. May 17 00:12:08.450176 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 17 00:12:08.451230 systemd[1]: iscsiuio.socket: Deactivated successfully. May 17 00:12:08.451273 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 17 00:12:08.452174 systemd[1]: ignition-setup.service: Deactivated successfully. May 17 00:12:08.452259 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 17 00:12:08.452993 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 17 00:12:08.453032 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 17 00:12:08.454057 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 17 00:12:08.455114 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 17 00:12:08.457429 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 17 00:12:08.458007 systemd[1]: sysroot-boot.service: Deactivated successfully. May 17 00:12:08.458088 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 17 00:12:08.459353 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 17 00:12:08.459480 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 17 00:12:08.464480 systemd-networkd[777]: eth1: DHCPv6 lease lost May 17 00:12:08.467450 systemd-networkd[777]: eth0: DHCPv6 lease lost May 17 00:12:08.468455 systemd[1]: systemd-resolved.service: Deactivated successfully. May 17 00:12:08.469141 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 17 00:12:08.471777 systemd[1]: systemd-networkd.service: Deactivated successfully. May 17 00:12:08.472785 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 17 00:12:08.475789 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 17 00:12:08.475879 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 17 00:12:08.482538 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 17 00:12:08.483093 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 17 00:12:08.483169 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 17 00:12:08.485488 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 17 00:12:08.485553 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 17 00:12:08.486307 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 17 00:12:08.486381 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 17 00:12:08.488157 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 17 00:12:08.488207 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 17 00:12:08.489714 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 17 00:12:08.504196 systemd[1]: network-cleanup.service: Deactivated successfully. May 17 00:12:08.504400 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 17 00:12:08.517593 systemd[1]: systemd-udevd.service: Deactivated successfully. May 17 00:12:08.517815 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 17 00:12:08.519724 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. 
May 17 00:12:08.519774 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 17 00:12:08.521308 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 17 00:12:08.521347 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 17 00:12:08.522936 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 17 00:12:08.522990 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 17 00:12:08.524561 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 17 00:12:08.524611 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 17 00:12:08.526773 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 17 00:12:08.526841 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 17 00:12:08.538688 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 17 00:12:08.540608 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 17 00:12:08.540784 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 17 00:12:08.542504 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 17 00:12:08.542585 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 17 00:12:08.550320 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 17 00:12:08.550474 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 17 00:12:08.552202 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 17 00:12:08.557639 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 17 00:12:08.567902 systemd[1]: Switching root.
May 17 00:12:08.601292 systemd-journald[235]: Journal stopped
May 17 00:12:09.542107 systemd-journald[235]: Received SIGTERM from PID 1 (systemd).
May 17 00:12:09.542168 kernel: SELinux: policy capability network_peer_controls=1
May 17 00:12:09.542180 kernel: SELinux: policy capability open_perms=1
May 17 00:12:09.542190 kernel: SELinux: policy capability extended_socket_class=1
May 17 00:12:09.542215 kernel: SELinux: policy capability always_check_network=0
May 17 00:12:09.542231 kernel: SELinux: policy capability cgroup_seclabel=1
May 17 00:12:09.542241 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 17 00:12:09.542251 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 17 00:12:09.542263 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 17 00:12:09.542273 kernel: audit: type=1403 audit(1747440728.790:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 17 00:12:09.542288 systemd[1]: Successfully loaded SELinux policy in 36.355ms.
May 17 00:12:09.542308 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.285ms.
May 17 00:12:09.542320 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 17 00:12:09.542331 systemd[1]: Detected virtualization kvm.
May 17 00:12:09.542341 systemd[1]: Detected architecture arm64.
May 17 00:12:09.542352 systemd[1]: Detected first boot.
May 17 00:12:09.542724 systemd[1]: Hostname set to <ci-4081-3-3-n-16326e39d6>.
May 17 00:12:09.542739 systemd[1]: Initializing machine ID from VM UUID.
May 17 00:12:09.542750 zram_generator::config[1053]: No configuration found.
May 17 00:12:09.542765 systemd[1]: Populated /etc with preset unit settings.
May 17 00:12:09.542780 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 17 00:12:09.542791 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 17 00:12:09.542801 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 17 00:12:09.542813 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 17 00:12:09.542825 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 17 00:12:09.542836 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 17 00:12:09.542846 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 17 00:12:09.542857 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 17 00:12:09.542872 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 17 00:12:09.542887 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 17 00:12:09.542897 systemd[1]: Created slice user.slice - User and Session Slice.
May 17 00:12:09.542908 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 17 00:12:09.542918 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 17 00:12:09.542930 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 17 00:12:09.542941 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 17 00:12:09.542951 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 17 00:12:09.542962 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 17 00:12:09.542972 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
May 17 00:12:09.542983 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 17 00:12:09.542993 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 17 00:12:09.543007 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 17 00:12:09.543017 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 17 00:12:09.543028 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 17 00:12:09.543038 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 17 00:12:09.543049 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 17 00:12:09.543059 systemd[1]: Reached target slices.target - Slice Units.
May 17 00:12:09.543070 systemd[1]: Reached target swap.target - Swaps.
May 17 00:12:09.543080 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 17 00:12:09.543093 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 17 00:12:09.543103 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 17 00:12:09.543114 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 17 00:12:09.543125 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 17 00:12:09.543136 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 17 00:12:09.543146 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 17 00:12:09.543156 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 17 00:12:09.543167 systemd[1]: Mounting media.mount - External Media Directory...
May 17 00:12:09.543177 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 17 00:12:09.543189 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 17 00:12:09.543200 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 17 00:12:09.543253 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 17 00:12:09.543265 systemd[1]: Reached target machines.target - Containers.
May 17 00:12:09.543277 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 17 00:12:09.543288 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 17 00:12:09.543303 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 17 00:12:09.543318 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 17 00:12:09.543329 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 17 00:12:09.543339 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 17 00:12:09.543350 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 17 00:12:09.543380 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 17 00:12:09.543392 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 17 00:12:09.543404 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 17 00:12:09.543417 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 17 00:12:09.543428 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 17 00:12:09.543439 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 17 00:12:09.543450 systemd[1]: Stopped systemd-fsck-usr.service.
May 17 00:12:09.543460 systemd[1]: Starting systemd-journald.service - Journal Service...
May 17 00:12:09.543471 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 17 00:12:09.543482 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 17 00:12:09.543493 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 17 00:12:09.543503 kernel: loop: module loaded
May 17 00:12:09.543516 kernel: fuse: init (API version 7.39)
May 17 00:12:09.543526 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 17 00:12:09.543539 systemd[1]: verity-setup.service: Deactivated successfully.
May 17 00:12:09.543550 systemd[1]: Stopped verity-setup.service.
May 17 00:12:09.543560 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 17 00:12:09.543572 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 17 00:12:09.543583 systemd[1]: Mounted media.mount - External Media Directory.
May 17 00:12:09.543594 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 17 00:12:09.543604 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 17 00:12:09.543615 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 17 00:12:09.543653 systemd-journald[1123]: Collecting audit messages is disabled.
May 17 00:12:09.543676 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 17 00:12:09.543689 systemd-journald[1123]: Journal started
May 17 00:12:09.543712 systemd-journald[1123]: Runtime Journal (/run/log/journal/2e35d559ccb845539889e374fb97bafa) is 8.0M, max 76.6M, 68.6M free.
May 17 00:12:09.302979 systemd[1]: Queued start job for default target multi-user.target.
May 17 00:12:09.325022 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
May 17 00:12:09.325697 systemd[1]: systemd-journald.service: Deactivated successfully.
May 17 00:12:09.546432 systemd[1]: Started systemd-journald.service - Journal Service.
May 17 00:12:09.547807 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 17 00:12:09.547951 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 17 00:12:09.548857 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 17 00:12:09.550423 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 17 00:12:09.551289 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 17 00:12:09.551521 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 17 00:12:09.552868 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 17 00:12:09.553409 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 17 00:12:09.554746 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 17 00:12:09.554894 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 17 00:12:09.556984 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 17 00:12:09.557959 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 17 00:12:09.559697 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 17 00:12:09.579473 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 17 00:12:09.587400 kernel: ACPI: bus type drm_connector registered
May 17 00:12:09.588463 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 17 00:12:09.595563 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 17 00:12:09.598450 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 17 00:12:09.598495 systemd[1]: Reached target local-fs.target - Local File Systems.
May 17 00:12:09.600021 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
May 17 00:12:09.606577 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 17 00:12:09.611083 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 17 00:12:09.611814 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 17 00:12:09.617520 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 17 00:12:09.622658 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 17 00:12:09.624467 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 17 00:12:09.627690 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 17 00:12:09.629074 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 17 00:12:09.635579 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 17 00:12:09.644752 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 17 00:12:09.647086 systemd-journald[1123]: Time spent on flushing to /var/log/journal/2e35d559ccb845539889e374fb97bafa is 96.171ms for 1119 entries.
May 17 00:12:09.647086 systemd-journald[1123]: System Journal (/var/log/journal/2e35d559ccb845539889e374fb97bafa) is 8.0M, max 584.8M, 576.8M free.
May 17 00:12:09.759530 systemd-journald[1123]: Received client request to flush runtime journal.
May 17 00:12:09.759604 kernel: loop0: detected capacity change from 0 to 114328
May 17 00:12:09.759619 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 17 00:12:09.648837 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 17 00:12:09.649923 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 17 00:12:09.650134 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 17 00:12:09.652731 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 17 00:12:09.653704 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 17 00:12:09.655055 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 17 00:12:09.668171 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 17 00:12:09.682551 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 17 00:12:09.688698 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
May 17 00:12:09.703391 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 17 00:12:09.704138 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 17 00:12:09.712726 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
May 17 00:12:09.729948 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 17 00:12:09.766890 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 17 00:12:09.771731 udevadm[1174]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
May 17 00:12:09.785457 kernel: loop1: detected capacity change from 0 to 8
May 17 00:12:09.788178 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 17 00:12:09.791568 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
May 17 00:12:09.799750 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 17 00:12:09.806627 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 17 00:12:09.815401 kernel: loop2: detected capacity change from 0 to 207008
May 17 00:12:09.839650 systemd-tmpfiles[1187]: ACLs are not supported, ignoring.
May 17 00:12:09.841938 systemd-tmpfiles[1187]: ACLs are not supported, ignoring.
May 17 00:12:09.851590 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 17 00:12:09.872413 kernel: loop3: detected capacity change from 0 to 114432
May 17 00:12:09.902382 kernel: loop4: detected capacity change from 0 to 114328
May 17 00:12:09.927382 kernel: loop5: detected capacity change from 0 to 8
May 17 00:12:09.927487 kernel: loop6: detected capacity change from 0 to 207008
May 17 00:12:09.958391 kernel: loop7: detected capacity change from 0 to 114432
May 17 00:12:09.979570 (sd-merge)[1192]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'.
May 17 00:12:09.980044 (sd-merge)[1192]: Merged extensions into '/usr'.
May 17 00:12:09.988667 systemd[1]: Reloading requested from client PID 1165 ('systemd-sysext') (unit systemd-sysext.service)...
May 17 00:12:09.989044 systemd[1]: Reloading...
May 17 00:12:10.120128 zram_generator::config[1218]: No configuration found.
May 17 00:12:10.181501 ldconfig[1160]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 17 00:12:10.249965 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 17 00:12:10.297141 systemd[1]: Reloading finished in 307 ms.
May 17 00:12:10.333757 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 17 00:12:10.337435 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 17 00:12:10.351646 systemd[1]: Starting ensure-sysext.service...
May 17 00:12:10.355695 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 17 00:12:10.368073 systemd[1]: Reloading requested from client PID 1256 ('systemctl') (unit ensure-sysext.service)...
May 17 00:12:10.368096 systemd[1]: Reloading...
May 17 00:12:10.392138 systemd-tmpfiles[1257]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 17 00:12:10.393007 systemd-tmpfiles[1257]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 17 00:12:10.393967 systemd-tmpfiles[1257]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 17 00:12:10.394534 systemd-tmpfiles[1257]: ACLs are not supported, ignoring.
May 17 00:12:10.394703 systemd-tmpfiles[1257]: ACLs are not supported, ignoring.
May 17 00:12:10.398539 systemd-tmpfiles[1257]: Detected autofs mount point /boot during canonicalization of boot.
May 17 00:12:10.398664 systemd-tmpfiles[1257]: Skipping /boot
May 17 00:12:10.410487 systemd-tmpfiles[1257]: Detected autofs mount point /boot during canonicalization of boot.
May 17 00:12:10.410503 systemd-tmpfiles[1257]: Skipping /boot
May 17 00:12:10.445447 zram_generator::config[1282]: No configuration found.
May 17 00:12:10.559032 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 17 00:12:10.605985 systemd[1]: Reloading finished in 237 ms.
May 17 00:12:10.626148 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 17 00:12:10.632304 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 17 00:12:10.646875 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
May 17 00:12:10.652777 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 17 00:12:10.658746 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 17 00:12:10.663963 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 17 00:12:10.670679 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 17 00:12:10.685719 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 17 00:12:10.690133 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 17 00:12:10.693332 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 17 00:12:10.699242 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 17 00:12:10.705130 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 17 00:12:10.706536 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 17 00:12:10.708533 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 17 00:12:10.715625 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 17 00:12:10.715803 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 17 00:12:10.725265 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 17 00:12:10.729799 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 17 00:12:10.740305 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 17 00:12:10.748885 systemd-udevd[1331]: Using default interface naming scheme 'v255'.
May 17 00:12:10.749784 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 17 00:12:10.751039 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 17 00:12:10.752081 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 17 00:12:10.754803 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 17 00:12:10.755458 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 17 00:12:10.761221 systemd[1]: Finished ensure-sysext.service.
May 17 00:12:10.765135 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 17 00:12:10.765783 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 17 00:12:10.767447 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 17 00:12:10.767600 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 17 00:12:10.772626 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 17 00:12:10.780401 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 17 00:12:10.780482 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 17 00:12:10.786718 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
May 17 00:12:10.789573 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 17 00:12:10.789749 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 17 00:12:10.800266 augenrules[1359]: No rules
May 17 00:12:10.802634 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
May 17 00:12:10.810732 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 17 00:12:10.818695 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 17 00:12:10.825725 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 17 00:12:10.831531 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 17 00:12:10.834035 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 17 00:12:10.915965 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
May 17 00:12:10.962266 systemd-networkd[1369]: lo: Link UP
May 17 00:12:10.963805 systemd-networkd[1369]: lo: Gained carrier
May 17 00:12:10.964576 systemd-networkd[1369]: Enumeration completed
May 17 00:12:10.964724 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 17 00:12:10.977821 systemd-resolved[1328]: Positive Trust Anchors:
May 17 00:12:10.977841 systemd-resolved[1328]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 17 00:12:10.977873 systemd-resolved[1328]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 17 00:12:10.979920 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 17 00:12:10.980702 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
May 17 00:12:10.981561 systemd[1]: Reached target time-set.target - System Time Set.
May 17 00:12:10.986773 systemd-resolved[1328]: Using system hostname 'ci-4081-3-3-n-16326e39d6'.
May 17 00:12:10.988976 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 17 00:12:10.989967 systemd[1]: Reached target network.target - Network.
May 17 00:12:10.990620 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 17 00:12:11.045771 systemd-networkd[1369]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 17 00:12:11.045781 systemd-networkd[1369]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
May 17 00:12:11.046590 systemd-networkd[1369]: eth1: Link UP
May 17 00:12:11.046594 systemd-networkd[1369]: eth1: Gained carrier
May 17 00:12:11.046612 systemd-networkd[1369]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 17 00:12:11.061578 systemd-networkd[1369]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 17 00:12:11.062110 systemd-networkd[1369]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 17 00:12:11.064962 systemd-networkd[1369]: eth0: Link UP
May 17 00:12:11.064971 systemd-networkd[1369]: eth0: Gained carrier
May 17 00:12:11.065096 systemd-networkd[1369]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 17 00:12:11.082454 systemd-networkd[1369]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1
May 17 00:12:11.085785 systemd-timesyncd[1356]: Network configuration changed, trying to establish connection.
May 17 00:12:11.090401 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 43 scanned by (udev-worker) (1374)
May 17 00:12:11.112814 kernel: mousedev: PS/2 mouse device common for all mice
May 17 00:12:11.122672 systemd-networkd[1369]: eth0: DHCPv4 address 142.132.181.146/32, gateway 172.31.1.1 acquired from 172.31.1.1
May 17 00:12:11.123256 systemd-timesyncd[1356]: Network configuration changed, trying to establish connection.
May 17 00:12:11.124708 systemd-timesyncd[1356]: Network configuration changed, trying to establish connection.
May 17 00:12:11.157426 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped.
May 17 00:12:11.159763 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 17 00:12:11.165574 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 17 00:12:11.168869 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 17 00:12:11.173348 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 17 00:12:11.175502 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 17 00:12:11.175542 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 17 00:12:11.180844 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 17 00:12:11.182395 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 17 00:12:11.184469 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 17 00:12:11.184631 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 17 00:12:11.188240 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 17 00:12:11.199851 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 17 00:12:11.201549 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 17 00:12:11.205281 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 17 00:12:11.217666 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
May 17 00:12:11.225792 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 17 00:12:11.246222 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0 May 17 00:12:11.246317 kernel: [drm] features: -virgl +edid -resource_blob -host_visible May 17 00:12:11.246332 kernel: [drm] features: -context_init May 17 00:12:11.248394 kernel: [drm] number of scanouts: 1 May 17 00:12:11.248494 kernel: [drm] number of cap sets: 0 May 17 00:12:11.248507 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0 May 17 00:12:11.256791 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 17 00:12:11.258535 kernel: Console: switching to colour frame buffer device 160x50 May 17 00:12:11.266482 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device May 17 00:12:11.268671 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 17 00:12:11.279148 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 17 00:12:11.279647 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 17 00:12:11.289708 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 17 00:12:11.369352 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 17 00:12:11.408014 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. May 17 00:12:11.417793 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... May 17 00:12:11.431421 lvm[1437]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 17 00:12:11.457513 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. May 17 00:12:11.459239 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 17 00:12:11.460086 systemd[1]: Reached target sysinit.target - System Initialization. May 17 00:12:11.461161 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 17 00:12:11.462031 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 17 00:12:11.463047 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 17 00:12:11.463874 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 17 00:12:11.464763 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 17 00:12:11.465444 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 17 00:12:11.465476 systemd[1]: Reached target paths.target - Path Units. May 17 00:12:11.465954 systemd[1]: Reached target timers.target - Timer Units. May 17 00:12:11.468000 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 17 00:12:11.470256 systemd[1]: Starting docker.socket - Docker Socket for the API... May 17 00:12:11.477182 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 17 00:12:11.480798 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... May 17 00:12:11.483784 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 17 00:12:11.486127 systemd[1]: Reached target sockets.target - Socket Units. May 17 00:12:11.487830 systemd[1]: Reached target basic.target - Basic System. 
May 17 00:12:11.488671 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 17 00:12:11.489813 lvm[1441]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 17 00:12:11.488823 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 17 00:12:11.494614 systemd[1]: Starting containerd.service - containerd container runtime... May 17 00:12:11.500127 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... May 17 00:12:11.502610 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 17 00:12:11.515746 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 17 00:12:11.519777 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 17 00:12:11.521549 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 17 00:12:11.528517 coreos-metadata[1443]: May 17 00:12:11.527 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 May 17 00:12:11.530656 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 17 00:12:11.533166 coreos-metadata[1443]: May 17 00:12:11.530 INFO Fetch successful May 17 00:12:11.533166 coreos-metadata[1443]: May 17 00:12:11.533 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 May 17 00:12:11.533351 jq[1445]: false May 17 00:12:11.533498 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 17 00:12:11.535270 coreos-metadata[1443]: May 17 00:12:11.533 INFO Fetch successful May 17 00:12:11.539585 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. May 17 00:12:11.545198 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 17 00:12:11.550594 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 17 00:12:11.555784 systemd[1]: Starting systemd-logind.service - User Login Management... May 17 00:12:11.558241 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 17 00:12:11.558763 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 17 00:12:11.563087 systemd[1]: Starting update-engine.service - Update Engine... May 17 00:12:11.566643 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 17 00:12:11.570285 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. May 17 00:12:11.592806 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 17 00:12:11.592998 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 17 00:12:11.594835 systemd[1]: motdgen.service: Deactivated successfully. May 17 00:12:11.595740 dbus-daemon[1444]: [system] SELinux support is enabled May 17 00:12:11.596409 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 17 00:12:11.597154 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 17 00:12:11.602594 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
May 17 00:12:11.603880 extend-filesystems[1448]: Found loop4 May 17 00:12:11.604411 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 17 00:12:11.605331 extend-filesystems[1448]: Found loop5 May 17 00:12:11.605331 extend-filesystems[1448]: Found loop6 May 17 00:12:11.605331 extend-filesystems[1448]: Found loop7 May 17 00:12:11.605331 extend-filesystems[1448]: Found sda May 17 00:12:11.605331 extend-filesystems[1448]: Found sda1 May 17 00:12:11.605331 extend-filesystems[1448]: Found sda2 May 17 00:12:11.605331 extend-filesystems[1448]: Found sda3 May 17 00:12:11.605331 extend-filesystems[1448]: Found usr May 17 00:12:11.605331 extend-filesystems[1448]: Found sda4 May 17 00:12:11.605331 extend-filesystems[1448]: Found sda6 May 17 00:12:11.605331 extend-filesystems[1448]: Found sda7 May 17 00:12:11.605331 extend-filesystems[1448]: Found sda9 May 17 00:12:11.605331 extend-filesystems[1448]: Checking size of /dev/sda9 May 17 00:12:11.624146 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 17 00:12:11.624262 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 17 00:12:11.630458 jq[1462]: true May 17 00:12:11.631576 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 17 00:12:11.631689 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 17 00:12:11.648646 extend-filesystems[1448]: Resized partition /dev/sda9 May 17 00:12:11.651818 (ntainerd)[1479]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 17 00:12:11.661247 extend-filesystems[1484]: resize2fs 1.47.1 (20-May-2024) May 17 00:12:11.675491 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks May 17 00:12:11.681078 tar[1467]: linux-arm64/LICENSE May 17 00:12:11.687088 tar[1467]: linux-arm64/helm May 17 00:12:11.693872 update_engine[1458]: I20250517 00:12:11.691910 1458 main.cc:92] Flatcar Update Engine starting May 17 00:12:11.707086 jq[1485]: true May 17 00:12:11.711828 update_engine[1458]: I20250517 00:12:11.709993 1458 update_check_scheduler.cc:74] Next update check in 4m25s May 17 00:12:11.714814 systemd[1]: Started update-engine.service - Update Engine. May 17 00:12:11.722243 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 17 00:12:11.741054 systemd-logind[1457]: New seat seat0. May 17 00:12:11.742054 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. May 17 00:12:11.742759 systemd-logind[1457]: Watching system buttons on /dev/input/event0 (Power Button) May 17 00:12:11.742775 systemd-logind[1457]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard) May 17 00:12:11.742911 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 17 00:12:11.743455 systemd[1]: Started systemd-logind.service - User Login Management. May 17 00:12:11.829316 bash[1514]: Updated "/home/core/.ssh/authorized_keys" May 17 00:12:11.830958 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. 
May 17 00:12:11.841102 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 43 scanned by (udev-worker) (1385) May 17 00:12:11.872506 kernel: EXT4-fs (sda9): resized filesystem to 9393147 May 17 00:12:11.861152 systemd[1]: Starting sshkeys.service... May 17 00:12:11.882725 extend-filesystems[1484]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required May 17 00:12:11.882725 extend-filesystems[1484]: old_desc_blocks = 1, new_desc_blocks = 5 May 17 00:12:11.882725 extend-filesystems[1484]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. May 17 00:12:11.895597 extend-filesystems[1448]: Resized filesystem in /dev/sda9 May 17 00:12:11.895597 extend-filesystems[1448]: Found sr0 May 17 00:12:11.886752 systemd[1]: extend-filesystems.service: Deactivated successfully. May 17 00:12:11.886929 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 17 00:12:11.917847 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. May 17 00:12:11.924737 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... May 17 00:12:11.999598 coreos-metadata[1523]: May 17 00:12:11.999 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 May 17 00:12:12.003038 coreos-metadata[1523]: May 17 00:12:12.002 INFO Fetch successful May 17 00:12:12.006688 unknown[1523]: wrote ssh authorized keys file for user: core May 17 00:12:12.050653 update-ssh-keys[1533]: Updated "/home/core/.ssh/authorized_keys" May 17 00:12:12.051726 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). May 17 00:12:12.057462 systemd[1]: Finished sshkeys.service. May 17 00:12:12.064389 containerd[1479]: time="2025-05-17T00:12:12.062793880Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 May 17 00:12:12.083529 locksmithd[1498]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 17 00:12:12.118050 containerd[1479]: time="2025-05-17T00:12:12.117993880Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 17 00:12:12.122083 containerd[1479]: time="2025-05-17T00:12:12.122022920Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.90-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 17 00:12:12.122083 containerd[1479]: time="2025-05-17T00:12:12.122071120Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 17 00:12:12.122083 containerd[1479]: time="2025-05-17T00:12:12.122089720Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 17 00:12:12.122726 containerd[1479]: time="2025-05-17T00:12:12.122331080Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 May 17 00:12:12.122726 containerd[1479]: time="2025-05-17T00:12:12.122384520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 May 17 00:12:12.122726 containerd[1479]: time="2025-05-17T00:12:12.122461200Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 May 17 00:12:12.122726 containerd[1479]: time="2025-05-17T00:12:12.122474360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 17 00:12:12.122726 containerd[1479]: time="2025-05-17T00:12:12.122661400Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 17 00:12:12.122726 containerd[1479]: time="2025-05-17T00:12:12.122678080Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 17 00:12:12.122726 containerd[1479]: time="2025-05-17T00:12:12.122691440Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 May 17 00:12:12.122726 containerd[1479]: time="2025-05-17T00:12:12.122702040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 17 00:12:12.122895 containerd[1479]: time="2025-05-17T00:12:12.122778400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 17 00:12:12.123013 containerd[1479]: time="2025-05-17T00:12:12.122979920Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 17 00:12:12.123130 containerd[1479]: time="2025-05-17T00:12:12.123108080Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 17 00:12:12.123130 containerd[1479]: time="2025-05-17T00:12:12.123126560Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 17 00:12:12.123256 containerd[1479]: time="2025-05-17T00:12:12.123233720Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 17 00:12:12.123309 containerd[1479]: time="2025-05-17T00:12:12.123292680Z" level=info msg="metadata content store policy set" policy=shared May 17 00:12:12.130507 containerd[1479]: time="2025-05-17T00:12:12.130284800Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 17 00:12:12.130507 containerd[1479]: time="2025-05-17T00:12:12.130383440Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 17 00:12:12.130507 containerd[1479]: time="2025-05-17T00:12:12.130402080Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 May 17 00:12:12.130507 containerd[1479]: time="2025-05-17T00:12:12.130418040Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 May 17 00:12:12.131603 containerd[1479]: time="2025-05-17T00:12:12.131445360Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 17 00:12:12.131669 containerd[1479]: time="2025-05-17T00:12:12.131648240Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." 
type=io.containerd.monitor.v1 May 17 00:12:12.132129 containerd[1479]: time="2025-05-17T00:12:12.131963400Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 17 00:12:12.132129 containerd[1479]: time="2025-05-17T00:12:12.132118240Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 May 17 00:12:12.132267 containerd[1479]: time="2025-05-17T00:12:12.132137200Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 May 17 00:12:12.132267 containerd[1479]: time="2025-05-17T00:12:12.132151080Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 May 17 00:12:12.132267 containerd[1479]: time="2025-05-17T00:12:12.132164440Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 17 00:12:12.132267 containerd[1479]: time="2025-05-17T00:12:12.132178560Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 17 00:12:12.132267 containerd[1479]: time="2025-05-17T00:12:12.132240400Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 17 00:12:12.132267 containerd[1479]: time="2025-05-17T00:12:12.132257720Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 17 00:12:12.132396 containerd[1479]: time="2025-05-17T00:12:12.132274840Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 17 00:12:12.132396 containerd[1479]: time="2025-05-17T00:12:12.132288400Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 17 00:12:12.132396 containerd[1479]: time="2025-05-17T00:12:12.132302080Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 17 00:12:12.132396 containerd[1479]: time="2025-05-17T00:12:12.132313880Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 17 00:12:12.132396 containerd[1479]: time="2025-05-17T00:12:12.132334800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 17 00:12:12.135429 containerd[1479]: time="2025-05-17T00:12:12.132349800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 17 00:12:12.135429 containerd[1479]: time="2025-05-17T00:12:12.134396120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 17 00:12:12.135429 containerd[1479]: time="2025-05-17T00:12:12.134428040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 17 00:12:12.135429 containerd[1479]: time="2025-05-17T00:12:12.134441560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 17 00:12:12.135429 containerd[1479]: time="2025-05-17T00:12:12.134455880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 17 00:12:12.135429 containerd[1479]: time="2025-05-17T00:12:12.134469040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." 
type=io.containerd.grpc.v1 May 17 00:12:12.135429 containerd[1479]: time="2025-05-17T00:12:12.134482800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 17 00:12:12.135429 containerd[1479]: time="2025-05-17T00:12:12.134497560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 May 17 00:12:12.135429 containerd[1479]: time="2025-05-17T00:12:12.134515480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 May 17 00:12:12.135429 containerd[1479]: time="2025-05-17T00:12:12.134527400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 17 00:12:12.135429 containerd[1479]: time="2025-05-17T00:12:12.134540840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 May 17 00:12:12.135429 containerd[1479]: time="2025-05-17T00:12:12.134554120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 17 00:12:12.135429 containerd[1479]: time="2025-05-17T00:12:12.134569840Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 May 17 00:12:12.135429 containerd[1479]: time="2025-05-17T00:12:12.134596320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 May 17 00:12:12.135429 containerd[1479]: time="2025-05-17T00:12:12.134612720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 17 00:12:12.135778 containerd[1479]: time="2025-05-17T00:12:12.134624840Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 17 00:12:12.135778 containerd[1479]: time="2025-05-17T00:12:12.135549000Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 17 00:12:12.135778 containerd[1479]: time="2025-05-17T00:12:12.135740720Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 May 17 00:12:12.135778 containerd[1479]: time="2025-05-17T00:12:12.135755160Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 17 00:12:12.135778 containerd[1479]: time="2025-05-17T00:12:12.135768880Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 May 17 00:12:12.135864 containerd[1479]: time="2025-05-17T00:12:12.135779680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 17 00:12:12.135864 containerd[1479]: time="2025-05-17T00:12:12.135796360Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 May 17 00:12:12.135864 containerd[1479]: time="2025-05-17T00:12:12.135816120Z" level=info msg="NRI interface is disabled by configuration." May 17 00:12:12.135864 containerd[1479]: time="2025-05-17T00:12:12.135830360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 May 17 00:12:12.136301 containerd[1479]: time="2025-05-17T00:12:12.136226000Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 17 00:12:12.136461 containerd[1479]: time="2025-05-17T00:12:12.136300280Z" level=info msg="Connect containerd service" May 17 00:12:12.136461 containerd[1479]: time="2025-05-17T00:12:12.136339400Z" level=info msg="using legacy CRI server" May 17 00:12:12.136461 containerd[1479]: time="2025-05-17T00:12:12.136346400Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 17 00:12:12.138577 containerd[1479]: time="2025-05-17T00:12:12.138467880Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 17 00:12:12.139650 containerd[1479]: time="2025-05-17T00:12:12.139347000Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 17 00:12:12.139650 
containerd[1479]: time="2025-05-17T00:12:12.139581800Z" level=info msg="Start subscribing containerd event" May 17 00:12:12.139650 containerd[1479]: time="2025-05-17T00:12:12.139636080Z" level=info msg="Start recovering state" May 17 00:12:12.139735 containerd[1479]: time="2025-05-17T00:12:12.139709920Z" level=info msg="Start event monitor" May 17 00:12:12.139735 containerd[1479]: time="2025-05-17T00:12:12.139724160Z" level=info msg="Start snapshots syncer" May 17 00:12:12.139735 containerd[1479]: time="2025-05-17T00:12:12.139732840Z" level=info msg="Start cni network conf syncer for default" May 17 00:12:12.139805 containerd[1479]: time="2025-05-17T00:12:12.139740160Z" level=info msg="Start streaming server" May 17 00:12:12.146765 containerd[1479]: time="2025-05-17T00:12:12.145799120Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 17 00:12:12.146765 containerd[1479]: time="2025-05-17T00:12:12.145880240Z" level=info msg=serving... address=/run/containerd/containerd.sock May 17 00:12:12.146765 containerd[1479]: time="2025-05-17T00:12:12.145947760Z" level=info msg="containerd successfully booted in 0.093301s" May 17 00:12:12.146108 systemd[1]: Started containerd.service - containerd container runtime. May 17 00:12:12.242528 systemd-networkd[1369]: eth0: Gained IPv6LL May 17 00:12:12.243823 systemd-timesyncd[1356]: Network configuration changed, trying to establish connection. May 17 00:12:12.247933 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 17 00:12:12.250012 systemd[1]: Reached target network-online.target - Network is Online. May 17 00:12:12.259657 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:12:12.267766 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 17 00:12:12.314612 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 17 00:12:12.673840 tar[1467]: linux-arm64/README.md May 17 00:12:12.695942 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 17 00:12:12.807243 sshd_keygen[1491]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 17 00:12:12.830408 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 17 00:12:12.841648 systemd[1]: Starting issuegen.service - Generate /run/issue... May 17 00:12:12.849635 systemd[1]: issuegen.service: Deactivated successfully. May 17 00:12:12.849895 systemd[1]: Finished issuegen.service - Generate /run/issue. May 17 00:12:12.858214 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 17 00:12:12.868873 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 17 00:12:12.881277 systemd[1]: Started getty@tty1.service - Getty on tty1. May 17 00:12:12.884379 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. May 17 00:12:12.885532 systemd[1]: Reached target getty.target - Login Prompts. May 17 00:12:13.073677 systemd-networkd[1369]: eth1: Gained IPv6LL May 17 00:12:13.074461 systemd-timesyncd[1356]: Network configuration changed, trying to establish connection. May 17 00:12:13.153701 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:12:13.156136 systemd[1]: Reached target multi-user.target - Multi-User System. May 17 00:12:13.157414 systemd[1]: Startup finished in 828ms (kernel) + 6.083s (initrd) + 4.403s (userspace) = 11.315s. 
May 17 00:12:13.157964 (kubelet)[1574]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 00:12:13.665983 kubelet[1574]: E0517 00:12:13.665886 1574 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:12:13.669901 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:12:13.670136 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:12:23.755694 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 17 00:12:23.762718 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:12:23.880537 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:12:23.899220 (kubelet)[1592]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 00:12:23.950750 kubelet[1592]: E0517 00:12:23.950679 1592 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:12:23.955963 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:12:23.956136 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:12:34.005284 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 17 00:12:34.022670 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:12:34.153610 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:12:34.156742 (kubelet)[1607]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 00:12:34.194649 kubelet[1607]: E0517 00:12:34.194598 1607 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:12:34.198137 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:12:34.198303 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:12:43.735135 systemd-resolved[1328]: Clock change detected. Flushing caches. May 17 00:12:43.735363 systemd-timesyncd[1356]: Contacted time server 168.119.211.223:123 (2.flatcar.pool.ntp.org). May 17 00:12:43.735447 systemd-timesyncd[1356]: Initial clock synchronization to Sat 2025-05-17 00:12:43.735089 UTC. May 17 00:12:44.731874 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. May 17 00:12:44.741759 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:12:44.877736 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 17 00:12:44.879733 (kubelet)[1621]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 00:12:44.928264 kubelet[1621]: E0517 00:12:44.928176 1621 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:12:44.931382 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:12:44.931591 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:12:54.982019 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. May 17 00:12:54.987810 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:12:55.174776 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:12:55.180478 (kubelet)[1636]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 00:12:55.236017 kubelet[1636]: E0517 00:12:55.235595 1636 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:12:55.239034 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:12:55.239207 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:12:57.355606 update_engine[1458]: I20250517 00:12:57.354997 1458 update_attempter.cc:509] Updating boot flags... May 17 00:12:57.406465 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 43 scanned by (udev-worker) (1652) May 17 00:12:57.472684 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 43 scanned by (udev-worker) (1653) May 17 00:13:05.481513 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. May 17 00:13:05.490798 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:13:05.631920 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:13:05.633668 (kubelet)[1669]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 00:13:05.678067 kubelet[1669]: E0517 00:13:05.677913 1669 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:13:05.680874 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:13:05.681057 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:13:15.731607 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. May 17 00:13:15.737895 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:13:15.868769 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 17 00:13:15.881088 (kubelet)[1684]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 00:13:15.931210 kubelet[1684]: E0517 00:13:15.931125 1684 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:13:15.934308 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:13:15.935025 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:13:25.981925 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. May 17 00:13:25.994979 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:13:26.144171 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:13:26.149954 (kubelet)[1699]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 00:13:26.200229 kubelet[1699]: E0517 00:13:26.200160 1699 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:13:26.203013 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:13:26.203181 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:13:36.231613 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. May 17 00:13:36.238815 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:13:36.374699 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:13:36.381148 (kubelet)[1715]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 00:13:36.428076 kubelet[1715]: E0517 00:13:36.428028 1715 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:13:36.432698 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:13:36.433023 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:13:46.481814 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. May 17 00:13:46.488641 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:13:46.626472 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 17 00:13:46.638031 (kubelet)[1730]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 00:13:46.687525 kubelet[1730]: E0517 00:13:46.687441 1730 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:13:46.691276 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:13:46.691456 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:13:51.779959 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 17 00:13:51.787966 systemd[1]: Started sshd@0-142.132.181.146:22-139.178.68.195:43408.service - OpenSSH per-connection server daemon (139.178.68.195:43408). May 17 00:13:52.791578 sshd[1738]: Accepted publickey for core from 139.178.68.195 port 43408 ssh2: RSA SHA256:3DH0lUPdHwyQJn3I0ENA7R+xFRfXGfTtJFJ5l1PYReI May 17 00:13:52.794701 sshd[1738]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:13:52.803665 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 17 00:13:52.812937 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 17 00:13:52.817581 systemd-logind[1457]: New session 1 of user core. May 17 00:13:52.829131 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 17 00:13:52.839956 systemd[1]: Starting user@500.service - User Manager for UID 500... May 17 00:13:52.843538 (systemd)[1742]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 17 00:13:52.959878 systemd[1742]: Queued start job for default target default.target. May 17 00:13:52.970590 systemd[1742]: Created slice app.slice - User Application Slice. May 17 00:13:52.970719 systemd[1742]: Reached target paths.target - Paths. May 17 00:13:52.970753 systemd[1742]: Reached target timers.target - Timers. May 17 00:13:52.972876 systemd[1742]: Starting dbus.socket - D-Bus User Message Bus Socket... May 17 00:13:52.988263 systemd[1742]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 17 00:13:52.988561 systemd[1742]: Reached target sockets.target - Sockets. May 17 00:13:52.988593 systemd[1742]: Reached target basic.target - Basic System. May 17 00:13:52.988710 systemd[1742]: Reached target default.target - Main User Target. May 17 00:13:52.988766 systemd[1742]: Startup finished in 138ms. May 17 00:13:52.989675 systemd[1]: Started user@500.service - User Manager for UID 500. May 17 00:13:52.997771 systemd[1]: Started session-1.scope - Session 1 of User core. May 17 00:13:53.705847 systemd[1]: Started sshd@1-142.132.181.146:22-139.178.68.195:43414.service - OpenSSH per-connection server daemon (139.178.68.195:43414). May 17 00:13:54.690503 sshd[1753]: Accepted publickey for core from 139.178.68.195 port 43414 ssh2: RSA SHA256:3DH0lUPdHwyQJn3I0ENA7R+xFRfXGfTtJFJ5l1PYReI May 17 00:13:54.692310 sshd[1753]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:13:54.699016 systemd-logind[1457]: New session 2 of user core. May 17 00:13:54.708775 systemd[1]: Started session-2.scope - Session 2 of User core. 
May 17 00:13:55.381247 sshd[1753]: pam_unix(sshd:session): session closed for user core May 17 00:13:55.385570 systemd-logind[1457]: Session 2 logged out. Waiting for processes to exit. May 17 00:13:55.387670 systemd[1]: sshd@1-142.132.181.146:22-139.178.68.195:43414.service: Deactivated successfully. May 17 00:13:55.389968 systemd[1]: session-2.scope: Deactivated successfully. May 17 00:13:55.392372 systemd-logind[1457]: Removed session 2. May 17 00:13:55.561986 systemd[1]: Started sshd@2-142.132.181.146:22-139.178.68.195:49424.service - OpenSSH per-connection server daemon (139.178.68.195:49424). May 17 00:13:56.544961 sshd[1760]: Accepted publickey for core from 139.178.68.195 port 49424 ssh2: RSA SHA256:3DH0lUPdHwyQJn3I0ENA7R+xFRfXGfTtJFJ5l1PYReI May 17 00:13:56.547715 sshd[1760]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:13:56.556873 systemd-logind[1457]: New session 3 of user core. May 17 00:13:56.559727 systemd[1]: Started session-3.scope - Session 3 of User core. May 17 00:13:56.731650 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10. May 17 00:13:56.738905 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:13:56.888850 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:13:56.890501 (kubelet)[1771]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 00:13:56.941862 kubelet[1771]: E0517 00:13:56.941767 1771 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:13:56.945169 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:13:56.945339 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:13:57.226495 sshd[1760]: pam_unix(sshd:session): session closed for user core May 17 00:13:57.231683 systemd-logind[1457]: Session 3 logged out. Waiting for processes to exit. May 17 00:13:57.232418 systemd[1]: sshd@2-142.132.181.146:22-139.178.68.195:49424.service: Deactivated successfully. May 17 00:13:57.235984 systemd[1]: session-3.scope: Deactivated successfully. May 17 00:13:57.237914 systemd-logind[1457]: Removed session 3. May 17 00:13:57.403967 systemd[1]: Started sshd@3-142.132.181.146:22-139.178.68.195:49432.service - OpenSSH per-connection server daemon (139.178.68.195:49432). May 17 00:13:58.387154 sshd[1782]: Accepted publickey for core from 139.178.68.195 port 49432 ssh2: RSA SHA256:3DH0lUPdHwyQJn3I0ENA7R+xFRfXGfTtJFJ5l1PYReI May 17 00:13:58.389512 sshd[1782]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:13:58.395518 systemd-logind[1457]: New session 4 of user core. May 17 00:13:58.402738 systemd[1]: Started session-4.scope - Session 4 of User core. May 17 00:13:59.072895 sshd[1782]: pam_unix(sshd:session): session closed for user core May 17 00:13:59.079820 systemd-logind[1457]: Session 4 logged out. Waiting for processes to exit. May 17 00:13:59.081552 systemd[1]: sshd@3-142.132.181.146:22-139.178.68.195:49432.service: Deactivated successfully. May 17 00:13:59.083892 systemd[1]: session-4.scope: Deactivated successfully. May 17 00:13:59.086465 systemd-logind[1457]: Removed session 4. 
May 17 00:13:59.257835 systemd[1]: Started sshd@4-142.132.181.146:22-139.178.68.195:49440.service - OpenSSH per-connection server daemon (139.178.68.195:49440). May 17 00:14:00.258228 sshd[1789]: Accepted publickey for core from 139.178.68.195 port 49440 ssh2: RSA SHA256:3DH0lUPdHwyQJn3I0ENA7R+xFRfXGfTtJFJ5l1PYReI May 17 00:14:00.261421 sshd[1789]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:14:00.267515 systemd-logind[1457]: New session 5 of user core. May 17 00:14:00.277864 systemd[1]: Started session-5.scope - Session 5 of User core. May 17 00:14:00.804481 sudo[1792]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 17 00:14:00.804850 sudo[1792]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 17 00:14:00.817860 sudo[1792]: pam_unix(sudo:session): session closed for user root May 17 00:14:00.981939 sshd[1789]: pam_unix(sshd:session): session closed for user core May 17 00:14:00.987995 systemd-logind[1457]: Session 5 logged out. Waiting for processes to exit. May 17 00:14:00.989315 systemd[1]: sshd@4-142.132.181.146:22-139.178.68.195:49440.service: Deactivated successfully. May 17 00:14:00.992849 systemd[1]: session-5.scope: Deactivated successfully. May 17 00:14:00.994467 systemd-logind[1457]: Removed session 5. May 17 00:14:01.162949 systemd[1]: Started sshd@5-142.132.181.146:22-139.178.68.195:49450.service - OpenSSH per-connection server daemon (139.178.68.195:49450). May 17 00:14:02.162130 sshd[1797]: Accepted publickey for core from 139.178.68.195 port 49450 ssh2: RSA SHA256:3DH0lUPdHwyQJn3I0ENA7R+xFRfXGfTtJFJ5l1PYReI May 17 00:14:02.164450 sshd[1797]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:14:02.171665 systemd-logind[1457]: New session 6 of user core. May 17 00:14:02.175091 systemd[1]: Started session-6.scope - Session 6 of User core. May 17 00:14:02.698397 sudo[1801]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 17 00:14:02.698744 sudo[1801]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 17 00:14:02.703887 sudo[1801]: pam_unix(sudo:session): session closed for user root May 17 00:14:02.711899 sudo[1800]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules May 17 00:14:02.712220 sudo[1800]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 17 00:14:02.735996 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... May 17 00:14:02.738073 auditctl[1804]: No rules May 17 00:14:02.739130 systemd[1]: audit-rules.service: Deactivated successfully. May 17 00:14:02.739498 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. May 17 00:14:02.743417 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... May 17 00:14:02.778315 augenrules[1822]: No rules May 17 00:14:02.780200 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. May 17 00:14:02.782756 sudo[1800]: pam_unix(sudo:session): session closed for user root May 17 00:14:02.948941 sshd[1797]: pam_unix(sshd:session): session closed for user core May 17 00:14:02.954514 systemd[1]: sshd@5-142.132.181.146:22-139.178.68.195:49450.service: Deactivated successfully. May 17 00:14:02.956144 systemd[1]: session-6.scope: Deactivated successfully. May 17 00:14:02.960932 systemd-logind[1457]: Session 6 logged out. Waiting for processes to exit. 
May 17 00:14:02.963707 systemd-logind[1457]: Removed session 6. May 17 00:14:03.118858 systemd[1]: Started sshd@6-142.132.181.146:22-139.178.68.195:49458.service - OpenSSH per-connection server daemon (139.178.68.195:49458). May 17 00:14:04.111364 sshd[1830]: Accepted publickey for core from 139.178.68.195 port 49458 ssh2: RSA SHA256:3DH0lUPdHwyQJn3I0ENA7R+xFRfXGfTtJFJ5l1PYReI May 17 00:14:04.113831 sshd[1830]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:14:04.120599 systemd-logind[1457]: New session 7 of user core. May 17 00:14:04.126753 systemd[1]: Started session-7.scope - Session 7 of User core. May 17 00:14:04.636913 sudo[1833]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 17 00:14:04.637194 sudo[1833]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 17 00:14:04.982916 systemd[1]: Starting docker.service - Docker Application Container Engine... May 17 00:14:04.983164 (dockerd)[1848]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 17 00:14:05.272549 dockerd[1848]: time="2025-05-17T00:14:05.272226665Z" level=info msg="Starting up" May 17 00:14:05.389478 dockerd[1848]: time="2025-05-17T00:14:05.388902940Z" level=info msg="Loading containers: start." May 17 00:14:05.510454 kernel: Initializing XFRM netlink socket May 17 00:14:05.607236 systemd-networkd[1369]: docker0: Link UP May 17 00:14:05.631384 dockerd[1848]: time="2025-05-17T00:14:05.631290094Z" level=info msg="Loading containers: done." May 17 00:14:05.655934 dockerd[1848]: time="2025-05-17T00:14:05.655722029Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 17 00:14:05.655934 dockerd[1848]: time="2025-05-17T00:14:05.655882190Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 May 17 00:14:05.656214 dockerd[1848]: time="2025-05-17T00:14:05.656047190Z" level=info msg="Daemon has completed initialization" May 17 00:14:05.705049 dockerd[1848]: time="2025-05-17T00:14:05.703755820Z" level=info msg="API listen on /run/docker.sock" May 17 00:14:05.704117 systemd[1]: Started docker.service - Docker Application Container Engine. May 17 00:14:06.782268 containerd[1479]: time="2025-05-17T00:14:06.782180236Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.5\"" May 17 00:14:06.981317 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11. May 17 00:14:06.988852 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:14:07.112854 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
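dockerd finishes startup with "API listen on /run/docker.sock". A quick standard-library way to confirm the daemon answers on that socket is to speak HTTP over it; GET /version is part of the Docker Engine API, the socket path is the one from the log line above, and the rest of this sketch is illustrative:

import http.client
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    """Plain HTTP over the Unix socket dockerd reports it is listening on."""
    def __init__(self, path: str):
        super().__init__("localhost")
        self.unix_path = path

    def connect(self) -> None:
        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self.sock.connect(self.unix_path)

conn = UnixHTTPConnection("/run/docker.sock")  # socket path from the log above
conn.request("GET", "/version")
print(conn.getresponse().read().decode())      # engine version info as JSON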
May 17 00:14:07.133500 (kubelet)[1994]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 00:14:07.187546 kubelet[1994]: E0517 00:14:07.187476 1994 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:14:07.190270 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:14:07.190712 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:14:07.483011 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2698865806.mount: Deactivated successfully. May 17 00:14:09.546595 containerd[1479]: time="2025-05-17T00:14:09.546027840Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:14:09.548795 containerd[1479]: time="2025-05-17T00:14:09.547994681Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.5: active requests=0, bytes read=26326403" May 17 00:14:09.550864 containerd[1479]: time="2025-05-17T00:14:09.550783362Z" level=info msg="ImageCreate event name:\"sha256:42968274c3d27c41cdc146f5442f122c1c74960e299c13e2f348d2fe835a9134\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:14:09.555393 containerd[1479]: time="2025-05-17T00:14:09.555338764Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0bee1bf751fe06009678c0cde7545443ba3a8d2edf71cea4c69cbb5774b9bf47\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:14:09.556897 containerd[1479]: time="2025-05-17T00:14:09.556548525Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.5\" with image id \"sha256:42968274c3d27c41cdc146f5442f122c1c74960e299c13e2f348d2fe835a9134\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0bee1bf751fe06009678c0cde7545443ba3a8d2edf71cea4c69cbb5774b9bf47\", size \"26323111\" in 2.774268249s" May 17 00:14:09.556897 containerd[1479]: time="2025-05-17T00:14:09.556603645Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.5\" returns image reference \"sha256:42968274c3d27c41cdc146f5442f122c1c74960e299c13e2f348d2fe835a9134\"" May 17 00:14:09.557700 containerd[1479]: time="2025-05-17T00:14:09.557598245Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.5\"" May 17 00:14:11.489098 containerd[1479]: time="2025-05-17T00:14:11.488895856Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:14:11.491636 containerd[1479]: time="2025-05-17T00:14:11.491583377Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.5: active requests=0, bytes read=22530567" May 17 00:14:11.492330 containerd[1479]: time="2025-05-17T00:14:11.491879697Z" level=info msg="ImageCreate event name:\"sha256:82042044d6ea1f1e5afda9c7351883800adbde447314786c4e5a2fd9e42aab09\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:14:11.496567 containerd[1479]: time="2025-05-17T00:14:11.496397939Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-controller-manager@sha256:79bcf2f5e614c336c02dcea9dfcdf485d7297aed6a21239a99c87f7164f9baca\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:14:11.498908 containerd[1479]: time="2025-05-17T00:14:11.498829380Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.5\" with image id \"sha256:82042044d6ea1f1e5afda9c7351883800adbde447314786c4e5a2fd9e42aab09\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:79bcf2f5e614c336c02dcea9dfcdf485d7297aed6a21239a99c87f7164f9baca\", size \"24066313\" in 1.940937895s" May 17 00:14:11.499408 containerd[1479]: time="2025-05-17T00:14:11.499101580Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.5\" returns image reference \"sha256:82042044d6ea1f1e5afda9c7351883800adbde447314786c4e5a2fd9e42aab09\"" May 17 00:14:11.499964 containerd[1479]: time="2025-05-17T00:14:11.499897021Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.5\"" May 17 00:14:13.143741 containerd[1479]: time="2025-05-17T00:14:13.143618697Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:14:13.145703 containerd[1479]: time="2025-05-17T00:14:13.145627418Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.5: active requests=0, bytes read=17484210" May 17 00:14:13.147399 containerd[1479]: time="2025-05-17T00:14:13.147325819Z" level=info msg="ImageCreate event name:\"sha256:e149336437f90109dad736c8a42e4b73c137a66579be8f3b9a456bcc62af3f9b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:14:13.152101 containerd[1479]: time="2025-05-17T00:14:13.152027700Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f0f39d8b9808c407cacb3a46a5a9ce4d4a4a7cf3b674ba4bd221f5bc90051d2a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:14:13.154014 containerd[1479]: time="2025-05-17T00:14:13.153695661Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.5\" with image id \"sha256:e149336437f90109dad736c8a42e4b73c137a66579be8f3b9a456bcc62af3f9b\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f0f39d8b9808c407cacb3a46a5a9ce4d4a4a7cf3b674ba4bd221f5bc90051d2a\", size \"19019974\" in 1.6537508s" May 17 00:14:13.154014 containerd[1479]: time="2025-05-17T00:14:13.153752181Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.5\" returns image reference \"sha256:e149336437f90109dad736c8a42e4b73c137a66579be8f3b9a456bcc62af3f9b\"" May 17 00:14:13.154943 containerd[1479]: time="2025-05-17T00:14:13.154905621Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.5\"" May 17 00:14:14.167295 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1644229332.mount: Deactivated successfully. 
May 17 00:14:14.491502 containerd[1479]: time="2025-05-17T00:14:14.491395198Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:14:14.492877 containerd[1479]: time="2025-05-17T00:14:14.492637598Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.5: active requests=0, bytes read=27377401" May 17 00:14:14.493846 containerd[1479]: time="2025-05-17T00:14:14.493767999Z" level=info msg="ImageCreate event name:\"sha256:69b7afc06f22edcae3b6a7d80cdacb488a5415fd605e89534679e5ebc41375fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:14:14.499056 containerd[1479]: time="2025-05-17T00:14:14.498950440Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:9dc6553459c3319525ba4090a780db1a133d5dee68c08e07f9b9d6ba83b42a0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:14:14.500379 containerd[1479]: time="2025-05-17T00:14:14.499351161Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.5\" with image id \"sha256:69b7afc06f22edcae3b6a7d80cdacb488a5415fd605e89534679e5ebc41375fc\", repo tag \"registry.k8s.io/kube-proxy:v1.32.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:9dc6553459c3319525ba4090a780db1a133d5dee68c08e07f9b9d6ba83b42a0b\", size \"27376394\" in 1.34440262s" May 17 00:14:14.500379 containerd[1479]: time="2025-05-17T00:14:14.499391481Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.5\" returns image reference \"sha256:69b7afc06f22edcae3b6a7d80cdacb488a5415fd605e89534679e5ebc41375fc\"" May 17 00:14:14.500379 containerd[1479]: time="2025-05-17T00:14:14.500116001Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" May 17 00:14:15.069305 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount155811590.mount: Deactivated successfully. 
May 17 00:14:15.841508 containerd[1479]: time="2025-05-17T00:14:15.841076740Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:14:15.844622 containerd[1479]: time="2025-05-17T00:14:15.844533581Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951714" May 17 00:14:15.846644 containerd[1479]: time="2025-05-17T00:14:15.846561582Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:14:15.853458 containerd[1479]: time="2025-05-17T00:14:15.852149384Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:14:15.853669 containerd[1479]: time="2025-05-17T00:14:15.853633624Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.353470063s" May 17 00:14:15.853827 containerd[1479]: time="2025-05-17T00:14:15.853771704Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" May 17 00:14:15.854490 containerd[1479]: time="2025-05-17T00:14:15.854383864Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 17 00:14:16.351656 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3013779975.mount: Deactivated successfully. 
May 17 00:14:16.359528 containerd[1479]: time="2025-05-17T00:14:16.359482146Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:14:16.360820 containerd[1479]: time="2025-05-17T00:14:16.360596506Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268723" May 17 00:14:16.361738 containerd[1479]: time="2025-05-17T00:14:16.361674186Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:14:16.365396 containerd[1479]: time="2025-05-17T00:14:16.364511867Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:14:16.365396 containerd[1479]: time="2025-05-17T00:14:16.365272387Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 510.840763ms" May 17 00:14:16.365396 containerd[1479]: time="2025-05-17T00:14:16.365306388Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" May 17 00:14:16.366158 containerd[1479]: time="2025-05-17T00:14:16.366041628Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" May 17 00:14:16.962278 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2496723651.mount: Deactivated successfully. May 17 00:14:17.231452 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12. May 17 00:14:17.238117 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:14:17.396156 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:14:17.398037 (kubelet)[2184]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 00:14:17.443618 kubelet[2184]: E0517 00:14:17.443543 2184 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:14:17.446621 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:14:17.446868 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
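This is the third failed kubelet start in this capture, and the "Scheduled restart job" entries land at 00:13:56.73, 00:14:06.98 and 00:14:17.23: almost exactly 10.25s apart, consistent with a fixed RestartSec of roughly 10s plus scheduling overhead (the unit's Restart= settings are not shown here, so that reading is an inference). Checking the cadence from the journald timestamps:

from datetime import datetime

restarts = [  # journald timestamps of the "Scheduled restart job" entries
    "00:13:56.731650",  # restart counter 10
    "00:14:06.981317",  # restart counter 11
    "00:14:17.231452",  # restart counter 12
]
times = [datetime.strptime(t, "%H:%M:%S.%f") for t in restarts]
for earlier, later in zip(times, times[1:]):
    print((later - earlier).total_seconds())  # ~10.25s between restarts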
May 17 00:14:21.318334 containerd[1479]: time="2025-05-17T00:14:21.318256430Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:14:21.320109 containerd[1479]: time="2025-05-17T00:14:21.320058821Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812537" May 17 00:14:21.320926 containerd[1479]: time="2025-05-17T00:14:21.320507828Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:14:21.326332 containerd[1479]: time="2025-05-17T00:14:21.326238606Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:14:21.328183 containerd[1479]: time="2025-05-17T00:14:21.327882554Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 4.961607206s" May 17 00:14:21.328183 containerd[1479]: time="2025-05-17T00:14:21.327927315Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" May 17 00:14:26.603308 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:14:26.609871 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:14:26.652755 systemd[1]: Reloading requested from client PID 2224 ('systemctl') (unit session-7.scope)... May 17 00:14:26.652929 systemd[1]: Reloading... May 17 00:14:26.767473 zram_generator::config[2267]: No configuration found. May 17 00:14:26.876699 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:14:26.947388 systemd[1]: Reloading finished in 294 ms. May 17 00:14:27.005217 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 17 00:14:27.005320 systemd[1]: kubelet.service: Failed with result 'signal'. May 17 00:14:27.005877 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:14:27.013171 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:14:27.137551 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:14:27.145072 (kubelet)[2311]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 17 00:14:27.191319 kubelet[2311]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 00:14:27.191319 kubelet[2311]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
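Most of what follows is klog output from the kubelet. The header packs severity (I/W/E/F), the date as MMDD, wall-clock time, the PID, and the emitting source file and line, as in "E0517 00:14:28.838236 2311 certificate_manager.go:562]". A regex sketch for splitting those fields when working through a capture like this one:

import re

# klog header: severity letter + MMDD, time, PID, source file:line, then "]".
KLOG = re.compile(
    r"(?P<sev>[IWEF])(?P<mmdd>\d{4})\s+"
    r"(?P<time>\d{2}:\d{2}:\d{2}\.\d{6})\s+"
    r"(?P<pid>\d+)\s+(?P<src>[^\s\]]+)\]"
)

line = 'E0517 00:14:28.838236 2311 certificate_manager.go:562] "Unhandled Error"'
print(KLOG.match(line).groupdict())
# {'sev': 'E', 'mmdd': '0517', 'time': '00:14:28.838236',
#  'pid': '2311', 'src': 'certificate_manager.go:562'}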
May 17 00:14:27.191319 kubelet[2311]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 00:14:27.191809 kubelet[2311]: I0517 00:14:27.191368 2311 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 17 00:14:28.808314 kubelet[2311]: I0517 00:14:28.808236 2311 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" May 17 00:14:28.808314 kubelet[2311]: I0517 00:14:28.808286 2311 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 17 00:14:28.808830 kubelet[2311]: I0517 00:14:28.808621 2311 server.go:954] "Client rotation is on, will bootstrap in background" May 17 00:14:28.838303 kubelet[2311]: E0517 00:14:28.838236 2311 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://142.132.181.146:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 142.132.181.146:6443: connect: connection refused" logger="UnhandledError" May 17 00:14:28.842463 kubelet[2311]: I0517 00:14:28.842115 2311 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 17 00:14:28.849564 kubelet[2311]: E0517 00:14:28.849519 2311 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 17 00:14:28.849564 kubelet[2311]: I0517 00:14:28.849561 2311 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 17 00:14:28.852162 kubelet[2311]: I0517 00:14:28.852135 2311 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 17 00:14:28.853350 kubelet[2311]: I0517 00:14:28.853277 2311 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 17 00:14:28.853652 kubelet[2311]: I0517 00:14:28.853345 2311 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-3-n-16326e39d6","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 17 00:14:28.853765 kubelet[2311]: I0517 00:14:28.853719 2311 topology_manager.go:138] "Creating topology manager with none policy" May 17 00:14:28.853765 kubelet[2311]: I0517 00:14:28.853732 2311 container_manager_linux.go:304] "Creating device plugin manager" May 17 00:14:28.854004 kubelet[2311]: I0517 00:14:28.853971 2311 state_mem.go:36] "Initialized new in-memory state store" May 17 00:14:28.857556 kubelet[2311]: I0517 00:14:28.857503 2311 kubelet.go:446] "Attempting to sync node with API server" May 17 00:14:28.857556 kubelet[2311]: I0517 00:14:28.857540 2311 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 17 00:14:28.857556 kubelet[2311]: I0517 00:14:28.857564 2311 kubelet.go:352] "Adding apiserver pod source" May 17 00:14:28.857556 kubelet[2311]: I0517 00:14:28.857575 2311 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 17 00:14:28.864527 kubelet[2311]: W0517 00:14:28.864191 2311 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://142.132.181.146:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 142.132.181.146:6443: connect: connection refused May 17 00:14:28.864527 kubelet[2311]: E0517 00:14:28.864258 2311 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://142.132.181.146:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 142.132.181.146:6443: connect: connection refused" logger="UnhandledError" May 17 00:14:28.864527 
kubelet[2311]: W0517 00:14:28.864339 2311 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://142.132.181.146:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-3-n-16326e39d6&limit=500&resourceVersion=0": dial tcp 142.132.181.146:6443: connect: connection refused May 17 00:14:28.864527 kubelet[2311]: E0517 00:14:28.864368 2311 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://142.132.181.146:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-3-n-16326e39d6&limit=500&resourceVersion=0\": dial tcp 142.132.181.146:6443: connect: connection refused" logger="UnhandledError" May 17 00:14:28.866496 kubelet[2311]: I0517 00:14:28.864853 2311 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 17 00:14:28.866496 kubelet[2311]: I0517 00:14:28.865578 2311 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 17 00:14:28.866496 kubelet[2311]: W0517 00:14:28.865709 2311 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 17 00:14:28.868598 kubelet[2311]: I0517 00:14:28.868567 2311 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 17 00:14:28.868800 kubelet[2311]: I0517 00:14:28.868787 2311 server.go:1287] "Started kubelet" May 17 00:14:28.874203 kubelet[2311]: E0517 00:14:28.873907 2311 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://142.132.181.146:6443/api/v1/namespaces/default/events\": dial tcp 142.132.181.146:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-3-n-16326e39d6.184028363eec1bf6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-3-n-16326e39d6,UID:ci-4081-3-3-n-16326e39d6,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-3-n-16326e39d6,},FirstTimestamp:2025-05-17 00:14:28.868758518 +0000 UTC m=+1.719907015,LastTimestamp:2025-05-17 00:14:28.868758518 +0000 UTC m=+1.719907015,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-3-n-16326e39d6,}" May 17 00:14:28.875012 kubelet[2311]: I0517 00:14:28.874986 2311 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 17 00:14:28.877718 kubelet[2311]: I0517 00:14:28.877655 2311 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 17 00:14:28.878743 kubelet[2311]: I0517 00:14:28.878698 2311 server.go:479] "Adding debug handlers to kubelet server" May 17 00:14:28.881526 kubelet[2311]: I0517 00:14:28.881402 2311 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 17 00:14:28.881951 kubelet[2311]: I0517 00:14:28.881929 2311 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 17 00:14:28.882093 kubelet[2311]: I0517 00:14:28.881980 2311 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 17 00:14:28.882262 kubelet[2311]: I0517 00:14:28.882249 2311 volume_manager.go:297] "Starting Kubelet Volume Manager" May 17 00:14:28.882804 
kubelet[2311]: E0517 00:14:28.882776 2311 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-3-n-16326e39d6\" not found" May 17 00:14:28.884026 kubelet[2311]: I0517 00:14:28.884006 2311 desired_state_of_world_populator.go:150] "Desired state populator starts to run" May 17 00:14:28.884279 kubelet[2311]: I0517 00:14:28.884264 2311 reconciler.go:26] "Reconciler: start to sync state" May 17 00:14:28.885420 kubelet[2311]: W0517 00:14:28.885377 2311 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://142.132.181.146:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 142.132.181.146:6443: connect: connection refused May 17 00:14:28.885680 kubelet[2311]: E0517 00:14:28.885660 2311 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://142.132.181.146:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 142.132.181.146:6443: connect: connection refused" logger="UnhandledError" May 17 00:14:28.886098 kubelet[2311]: E0517 00:14:28.886058 2311 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://142.132.181.146:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-3-n-16326e39d6?timeout=10s\": dial tcp 142.132.181.146:6443: connect: connection refused" interval="200ms" May 17 00:14:28.887151 kubelet[2311]: I0517 00:14:28.887125 2311 factory.go:221] Registration of the systemd container factory successfully May 17 00:14:28.887407 kubelet[2311]: I0517 00:14:28.887387 2311 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 17 00:14:28.890449 kubelet[2311]: E0517 00:14:28.889531 2311 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 17 00:14:28.890449 kubelet[2311]: I0517 00:14:28.889812 2311 factory.go:221] Registration of the containerd container factory successfully May 17 00:14:28.905079 kubelet[2311]: I0517 00:14:28.905021 2311 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 17 00:14:28.907223 kubelet[2311]: I0517 00:14:28.907189 2311 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 17 00:14:28.907371 kubelet[2311]: I0517 00:14:28.907359 2311 status_manager.go:227] "Starting to sync pod status with apiserver" May 17 00:14:28.907559 kubelet[2311]: I0517 00:14:28.907541 2311 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
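Every client-go call above fails the same way: "dial tcp 142.132.181.146:6443: connect: connection refused". The kubelet is simply up before anything listens on the apiserver port; the apiserver static pod is only created further down. "Refused" (an immediate reset because nothing is listening) is easy to tell apart from a silent firewall drop (a timeout) with a one-shot probe of the same endpoint; a sketch:

import socket

# Probe the endpoint the kubelet keeps dialing. A refused connection
# raises immediately; a silent drop runs into the timeout instead.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
    s.settimeout(2)
    try:
        s.connect(("142.132.181.146", 6443))
        print("apiserver port is accepting connections")
    except ConnectionRefusedError:
        print("connection refused: nothing listening yet (what the kubelet sees)")
    except socket.timeout:
        print("timed out: traffic is being dropped, not refused")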
May 17 00:14:28.907622 kubelet[2311]: I0517 00:14:28.907615 2311 kubelet.go:2382] "Starting kubelet main sync loop" May 17 00:14:28.907745 kubelet[2311]: E0517 00:14:28.907726 2311 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 17 00:14:28.915412 kubelet[2311]: W0517 00:14:28.915365 2311 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://142.132.181.146:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 142.132.181.146:6443: connect: connection refused May 17 00:14:28.915707 kubelet[2311]: E0517 00:14:28.915668 2311 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://142.132.181.146:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 142.132.181.146:6443: connect: connection refused" logger="UnhandledError" May 17 00:14:28.922858 kubelet[2311]: I0517 00:14:28.922828 2311 cpu_manager.go:221] "Starting CPU manager" policy="none" May 17 00:14:28.922858 kubelet[2311]: I0517 00:14:28.922853 2311 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 17 00:14:28.923011 kubelet[2311]: I0517 00:14:28.922898 2311 state_mem.go:36] "Initialized new in-memory state store" May 17 00:14:28.924881 kubelet[2311]: I0517 00:14:28.924854 2311 policy_none.go:49] "None policy: Start" May 17 00:14:28.924881 kubelet[2311]: I0517 00:14:28.924881 2311 memory_manager.go:186] "Starting memorymanager" policy="None" May 17 00:14:28.925006 kubelet[2311]: I0517 00:14:28.924894 2311 state_mem.go:35] "Initializing new in-memory state store" May 17 00:14:28.931447 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 17 00:14:28.948461 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 17 00:14:28.953319 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 17 00:14:28.970524 kubelet[2311]: I0517 00:14:28.969560 2311 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 17 00:14:28.970524 kubelet[2311]: I0517 00:14:28.969970 2311 eviction_manager.go:189] "Eviction manager: starting control loop" May 17 00:14:28.970524 kubelet[2311]: I0517 00:14:28.969996 2311 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 17 00:14:28.974271 kubelet[2311]: E0517 00:14:28.974200 2311 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 17 00:14:28.974462 kubelet[2311]: E0517 00:14:28.974286 2311 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-3-n-16326e39d6\" not found" May 17 00:14:28.975710 kubelet[2311]: I0517 00:14:28.974784 2311 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 17 00:14:29.022669 systemd[1]: Created slice kubepods-burstable-pod63ca978206f012db5d01e3627ba7053b.slice - libcontainer container kubepods-burstable-pod63ca978206f012db5d01e3627ba7053b.slice. 
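Note the retry interval on the "Failed to ensure lease exists" errors: 200ms above, then 400ms, 800ms and 1.6s in the entries further down. That is a plain doubling backoff; a sketch of the pattern (the cap value and any reset-on-success behavior are not visible in this log, so they are assumptions here):

import itertools

def lease_retry_intervals(base: float = 0.2, cap: float = 7.0):
    """Yield doubling retry delays: 0.2, 0.4, 0.8, 1.6, ... up to cap."""
    delay = base
    while True:
        yield delay
        delay = min(delay * 2, cap)  # the cap is an assumption, not from the log

print(list(itertools.islice(lease_retry_intervals(), 5)))
# [0.2, 0.4, 0.8, 1.6, 3.2] -- matches the 200ms/400ms/800ms/1.6s seen here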
May 17 00:14:29.033450 kubelet[2311]: E0517 00:14:29.033057 2311 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-3-n-16326e39d6\" not found" node="ci-4081-3-3-n-16326e39d6" May 17 00:14:29.037415 systemd[1]: Created slice kubepods-burstable-pod53c9d49ee42905013be76c27162a6b34.slice - libcontainer container kubepods-burstable-pod53c9d49ee42905013be76c27162a6b34.slice. May 17 00:14:29.050855 kubelet[2311]: E0517 00:14:29.050801 2311 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-3-n-16326e39d6\" not found" node="ci-4081-3-3-n-16326e39d6" May 17 00:14:29.057636 systemd[1]: Created slice kubepods-burstable-pod8e36d65b15ed0e86bce1f6c84936135f.slice - libcontainer container kubepods-burstable-pod8e36d65b15ed0e86bce1f6c84936135f.slice. May 17 00:14:29.060522 kubelet[2311]: E0517 00:14:29.060195 2311 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-3-n-16326e39d6\" not found" node="ci-4081-3-3-n-16326e39d6" May 17 00:14:29.072982 kubelet[2311]: I0517 00:14:29.072945 2311 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-3-n-16326e39d6" May 17 00:14:29.074533 kubelet[2311]: E0517 00:14:29.074403 2311 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://142.132.181.146:6443/api/v1/nodes\": dial tcp 142.132.181.146:6443: connect: connection refused" node="ci-4081-3-3-n-16326e39d6" May 17 00:14:29.087564 kubelet[2311]: E0517 00:14:29.087498 2311 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://142.132.181.146:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-3-n-16326e39d6?timeout=10s\": dial tcp 142.132.181.146:6443: connect: connection refused" interval="400ms" May 17 00:14:29.185526 kubelet[2311]: I0517 00:14:29.185261 2311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/63ca978206f012db5d01e3627ba7053b-ca-certs\") pod \"kube-apiserver-ci-4081-3-3-n-16326e39d6\" (UID: \"63ca978206f012db5d01e3627ba7053b\") " pod="kube-system/kube-apiserver-ci-4081-3-3-n-16326e39d6" May 17 00:14:29.185526 kubelet[2311]: I0517 00:14:29.185330 2311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/63ca978206f012db5d01e3627ba7053b-k8s-certs\") pod \"kube-apiserver-ci-4081-3-3-n-16326e39d6\" (UID: \"63ca978206f012db5d01e3627ba7053b\") " pod="kube-system/kube-apiserver-ci-4081-3-3-n-16326e39d6" May 17 00:14:29.185526 kubelet[2311]: I0517 00:14:29.185358 2311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8e36d65b15ed0e86bce1f6c84936135f-ca-certs\") pod \"kube-controller-manager-ci-4081-3-3-n-16326e39d6\" (UID: \"8e36d65b15ed0e86bce1f6c84936135f\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-16326e39d6" May 17 00:14:29.185526 kubelet[2311]: I0517 00:14:29.185385 2311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8e36d65b15ed0e86bce1f6c84936135f-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-3-n-16326e39d6\" (UID: \"8e36d65b15ed0e86bce1f6c84936135f\") " 
pod="kube-system/kube-controller-manager-ci-4081-3-3-n-16326e39d6" May 17 00:14:29.185526 kubelet[2311]: I0517 00:14:29.185412 2311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8e36d65b15ed0e86bce1f6c84936135f-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-3-n-16326e39d6\" (UID: \"8e36d65b15ed0e86bce1f6c84936135f\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-16326e39d6" May 17 00:14:29.185935 kubelet[2311]: I0517 00:14:29.185488 2311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8e36d65b15ed0e86bce1f6c84936135f-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-3-n-16326e39d6\" (UID: \"8e36d65b15ed0e86bce1f6c84936135f\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-16326e39d6" May 17 00:14:29.185935 kubelet[2311]: I0517 00:14:29.185524 2311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/63ca978206f012db5d01e3627ba7053b-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-3-n-16326e39d6\" (UID: \"63ca978206f012db5d01e3627ba7053b\") " pod="kube-system/kube-apiserver-ci-4081-3-3-n-16326e39d6" May 17 00:14:29.185935 kubelet[2311]: I0517 00:14:29.185550 2311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8e36d65b15ed0e86bce1f6c84936135f-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-3-n-16326e39d6\" (UID: \"8e36d65b15ed0e86bce1f6c84936135f\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-16326e39d6" May 17 00:14:29.185935 kubelet[2311]: I0517 00:14:29.185579 2311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/53c9d49ee42905013be76c27162a6b34-kubeconfig\") pod \"kube-scheduler-ci-4081-3-3-n-16326e39d6\" (UID: \"53c9d49ee42905013be76c27162a6b34\") " pod="kube-system/kube-scheduler-ci-4081-3-3-n-16326e39d6" May 17 00:14:29.277344 kubelet[2311]: I0517 00:14:29.277294 2311 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-3-n-16326e39d6" May 17 00:14:29.277859 kubelet[2311]: E0517 00:14:29.277809 2311 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://142.132.181.146:6443/api/v1/nodes\": dial tcp 142.132.181.146:6443: connect: connection refused" node="ci-4081-3-3-n-16326e39d6" May 17 00:14:29.335335 containerd[1479]: time="2025-05-17T00:14:29.335093594Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-3-n-16326e39d6,Uid:63ca978206f012db5d01e3627ba7053b,Namespace:kube-system,Attempt:0,}" May 17 00:14:29.352870 containerd[1479]: time="2025-05-17T00:14:29.352707915Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-3-n-16326e39d6,Uid:53c9d49ee42905013be76c27162a6b34,Namespace:kube-system,Attempt:0,}" May 17 00:14:29.367349 containerd[1479]: time="2025-05-17T00:14:29.366968510Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-3-n-16326e39d6,Uid:8e36d65b15ed0e86bce1f6c84936135f,Namespace:kube-system,Attempt:0,}" May 17 00:14:29.489035 kubelet[2311]: E0517 00:14:29.488975 2311 controller.go:145] "Failed to ensure 
lease exists, will retry" err="Get \"https://142.132.181.146:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-3-n-16326e39d6?timeout=10s\": dial tcp 142.132.181.146:6443: connect: connection refused" interval="800ms" May 17 00:14:29.680996 kubelet[2311]: I0517 00:14:29.680775 2311 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-3-n-16326e39d6" May 17 00:14:29.681594 kubelet[2311]: E0517 00:14:29.681354 2311 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://142.132.181.146:6443/api/v1/nodes\": dial tcp 142.132.181.146:6443: connect: connection refused" node="ci-4081-3-3-n-16326e39d6" May 17 00:14:29.854032 kubelet[2311]: W0517 00:14:29.853966 2311 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://142.132.181.146:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 142.132.181.146:6443: connect: connection refused May 17 00:14:29.854635 kubelet[2311]: E0517 00:14:29.854052 2311 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://142.132.181.146:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 142.132.181.146:6443: connect: connection refused" logger="UnhandledError" May 17 00:14:29.854777 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2231370217.mount: Deactivated successfully. May 17 00:14:29.858823 containerd[1479]: time="2025-05-17T00:14:29.858745082Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 00:14:29.860764 containerd[1479]: time="2025-05-17T00:14:29.860714309Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193" May 17 00:14:29.863155 containerd[1479]: time="2025-05-17T00:14:29.863101182Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 00:14:29.863971 containerd[1479]: time="2025-05-17T00:14:29.863915513Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 00:14:29.865784 containerd[1479]: time="2025-05-17T00:14:29.865730978Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 00:14:29.866248 containerd[1479]: time="2025-05-17T00:14:29.866143064Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 17 00:14:29.866851 containerd[1479]: time="2025-05-17T00:14:29.866498029Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 17 00:14:29.873502 containerd[1479]: time="2025-05-17T00:14:29.873405083Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 00:14:29.876208 containerd[1479]: 
time="2025-05-17T00:14:29.876155961Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 523.234003ms" May 17 00:14:29.878875 containerd[1479]: time="2025-05-17T00:14:29.878827877Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 511.776606ms" May 17 00:14:29.879402 containerd[1479]: time="2025-05-17T00:14:29.879368645Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 544.17541ms" May 17 00:14:30.006807 containerd[1479]: time="2025-05-17T00:14:30.006369662Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:14:30.008608 containerd[1479]: time="2025-05-17T00:14:30.006628985Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:14:30.008608 containerd[1479]: time="2025-05-17T00:14:30.006647265Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:14:30.008608 containerd[1479]: time="2025-05-17T00:14:30.006760707Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:14:30.012453 containerd[1479]: time="2025-05-17T00:14:30.011323848Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:14:30.012453 containerd[1479]: time="2025-05-17T00:14:30.011593531Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:14:30.012453 containerd[1479]: time="2025-05-17T00:14:30.011673932Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:14:30.013611 containerd[1479]: time="2025-05-17T00:14:30.013394475Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:14:30.016275 containerd[1479]: time="2025-05-17T00:14:30.015994670Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:14:30.017665 containerd[1479]: time="2025-05-17T00:14:30.017417609Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:14:30.017665 containerd[1479]: time="2025-05-17T00:14:30.017573331Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:14:30.021704 containerd[1479]: time="2025-05-17T00:14:30.018039457Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:14:30.045711 systemd[1]: Started cri-containerd-af7595122e6ad4847c7c08cb14a38588feae02bce8ee8dc7071d2de2ea1c1ca0.scope - libcontainer container af7595122e6ad4847c7c08cb14a38588feae02bce8ee8dc7071d2de2ea1c1ca0. May 17 00:14:30.049150 systemd[1]: Started cri-containerd-b2d0bf8ee677e73787992519f96b4c12c16204f6eff86ae0276e72fdac459070.scope - libcontainer container b2d0bf8ee677e73787992519f96b4c12c16204f6eff86ae0276e72fdac459070. May 17 00:14:30.057106 systemd[1]: Started cri-containerd-1f0a87cad062daf97619a9c7a7153b41b854a8f0b3cc4575fe75a9dd1b462dca.scope - libcontainer container 1f0a87cad062daf97619a9c7a7153b41b854a8f0b3cc4575fe75a9dd1b462dca. May 17 00:14:30.110628 containerd[1479]: time="2025-05-17T00:14:30.110374607Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-3-n-16326e39d6,Uid:8e36d65b15ed0e86bce1f6c84936135f,Namespace:kube-system,Attempt:0,} returns sandbox id \"b2d0bf8ee677e73787992519f96b4c12c16204f6eff86ae0276e72fdac459070\"" May 17 00:14:30.117405 containerd[1479]: time="2025-05-17T00:14:30.117204658Z" level=info msg="CreateContainer within sandbox \"b2d0bf8ee677e73787992519f96b4c12c16204f6eff86ae0276e72fdac459070\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 17 00:14:30.127220 containerd[1479]: time="2025-05-17T00:14:30.127032629Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-3-n-16326e39d6,Uid:53c9d49ee42905013be76c27162a6b34,Namespace:kube-system,Attempt:0,} returns sandbox id \"af7595122e6ad4847c7c08cb14a38588feae02bce8ee8dc7071d2de2ea1c1ca0\"" May 17 00:14:30.129699 containerd[1479]: time="2025-05-17T00:14:30.129605223Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-3-n-16326e39d6,Uid:63ca978206f012db5d01e3627ba7053b,Namespace:kube-system,Attempt:0,} returns sandbox id \"1f0a87cad062daf97619a9c7a7153b41b854a8f0b3cc4575fe75a9dd1b462dca\"" May 17 00:14:30.132466 containerd[1479]: time="2025-05-17T00:14:30.131682771Z" level=info msg="CreateContainer within sandbox \"af7595122e6ad4847c7c08cb14a38588feae02bce8ee8dc7071d2de2ea1c1ca0\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 17 00:14:30.135464 containerd[1479]: time="2025-05-17T00:14:30.135392220Z" level=info msg="CreateContainer within sandbox \"1f0a87cad062daf97619a9c7a7153b41b854a8f0b3cc4575fe75a9dd1b462dca\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 17 00:14:30.138274 containerd[1479]: time="2025-05-17T00:14:30.138229058Z" level=info msg="CreateContainer within sandbox \"b2d0bf8ee677e73787992519f96b4c12c16204f6eff86ae0276e72fdac459070\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"05dee176135b8342215bbefed30800dd91f79b647119a988adb5a7af21ad0538\"" May 17 00:14:30.139741 containerd[1479]: time="2025-05-17T00:14:30.139701478Z" level=info msg="StartContainer for \"05dee176135b8342215bbefed30800dd91f79b647119a988adb5a7af21ad0538\"" May 17 00:14:30.162600 containerd[1479]: time="2025-05-17T00:14:30.162536742Z" level=info msg="CreateContainer within sandbox \"1f0a87cad062daf97619a9c7a7153b41b854a8f0b3cc4575fe75a9dd1b462dca\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id 
\"21bbf95314b408436910bfe6087858da33f048ae783cdd0397da28369cd07d7e\"" May 17 00:14:30.163363 containerd[1479]: time="2025-05-17T00:14:30.163227031Z" level=info msg="StartContainer for \"21bbf95314b408436910bfe6087858da33f048ae783cdd0397da28369cd07d7e\"" May 17 00:14:30.166178 containerd[1479]: time="2025-05-17T00:14:30.166049869Z" level=info msg="CreateContainer within sandbox \"af7595122e6ad4847c7c08cb14a38588feae02bce8ee8dc7071d2de2ea1c1ca0\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"eb001af5b5a773f98b6be4562590f0e90a8f3f929cab5b016adaac4d2b39a607\"" May 17 00:14:30.167673 containerd[1479]: time="2025-05-17T00:14:30.167567209Z" level=info msg="StartContainer for \"eb001af5b5a773f98b6be4562590f0e90a8f3f929cab5b016adaac4d2b39a607\"" May 17 00:14:30.176658 systemd[1]: Started cri-containerd-05dee176135b8342215bbefed30800dd91f79b647119a988adb5a7af21ad0538.scope - libcontainer container 05dee176135b8342215bbefed30800dd91f79b647119a988adb5a7af21ad0538. May 17 00:14:30.214633 systemd[1]: Started cri-containerd-21bbf95314b408436910bfe6087858da33f048ae783cdd0397da28369cd07d7e.scope - libcontainer container 21bbf95314b408436910bfe6087858da33f048ae783cdd0397da28369cd07d7e. May 17 00:14:30.215838 systemd[1]: Started cri-containerd-eb001af5b5a773f98b6be4562590f0e90a8f3f929cab5b016adaac4d2b39a607.scope - libcontainer container eb001af5b5a773f98b6be4562590f0e90a8f3f929cab5b016adaac4d2b39a607. May 17 00:14:30.243781 containerd[1479]: time="2025-05-17T00:14:30.242678809Z" level=info msg="StartContainer for \"05dee176135b8342215bbefed30800dd91f79b647119a988adb5a7af21ad0538\" returns successfully" May 17 00:14:30.249130 kubelet[2311]: W0517 00:14:30.249062 2311 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://142.132.181.146:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-3-n-16326e39d6&limit=500&resourceVersion=0": dial tcp 142.132.181.146:6443: connect: connection refused May 17 00:14:30.249305 kubelet[2311]: E0517 00:14:30.249139 2311 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://142.132.181.146:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-3-n-16326e39d6&limit=500&resourceVersion=0\": dial tcp 142.132.181.146:6443: connect: connection refused" logger="UnhandledError" May 17 00:14:30.292622 kubelet[2311]: E0517 00:14:30.291538 2311 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://142.132.181.146:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-3-n-16326e39d6?timeout=10s\": dial tcp 142.132.181.146:6443: connect: connection refused" interval="1.6s" May 17 00:14:30.295461 containerd[1479]: time="2025-05-17T00:14:30.294818144Z" level=info msg="StartContainer for \"eb001af5b5a773f98b6be4562590f0e90a8f3f929cab5b016adaac4d2b39a607\" returns successfully" May 17 00:14:30.295461 containerd[1479]: time="2025-05-17T00:14:30.294832984Z" level=info msg="StartContainer for \"21bbf95314b408436910bfe6087858da33f048ae783cdd0397da28369cd07d7e\" returns successfully" May 17 00:14:30.311329 kubelet[2311]: W0517 00:14:30.311230 2311 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://142.132.181.146:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 142.132.181.146:6443: connect: connection refused May 17 00:14:30.311329 kubelet[2311]: E0517 00:14:30.311280 2311 
reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://142.132.181.146:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 142.132.181.146:6443: connect: connection refused" logger="UnhandledError" May 17 00:14:30.331325 kubelet[2311]: W0517 00:14:30.331183 2311 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://142.132.181.146:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 142.132.181.146:6443: connect: connection refused May 17 00:14:30.331325 kubelet[2311]: E0517 00:14:30.331282 2311 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://142.132.181.146:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 142.132.181.146:6443: connect: connection refused" logger="UnhandledError" May 17 00:14:30.483525 kubelet[2311]: I0517 00:14:30.483465 2311 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-3-n-16326e39d6" May 17 00:14:30.937288 kubelet[2311]: E0517 00:14:30.937243 2311 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-3-n-16326e39d6\" not found" node="ci-4081-3-3-n-16326e39d6" May 17 00:14:30.938843 kubelet[2311]: E0517 00:14:30.937600 2311 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-3-n-16326e39d6\" not found" node="ci-4081-3-3-n-16326e39d6" May 17 00:14:30.941176 kubelet[2311]: E0517 00:14:30.940917 2311 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-3-n-16326e39d6\" not found" node="ci-4081-3-3-n-16326e39d6" May 17 00:14:31.942632 kubelet[2311]: E0517 00:14:31.942591 2311 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-3-n-16326e39d6\" not found" node="ci-4081-3-3-n-16326e39d6" May 17 00:14:31.942984 kubelet[2311]: E0517 00:14:31.942954 2311 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-3-n-16326e39d6\" not found" node="ci-4081-3-3-n-16326e39d6" May 17 00:14:32.944584 kubelet[2311]: E0517 00:14:32.944546 2311 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-3-n-16326e39d6\" not found" node="ci-4081-3-3-n-16326e39d6" May 17 00:14:33.005044 kubelet[2311]: E0517 00:14:33.004982 2311 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-3-n-16326e39d6\" not found" node="ci-4081-3-3-n-16326e39d6" May 17 00:14:33.020211 kubelet[2311]: I0517 00:14:33.020158 2311 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-3-n-16326e39d6" May 17 00:14:33.020211 kubelet[2311]: E0517 00:14:33.020208 2311 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4081-3-3-n-16326e39d6\": node \"ci-4081-3-3-n-16326e39d6\" not found" May 17 00:14:33.047355 kubelet[2311]: E0517 00:14:33.047309 2311 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-3-n-16326e39d6\" not found" May 17 00:14:33.183957 kubelet[2311]: 
I0517 00:14:33.183651 2311 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-3-n-16326e39d6" May 17 00:14:33.209472 kubelet[2311]: E0517 00:14:33.209247 2311 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-3-n-16326e39d6\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081-3-3-n-16326e39d6" May 17 00:14:33.209472 kubelet[2311]: I0517 00:14:33.209369 2311 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-3-n-16326e39d6" May 17 00:14:33.218622 kubelet[2311]: E0517 00:14:33.218263 2311 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081-3-3-n-16326e39d6\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081-3-3-n-16326e39d6" May 17 00:14:33.218622 kubelet[2311]: I0517 00:14:33.218303 2311 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-3-n-16326e39d6" May 17 00:14:33.226515 kubelet[2311]: E0517 00:14:33.226420 2311 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-3-n-16326e39d6\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081-3-3-n-16326e39d6" May 17 00:14:33.863861 kubelet[2311]: I0517 00:14:33.863525 2311 apiserver.go:52] "Watching apiserver" May 17 00:14:33.884892 kubelet[2311]: I0517 00:14:33.884775 2311 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" May 17 00:14:35.098731 systemd[1]: Reloading requested from client PID 2585 ('systemctl') (unit session-7.scope)... May 17 00:14:35.099072 systemd[1]: Reloading... May 17 00:14:35.185792 zram_generator::config[2625]: No configuration found. May 17 00:14:35.299390 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:14:35.385091 systemd[1]: Reloading finished in 285 ms. May 17 00:14:35.432263 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:14:35.433131 kubelet[2311]: I0517 00:14:35.432574 2311 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 17 00:14:35.450689 systemd[1]: kubelet.service: Deactivated successfully. May 17 00:14:35.451354 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:14:35.451475 systemd[1]: kubelet.service: Consumed 2.174s CPU time, 133.6M memory peak, 0B memory swap peak. May 17 00:14:35.461916 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:14:35.618101 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:14:35.619605 (kubelet)[2670]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 17 00:14:35.672634 kubelet[2670]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 00:14:35.672634 kubelet[2670]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. 
Image garbage collector will get sandbox image information from CRI. May 17 00:14:35.672634 kubelet[2670]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 00:14:35.672634 kubelet[2670]: I0517 00:14:35.664683 2670 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 17 00:14:35.679059 kubelet[2670]: I0517 00:14:35.678783 2670 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" May 17 00:14:35.679291 kubelet[2670]: I0517 00:14:35.679274 2670 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 17 00:14:35.680039 kubelet[2670]: I0517 00:14:35.680012 2670 server.go:954] "Client rotation is on, will bootstrap in background" May 17 00:14:35.682965 kubelet[2670]: I0517 00:14:35.682940 2670 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 17 00:14:35.688609 kubelet[2670]: I0517 00:14:35.688568 2670 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 17 00:14:35.696920 kubelet[2670]: E0517 00:14:35.696880 2670 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 17 00:14:35.696920 kubelet[2670]: I0517 00:14:35.696961 2670 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 17 00:14:35.701229 kubelet[2670]: I0517 00:14:35.701108 2670 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 17 00:14:35.701629 kubelet[2670]: I0517 00:14:35.701588 2670 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 17 00:14:35.702080 kubelet[2670]: I0517 00:14:35.701736 2670 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-3-n-16326e39d6","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 17 00:14:35.702080 kubelet[2670]: I0517 00:14:35.701946 2670 topology_manager.go:138] "Creating topology manager with none policy" May 17 00:14:35.702080 kubelet[2670]: I0517 00:14:35.701956 2670 container_manager_linux.go:304] "Creating device plugin manager" May 17 00:14:35.702356 kubelet[2670]: I0517 00:14:35.702308 2670 state_mem.go:36] "Initialized new in-memory state store" May 17 00:14:35.702666 kubelet[2670]: I0517 00:14:35.702621 2670 kubelet.go:446] "Attempting to sync node with API server" May 17 00:14:35.702666 kubelet[2670]: I0517 00:14:35.702639 2670 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 17 00:14:35.707606 kubelet[2670]: I0517 00:14:35.704080 2670 kubelet.go:352] "Adding apiserver pod source" May 17 00:14:35.707606 kubelet[2670]: I0517 00:14:35.704114 2670 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 17 00:14:35.708443 kubelet[2670]: I0517 00:14:35.708246 2670 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 17 00:14:35.709006 kubelet[2670]: I0517 00:14:35.708796 2670 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 17 00:14:35.710574 kubelet[2670]: I0517 00:14:35.709262 2670 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 17 00:14:35.710574 kubelet[2670]: I0517 00:14:35.709299 2670 server.go:1287] "Started kubelet" May 17 00:14:35.717583 kubelet[2670]: I0517 00:14:35.715998 2670 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 17 00:14:35.731143 kubelet[2670]: I0517 00:14:35.730202 2670 server.go:169] 
"Starting to listen" address="0.0.0.0" port=10250 May 17 00:14:35.735490 kubelet[2670]: I0517 00:14:35.733960 2670 server.go:479] "Adding debug handlers to kubelet server" May 17 00:14:35.735490 kubelet[2670]: I0517 00:14:35.735016 2670 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 17 00:14:35.735490 kubelet[2670]: I0517 00:14:35.735225 2670 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 17 00:14:35.735837 kubelet[2670]: I0517 00:14:35.735815 2670 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 17 00:14:35.738968 kubelet[2670]: I0517 00:14:35.738927 2670 volume_manager.go:297] "Starting Kubelet Volume Manager" May 17 00:14:35.739085 kubelet[2670]: E0517 00:14:35.739050 2670 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-3-n-16326e39d6\" not found" May 17 00:14:35.739302 kubelet[2670]: I0517 00:14:35.739275 2670 desired_state_of_world_populator.go:150] "Desired state populator starts to run" May 17 00:14:35.739441 kubelet[2670]: I0517 00:14:35.739404 2670 reconciler.go:26] "Reconciler: start to sync state" May 17 00:14:35.750603 kubelet[2670]: I0517 00:14:35.750538 2670 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 17 00:14:35.752588 kubelet[2670]: I0517 00:14:35.752554 2670 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 17 00:14:35.752753 kubelet[2670]: I0517 00:14:35.752742 2670 status_manager.go:227] "Starting to sync pod status with apiserver" May 17 00:14:35.752820 kubelet[2670]: I0517 00:14:35.752812 2670 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." May 17 00:14:35.752868 kubelet[2670]: I0517 00:14:35.752861 2670 kubelet.go:2382] "Starting kubelet main sync loop" May 17 00:14:35.752965 kubelet[2670]: E0517 00:14:35.752946 2670 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 17 00:14:35.761818 kubelet[2670]: I0517 00:14:35.761780 2670 factory.go:221] Registration of the systemd container factory successfully May 17 00:14:35.762178 kubelet[2670]: I0517 00:14:35.761886 2670 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 17 00:14:35.770661 kubelet[2670]: I0517 00:14:35.770617 2670 factory.go:221] Registration of the containerd container factory successfully May 17 00:14:35.787526 kubelet[2670]: E0517 00:14:35.787271 2670 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 17 00:14:35.828699 kubelet[2670]: I0517 00:14:35.828651 2670 cpu_manager.go:221] "Starting CPU manager" policy="none" May 17 00:14:35.828699 kubelet[2670]: I0517 00:14:35.828675 2670 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 17 00:14:35.828699 kubelet[2670]: I0517 00:14:35.828700 2670 state_mem.go:36] "Initialized new in-memory state store" May 17 00:14:35.828904 kubelet[2670]: I0517 00:14:35.828886 2670 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 17 00:14:35.828935 kubelet[2670]: I0517 00:14:35.828904 2670 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 17 00:14:35.828935 kubelet[2670]: I0517 00:14:35.828926 2670 policy_none.go:49] "None policy: Start" May 17 00:14:35.828977 kubelet[2670]: I0517 00:14:35.828936 2670 memory_manager.go:186] "Starting memorymanager" policy="None" May 17 00:14:35.828977 kubelet[2670]: I0517 00:14:35.828946 2670 state_mem.go:35] "Initializing new in-memory state store" May 17 00:14:35.829093 kubelet[2670]: I0517 00:14:35.829083 2670 state_mem.go:75] "Updated machine memory state" May 17 00:14:35.833794 kubelet[2670]: I0517 00:14:35.833765 2670 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 17 00:14:35.833973 kubelet[2670]: I0517 00:14:35.833959 2670 eviction_manager.go:189] "Eviction manager: starting control loop" May 17 00:14:35.834021 kubelet[2670]: I0517 00:14:35.833975 2670 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 17 00:14:35.834772 kubelet[2670]: I0517 00:14:35.834718 2670 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 17 00:14:35.837771 kubelet[2670]: E0517 00:14:35.837743 2670 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" May 17 00:14:35.856463 kubelet[2670]: I0517 00:14:35.853861 2670 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-3-n-16326e39d6" May 17 00:14:35.856463 kubelet[2670]: I0517 00:14:35.854339 2670 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-3-n-16326e39d6" May 17 00:14:35.856463 kubelet[2670]: I0517 00:14:35.854702 2670 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-3-n-16326e39d6" May 17 00:14:35.940533 kubelet[2670]: I0517 00:14:35.939849 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/53c9d49ee42905013be76c27162a6b34-kubeconfig\") pod \"kube-scheduler-ci-4081-3-3-n-16326e39d6\" (UID: \"53c9d49ee42905013be76c27162a6b34\") " pod="kube-system/kube-scheduler-ci-4081-3-3-n-16326e39d6" May 17 00:14:35.940533 kubelet[2670]: I0517 00:14:35.939896 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/63ca978206f012db5d01e3627ba7053b-k8s-certs\") pod \"kube-apiserver-ci-4081-3-3-n-16326e39d6\" (UID: \"63ca978206f012db5d01e3627ba7053b\") " pod="kube-system/kube-apiserver-ci-4081-3-3-n-16326e39d6" May 17 00:14:35.940533 kubelet[2670]: I0517 00:14:35.939918 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8e36d65b15ed0e86bce1f6c84936135f-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-3-n-16326e39d6\" (UID: \"8e36d65b15ed0e86bce1f6c84936135f\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-16326e39d6" May 17 00:14:35.940533 kubelet[2670]: I0517 00:14:35.939936 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8e36d65b15ed0e86bce1f6c84936135f-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-3-n-16326e39d6\" (UID: \"8e36d65b15ed0e86bce1f6c84936135f\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-16326e39d6" May 17 00:14:35.940533 kubelet[2670]: I0517 00:14:35.939952 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8e36d65b15ed0e86bce1f6c84936135f-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-3-n-16326e39d6\" (UID: \"8e36d65b15ed0e86bce1f6c84936135f\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-16326e39d6" May 17 00:14:35.940772 kubelet[2670]: I0517 00:14:35.939969 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/63ca978206f012db5d01e3627ba7053b-ca-certs\") pod \"kube-apiserver-ci-4081-3-3-n-16326e39d6\" (UID: \"63ca978206f012db5d01e3627ba7053b\") " pod="kube-system/kube-apiserver-ci-4081-3-3-n-16326e39d6" May 17 00:14:35.940772 kubelet[2670]: I0517 00:14:35.939984 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/63ca978206f012db5d01e3627ba7053b-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-3-n-16326e39d6\" (UID: \"63ca978206f012db5d01e3627ba7053b\") " 
pod="kube-system/kube-apiserver-ci-4081-3-3-n-16326e39d6" May 17 00:14:35.940772 kubelet[2670]: I0517 00:14:35.940000 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8e36d65b15ed0e86bce1f6c84936135f-ca-certs\") pod \"kube-controller-manager-ci-4081-3-3-n-16326e39d6\" (UID: \"8e36d65b15ed0e86bce1f6c84936135f\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-16326e39d6" May 17 00:14:35.940772 kubelet[2670]: I0517 00:14:35.940015 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8e36d65b15ed0e86bce1f6c84936135f-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-3-n-16326e39d6\" (UID: \"8e36d65b15ed0e86bce1f6c84936135f\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-16326e39d6" May 17 00:14:35.942359 kubelet[2670]: I0517 00:14:35.941983 2670 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-3-n-16326e39d6" May 17 00:14:35.953833 kubelet[2670]: I0517 00:14:35.953788 2670 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081-3-3-n-16326e39d6" May 17 00:14:35.953998 kubelet[2670]: I0517 00:14:35.953897 2670 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-3-n-16326e39d6" May 17 00:14:36.706005 kubelet[2670]: I0517 00:14:36.705839 2670 apiserver.go:52] "Watching apiserver" May 17 00:14:36.739490 kubelet[2670]: I0517 00:14:36.739388 2670 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" May 17 00:14:36.808071 kubelet[2670]: I0517 00:14:36.807193 2670 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-3-n-16326e39d6" May 17 00:14:36.823516 kubelet[2670]: E0517 00:14:36.823232 2670 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-3-n-16326e39d6\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-3-n-16326e39d6" May 17 00:14:36.854335 kubelet[2670]: I0517 00:14:36.854251 2670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-3-n-16326e39d6" podStartSLOduration=1.8542299 podStartE2EDuration="1.8542299s" podCreationTimestamp="2025-05-17 00:14:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:14:36.839566094 +0000 UTC m=+1.215793184" watchObservedRunningTime="2025-05-17 00:14:36.8542299 +0000 UTC m=+1.230456950" May 17 00:14:36.871659 kubelet[2670]: I0517 00:14:36.869819 2670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-3-n-16326e39d6" podStartSLOduration=1.8697999570000001 podStartE2EDuration="1.869799957s" podCreationTimestamp="2025-05-17 00:14:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:14:36.856660568 +0000 UTC m=+1.232887618" watchObservedRunningTime="2025-05-17 00:14:36.869799957 +0000 UTC m=+1.246027007" May 17 00:14:36.885777 kubelet[2670]: I0517 00:14:36.885703 2670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-3-n-16326e39d6" podStartSLOduration=1.8856816969999999 podStartE2EDuration="1.885681697s" podCreationTimestamp="2025-05-17 
00:14:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:14:36.87010332 +0000 UTC m=+1.246330370" watchObservedRunningTime="2025-05-17 00:14:36.885681697 +0000 UTC m=+1.261908747" May 17 00:14:40.264853 kubelet[2670]: I0517 00:14:40.264780 2670 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 17 00:14:40.265355 containerd[1479]: time="2025-05-17T00:14:40.265295992Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 17 00:14:40.265694 kubelet[2670]: I0517 00:14:40.265665 2670 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 17 00:14:40.888121 systemd[1]: Created slice kubepods-besteffort-podda3386b6_329d_4f16_89ff_27fb1a727be1.slice - libcontainer container kubepods-besteffort-podda3386b6_329d_4f16_89ff_27fb1a727be1.slice. May 17 00:14:40.974617 kubelet[2670]: I0517 00:14:40.974574 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/da3386b6-329d-4f16-89ff-27fb1a727be1-kube-proxy\") pod \"kube-proxy-qph5r\" (UID: \"da3386b6-329d-4f16-89ff-27fb1a727be1\") " pod="kube-system/kube-proxy-qph5r" May 17 00:14:40.974795 kubelet[2670]: I0517 00:14:40.974779 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/da3386b6-329d-4f16-89ff-27fb1a727be1-xtables-lock\") pod \"kube-proxy-qph5r\" (UID: \"da3386b6-329d-4f16-89ff-27fb1a727be1\") " pod="kube-system/kube-proxy-qph5r" May 17 00:14:40.974993 kubelet[2670]: I0517 00:14:40.974972 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/da3386b6-329d-4f16-89ff-27fb1a727be1-lib-modules\") pod \"kube-proxy-qph5r\" (UID: \"da3386b6-329d-4f16-89ff-27fb1a727be1\") " pod="kube-system/kube-proxy-qph5r" May 17 00:14:40.975230 kubelet[2670]: I0517 00:14:40.975209 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqgwj\" (UniqueName: \"kubernetes.io/projected/da3386b6-329d-4f16-89ff-27fb1a727be1-kube-api-access-fqgwj\") pod \"kube-proxy-qph5r\" (UID: \"da3386b6-329d-4f16-89ff-27fb1a727be1\") " pod="kube-system/kube-proxy-qph5r" May 17 00:14:41.198570 containerd[1479]: time="2025-05-17T00:14:41.197941068Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qph5r,Uid:da3386b6-329d-4f16-89ff-27fb1a727be1,Namespace:kube-system,Attempt:0,}" May 17 00:14:41.225236 containerd[1479]: time="2025-05-17T00:14:41.224572173Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:14:41.225236 containerd[1479]: time="2025-05-17T00:14:41.224638453Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:14:41.225236 containerd[1479]: time="2025-05-17T00:14:41.224654853Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:14:41.225236 containerd[1479]: time="2025-05-17T00:14:41.224865536Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:14:41.250647 systemd[1]: Started cri-containerd-6bef9621f24b25e2e856e67c7684672e212318b39aa1c4f15fd9274766d60a2b.scope - libcontainer container 6bef9621f24b25e2e856e67c7684672e212318b39aa1c4f15fd9274766d60a2b. May 17 00:14:41.301558 containerd[1479]: time="2025-05-17T00:14:41.301489498Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qph5r,Uid:da3386b6-329d-4f16-89ff-27fb1a727be1,Namespace:kube-system,Attempt:0,} returns sandbox id \"6bef9621f24b25e2e856e67c7684672e212318b39aa1c4f15fd9274766d60a2b\"" May 17 00:14:41.309766 containerd[1479]: time="2025-05-17T00:14:41.309713820Z" level=info msg="CreateContainer within sandbox \"6bef9621f24b25e2e856e67c7684672e212318b39aa1c4f15fd9274766d60a2b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 17 00:14:41.351608 systemd[1]: Created slice kubepods-besteffort-podc9a272d3_41e1_4076_8ab2_35b1c56cec52.slice - libcontainer container kubepods-besteffort-podc9a272d3_41e1_4076_8ab2_35b1c56cec52.slice. May 17 00:14:41.354222 containerd[1479]: time="2025-05-17T00:14:41.354062062Z" level=info msg="CreateContainer within sandbox \"6bef9621f24b25e2e856e67c7684672e212318b39aa1c4f15fd9274766d60a2b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"64e3c0bfb7e17ecec35aaf4bc67773dc80b8eddc8221829616e4f2a7f65b1ae4\"" May 17 00:14:41.357562 containerd[1479]: time="2025-05-17T00:14:41.356359765Z" level=info msg="StartContainer for \"64e3c0bfb7e17ecec35aaf4bc67773dc80b8eddc8221829616e4f2a7f65b1ae4\"" May 17 00:14:41.377924 kubelet[2670]: I0517 00:14:41.377699 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c9a272d3-41e1-4076-8ab2-35b1c56cec52-var-lib-calico\") pod \"tigera-operator-844669ff44-rw7sf\" (UID: \"c9a272d3-41e1-4076-8ab2-35b1c56cec52\") " pod="tigera-operator/tigera-operator-844669ff44-rw7sf" May 17 00:14:41.377924 kubelet[2670]: I0517 00:14:41.377859 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lk572\" (UniqueName: \"kubernetes.io/projected/c9a272d3-41e1-4076-8ab2-35b1c56cec52-kube-api-access-lk572\") pod \"tigera-operator-844669ff44-rw7sf\" (UID: \"c9a272d3-41e1-4076-8ab2-35b1c56cec52\") " pod="tigera-operator/tigera-operator-844669ff44-rw7sf" May 17 00:14:41.405684 systemd[1]: Started cri-containerd-64e3c0bfb7e17ecec35aaf4bc67773dc80b8eddc8221829616e4f2a7f65b1ae4.scope - libcontainer container 64e3c0bfb7e17ecec35aaf4bc67773dc80b8eddc8221829616e4f2a7f65b1ae4. May 17 00:14:41.441723 containerd[1479]: time="2025-05-17T00:14:41.441445932Z" level=info msg="StartContainer for \"64e3c0bfb7e17ecec35aaf4bc67773dc80b8eddc8221829616e4f2a7f65b1ae4\" returns successfully" May 17 00:14:41.661272 containerd[1479]: time="2025-05-17T00:14:41.661166839Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-844669ff44-rw7sf,Uid:c9a272d3-41e1-4076-8ab2-35b1c56cec52,Namespace:tigera-operator,Attempt:0,}" May 17 00:14:41.690842 containerd[1479]: time="2025-05-17T00:14:41.689822604Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:14:41.690842 containerd[1479]: time="2025-05-17T00:14:41.689929805Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:14:41.690842 containerd[1479]: time="2025-05-17T00:14:41.689985246Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:14:41.690842 containerd[1479]: time="2025-05-17T00:14:41.690206288Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:14:41.707892 systemd[1]: Started cri-containerd-e9a0773c4a312b9fa6e12b6887108d6c3da87e445951eb571dba78e90402b6ad.scope - libcontainer container e9a0773c4a312b9fa6e12b6887108d6c3da87e445951eb571dba78e90402b6ad. May 17 00:14:41.746420 containerd[1479]: time="2025-05-17T00:14:41.745831362Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-844669ff44-rw7sf,Uid:c9a272d3-41e1-4076-8ab2-35b1c56cec52,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"e9a0773c4a312b9fa6e12b6887108d6c3da87e445951eb571dba78e90402b6ad\"" May 17 00:14:41.749061 containerd[1479]: time="2025-05-17T00:14:41.748201025Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.0\"" May 17 00:14:43.417029 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4161118555.mount: Deactivated successfully. May 17 00:14:43.808040 containerd[1479]: time="2025-05-17T00:14:43.807983278Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:14:43.810174 containerd[1479]: time="2025-05-17T00:14:43.810134938Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.0: active requests=0, bytes read=22143480" May 17 00:14:43.811339 containerd[1479]: time="2025-05-17T00:14:43.811301389Z" level=info msg="ImageCreate event name:\"sha256:171854d50ba608218142ad5d32c7dd12ce55d536f02872e56e7c04c1f0a96a6b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:14:43.814305 containerd[1479]: time="2025-05-17T00:14:43.814266697Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:e0a34b265aebce1a2db906d8dad99190706e8bf3910cae626b9c2eb6bbb21775\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:14:43.815350 containerd[1479]: time="2025-05-17T00:14:43.814961584Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.0\" with image id \"sha256:171854d50ba608218142ad5d32c7dd12ce55d536f02872e56e7c04c1f0a96a6b\", repo tag \"quay.io/tigera/operator:v1.38.0\", repo digest \"quay.io/tigera/operator@sha256:e0a34b265aebce1a2db906d8dad99190706e8bf3910cae626b9c2eb6bbb21775\", size \"22139475\" in 2.066721159s" May 17 00:14:43.815946 containerd[1479]: time="2025-05-17T00:14:43.815475869Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.0\" returns image reference \"sha256:171854d50ba608218142ad5d32c7dd12ce55d536f02872e56e7c04c1f0a96a6b\"" May 17 00:14:43.821125 containerd[1479]: time="2025-05-17T00:14:43.821092482Z" level=info msg="CreateContainer within sandbox \"e9a0773c4a312b9fa6e12b6887108d6c3da87e445951eb571dba78e90402b6ad\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" May 17 00:14:43.841236 containerd[1479]: time="2025-05-17T00:14:43.841195032Z" level=info msg="CreateContainer within sandbox \"e9a0773c4a312b9fa6e12b6887108d6c3da87e445951eb571dba78e90402b6ad\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"d2e59e4b3942cd680320f14c3d1570bfb67d5f1a79750f66d3b54339daa25f4c\"" May 17 00:14:43.843503 containerd[1479]: 
time="2025-05-17T00:14:43.843468693Z" level=info msg="StartContainer for \"d2e59e4b3942cd680320f14c3d1570bfb67d5f1a79750f66d3b54339daa25f4c\"" May 17 00:14:43.868296 systemd[1]: Started cri-containerd-d2e59e4b3942cd680320f14c3d1570bfb67d5f1a79750f66d3b54339daa25f4c.scope - libcontainer container d2e59e4b3942cd680320f14c3d1570bfb67d5f1a79750f66d3b54339daa25f4c. May 17 00:14:43.898123 containerd[1479]: time="2025-05-17T00:14:43.897975889Z" level=info msg="StartContainer for \"d2e59e4b3942cd680320f14c3d1570bfb67d5f1a79750f66d3b54339daa25f4c\" returns successfully" May 17 00:14:44.428036 kubelet[2670]: I0517 00:14:44.427339 2670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-qph5r" podStartSLOduration=4.427313116 podStartE2EDuration="4.427313116s" podCreationTimestamp="2025-05-17 00:14:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:14:41.854983168 +0000 UTC m=+6.231210218" watchObservedRunningTime="2025-05-17 00:14:44.427313116 +0000 UTC m=+8.803540166" May 17 00:14:47.405475 kubelet[2670]: I0517 00:14:47.405038 2670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-844669ff44-rw7sf" podStartSLOduration=4.335864489 podStartE2EDuration="6.405012551s" podCreationTimestamp="2025-05-17 00:14:41 +0000 UTC" firstStartedPulling="2025-05-17 00:14:41.7477145 +0000 UTC m=+6.123941550" lastFinishedPulling="2025-05-17 00:14:43.816862522 +0000 UTC m=+8.193089612" observedRunningTime="2025-05-17 00:14:44.869474434 +0000 UTC m=+9.245701564" watchObservedRunningTime="2025-05-17 00:14:47.405012551 +0000 UTC m=+11.781239601" May 17 00:14:50.144599 sudo[1833]: pam_unix(sudo:session): session closed for user root May 17 00:14:50.305705 sshd[1830]: pam_unix(sshd:session): session closed for user core May 17 00:14:50.309354 systemd-logind[1457]: Session 7 logged out. Waiting for processes to exit. May 17 00:14:50.310645 systemd[1]: sshd@6-142.132.181.146:22-139.178.68.195:49458.service: Deactivated successfully. May 17 00:14:50.314868 systemd[1]: session-7.scope: Deactivated successfully. May 17 00:14:50.315708 systemd[1]: session-7.scope: Consumed 7.040s CPU time, 151.1M memory peak, 0B memory swap peak. May 17 00:14:50.319168 systemd-logind[1457]: Removed session 7. May 17 00:14:58.189030 systemd[1]: Created slice kubepods-besteffort-pod660b38c5_8a13_4a5f_879c_78f7cdb4a539.slice - libcontainer container kubepods-besteffort-pod660b38c5_8a13_4a5f_879c_78f7cdb4a539.slice. 
May 17 00:14:58.196319 kubelet[2670]: I0517 00:14:58.196279 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqnkd\" (UniqueName: \"kubernetes.io/projected/660b38c5-8a13-4a5f-879c-78f7cdb4a539-kube-api-access-pqnkd\") pod \"calico-typha-ff4f9479b-sj7q5\" (UID: \"660b38c5-8a13-4a5f-879c-78f7cdb4a539\") " pod="calico-system/calico-typha-ff4f9479b-sj7q5" May 17 00:14:58.196747 kubelet[2670]: I0517 00:14:58.196726 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/660b38c5-8a13-4a5f-879c-78f7cdb4a539-tigera-ca-bundle\") pod \"calico-typha-ff4f9479b-sj7q5\" (UID: \"660b38c5-8a13-4a5f-879c-78f7cdb4a539\") " pod="calico-system/calico-typha-ff4f9479b-sj7q5" May 17 00:14:58.196820 kubelet[2670]: I0517 00:14:58.196807 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/660b38c5-8a13-4a5f-879c-78f7cdb4a539-typha-certs\") pod \"calico-typha-ff4f9479b-sj7q5\" (UID: \"660b38c5-8a13-4a5f-879c-78f7cdb4a539\") " pod="calico-system/calico-typha-ff4f9479b-sj7q5" May 17 00:14:58.384587 systemd[1]: Created slice kubepods-besteffort-pod543508c5_ffbb_4f06_b840_30a83bf61621.slice - libcontainer container kubepods-besteffort-pod543508c5_ffbb_4f06_b840_30a83bf61621.slice. May 17 00:14:58.398536 kubelet[2670]: I0517 00:14:58.397960 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/543508c5-ffbb-4f06-b840-30a83bf61621-node-certs\") pod \"calico-node-42drg\" (UID: \"543508c5-ffbb-4f06-b840-30a83bf61621\") " pod="calico-system/calico-node-42drg" May 17 00:14:58.398536 kubelet[2670]: I0517 00:14:58.398010 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/543508c5-ffbb-4f06-b840-30a83bf61621-lib-modules\") pod \"calico-node-42drg\" (UID: \"543508c5-ffbb-4f06-b840-30a83bf61621\") " pod="calico-system/calico-node-42drg" May 17 00:14:58.398536 kubelet[2670]: I0517 00:14:58.398031 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/543508c5-ffbb-4f06-b840-30a83bf61621-cni-log-dir\") pod \"calico-node-42drg\" (UID: \"543508c5-ffbb-4f06-b840-30a83bf61621\") " pod="calico-system/calico-node-42drg" May 17 00:14:58.398536 kubelet[2670]: I0517 00:14:58.398047 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/543508c5-ffbb-4f06-b840-30a83bf61621-var-lib-calico\") pod \"calico-node-42drg\" (UID: \"543508c5-ffbb-4f06-b840-30a83bf61621\") " pod="calico-system/calico-node-42drg" May 17 00:14:58.398536 kubelet[2670]: I0517 00:14:58.398064 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2l5v\" (UniqueName: \"kubernetes.io/projected/543508c5-ffbb-4f06-b840-30a83bf61621-kube-api-access-v2l5v\") pod \"calico-node-42drg\" (UID: \"543508c5-ffbb-4f06-b840-30a83bf61621\") " pod="calico-system/calico-node-42drg" May 17 00:14:58.398773 kubelet[2670]: I0517 00:14:58.398084 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" 
(UniqueName: \"kubernetes.io/host-path/543508c5-ffbb-4f06-b840-30a83bf61621-flexvol-driver-host\") pod \"calico-node-42drg\" (UID: \"543508c5-ffbb-4f06-b840-30a83bf61621\") " pod="calico-system/calico-node-42drg" May 17 00:14:58.398773 kubelet[2670]: I0517 00:14:58.398098 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/543508c5-ffbb-4f06-b840-30a83bf61621-policysync\") pod \"calico-node-42drg\" (UID: \"543508c5-ffbb-4f06-b840-30a83bf61621\") " pod="calico-system/calico-node-42drg" May 17 00:14:58.398773 kubelet[2670]: I0517 00:14:58.398114 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/543508c5-ffbb-4f06-b840-30a83bf61621-cni-bin-dir\") pod \"calico-node-42drg\" (UID: \"543508c5-ffbb-4f06-b840-30a83bf61621\") " pod="calico-system/calico-node-42drg" May 17 00:14:58.398773 kubelet[2670]: I0517 00:14:58.398129 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/543508c5-ffbb-4f06-b840-30a83bf61621-var-run-calico\") pod \"calico-node-42drg\" (UID: \"543508c5-ffbb-4f06-b840-30a83bf61621\") " pod="calico-system/calico-node-42drg" May 17 00:14:58.398773 kubelet[2670]: I0517 00:14:58.398143 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/543508c5-ffbb-4f06-b840-30a83bf61621-tigera-ca-bundle\") pod \"calico-node-42drg\" (UID: \"543508c5-ffbb-4f06-b840-30a83bf61621\") " pod="calico-system/calico-node-42drg" May 17 00:14:58.398893 kubelet[2670]: I0517 00:14:58.398162 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/543508c5-ffbb-4f06-b840-30a83bf61621-xtables-lock\") pod \"calico-node-42drg\" (UID: \"543508c5-ffbb-4f06-b840-30a83bf61621\") " pod="calico-system/calico-node-42drg" May 17 00:14:58.398893 kubelet[2670]: I0517 00:14:58.398180 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/543508c5-ffbb-4f06-b840-30a83bf61621-cni-net-dir\") pod \"calico-node-42drg\" (UID: \"543508c5-ffbb-4f06-b840-30a83bf61621\") " pod="calico-system/calico-node-42drg" May 17 00:14:58.493945 containerd[1479]: time="2025-05-17T00:14:58.493562441Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-ff4f9479b-sj7q5,Uid:660b38c5-8a13-4a5f-879c-78f7cdb4a539,Namespace:calico-system,Attempt:0,}" May 17 00:14:58.502151 kubelet[2670]: E0517 00:14:58.502100 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:58.502151 kubelet[2670]: W0517 00:14:58.502140 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:58.504061 kubelet[2670]: E0517 00:14:58.502173 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:14:58.504061 kubelet[2670]: E0517 00:14:58.502447 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:58.504061 kubelet[2670]: W0517 00:14:58.502458 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:58.504061 kubelet[2670]: E0517 00:14:58.502471 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:14:58.504061 kubelet[2670]: E0517 00:14:58.502916 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:58.504061 kubelet[2670]: W0517 00:14:58.502930 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:58.504061 kubelet[2670]: E0517 00:14:58.502945 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:14:58.504250 kubelet[2670]: E0517 00:14:58.504066 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:58.504250 kubelet[2670]: W0517 00:14:58.504083 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:58.504250 kubelet[2670]: E0517 00:14:58.504100 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:14:58.504374 kubelet[2670]: E0517 00:14:58.504342 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:58.504374 kubelet[2670]: W0517 00:14:58.504359 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:58.504374 kubelet[2670]: E0517 00:14:58.504372 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:14:58.504650 kubelet[2670]: E0517 00:14:58.504631 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:58.504650 kubelet[2670]: W0517 00:14:58.504645 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:58.504962 kubelet[2670]: E0517 00:14:58.504800 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:58.504962 kubelet[2670]: W0517 00:14:58.504812 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:58.504962 kubelet[2670]: E0517 00:14:58.504948 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:58.504962 kubelet[2670]: W0517 00:14:58.504954 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:58.505086 kubelet[2670]: E0517 00:14:58.505071 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:58.505086 kubelet[2670]: W0517 00:14:58.505077 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:58.505201 kubelet[2670]: E0517 00:14:58.505177 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:14:58.505283 kubelet[2670]: E0517 00:14:58.505212 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:58.505379 kubelet[2670]: W0517 00:14:58.505363 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:58.505542 kubelet[2670]: E0517 00:14:58.505457 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:14:58.509658 kubelet[2670]: E0517 00:14:58.505236 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:14:58.511993 kubelet[2670]: E0517 00:14:58.505221 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:14:58.511993 kubelet[2670]: E0517 00:14:58.505231 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:14:58.512340 kubelet[2670]: E0517 00:14:58.512321 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:58.513771 kubelet[2670]: W0517 00:14:58.513743 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:58.516519 kubelet[2670]: E0517 00:14:58.515617 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:14:58.516519 kubelet[2670]: E0517 00:14:58.515962 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:58.516519 kubelet[2670]: W0517 00:14:58.515973 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:58.516519 kubelet[2670]: E0517 00:14:58.516058 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:14:58.516519 kubelet[2670]: E0517 00:14:58.516180 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:58.516519 kubelet[2670]: W0517 00:14:58.516188 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:58.516519 kubelet[2670]: E0517 00:14:58.516291 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:14:58.516519 kubelet[2670]: E0517 00:14:58.516394 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:58.516519 kubelet[2670]: W0517 00:14:58.516400 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:58.516826 kubelet[2670]: E0517 00:14:58.516774 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:14:58.517253 kubelet[2670]: E0517 00:14:58.517037 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:58.517253 kubelet[2670]: W0517 00:14:58.517048 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:58.517253 kubelet[2670]: E0517 00:14:58.517134 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:14:58.517253 kubelet[2670]: E0517 00:14:58.517232 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:58.517253 kubelet[2670]: W0517 00:14:58.517239 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:58.517549 kubelet[2670]: E0517 00:14:58.517492 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:14:58.518545 kubelet[2670]: E0517 00:14:58.518274 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:58.518545 kubelet[2670]: W0517 00:14:58.518290 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:58.518545 kubelet[2670]: E0517 00:14:58.518378 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:14:58.518888 kubelet[2670]: E0517 00:14:58.518739 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:58.518888 kubelet[2670]: W0517 00:14:58.518750 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:58.519250 kubelet[2670]: E0517 00:14:58.519124 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:14:58.521104 kubelet[2670]: E0517 00:14:58.519863 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:58.521104 kubelet[2670]: W0517 00:14:58.519879 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:58.521375 kubelet[2670]: E0517 00:14:58.521239 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:14:58.521533 kubelet[2670]: E0517 00:14:58.521518 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:58.521615 kubelet[2670]: W0517 00:14:58.521591 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:58.521890 kubelet[2670]: E0517 00:14:58.521774 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:14:58.522014 kubelet[2670]: E0517 00:14:58.522003 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:58.522139 kubelet[2670]: W0517 00:14:58.522125 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:58.522466 kubelet[2670]: E0517 00:14:58.522411 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:14:58.524163 kubelet[2670]: E0517 00:14:58.524026 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:58.524163 kubelet[2670]: W0517 00:14:58.524046 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:58.526589 kubelet[2670]: E0517 00:14:58.526360 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:58.526589 kubelet[2670]: W0517 00:14:58.526378 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:58.526589 kubelet[2670]: E0517 00:14:58.526394 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:14:58.526589 kubelet[2670]: E0517 00:14:58.526438 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:14:58.536608 kubelet[2670]: E0517 00:14:58.536521 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:58.536608 kubelet[2670]: W0517 00:14:58.536546 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:58.536608 kubelet[2670]: E0517 00:14:58.536567 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:14:58.547453 containerd[1479]: time="2025-05-17T00:14:58.546577950Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:14:58.547453 containerd[1479]: time="2025-05-17T00:14:58.546641551Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:14:58.547453 containerd[1479]: time="2025-05-17T00:14:58.546652271Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:14:58.547453 containerd[1479]: time="2025-05-17T00:14:58.546745192Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:14:58.567993 systemd[1]: Started cri-containerd-e2b22d3948163d1cddb9e58affc8bf020a923c4899509bd97012e4d2ab43745d.scope - libcontainer container e2b22d3948163d1cddb9e58affc8bf020a923c4899509bd97012e4d2ab43745d. May 17 00:14:58.646696 kubelet[2670]: E0517 00:14:58.646094 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fhb48" podUID="bf41328e-d1ed-475d-9a4a-c70bc9451b6f" May 17 00:14:58.683440 containerd[1479]: time="2025-05-17T00:14:58.683229292Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-ff4f9479b-sj7q5,Uid:660b38c5-8a13-4a5f-879c-78f7cdb4a539,Namespace:calico-system,Attempt:0,} returns sandbox id \"e2b22d3948163d1cddb9e58affc8bf020a923c4899509bd97012e4d2ab43745d\"" May 17 00:14:58.689112 containerd[1479]: time="2025-05-17T00:14:58.688466606Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.0\"" May 17 00:14:58.690196 kubelet[2670]: E0517 00:14:58.690091 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:58.690196 kubelet[2670]: W0517 00:14:58.690111 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:58.690196 kubelet[2670]: E0517 00:14:58.690131 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:14:58.691670 kubelet[2670]: E0517 00:14:58.691646 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:58.692618 kubelet[2670]: W0517 00:14:58.692343 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:58.692618 kubelet[2670]: E0517 00:14:58.692409 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:14:58.693215 kubelet[2670]: E0517 00:14:58.693129 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:58.693215 kubelet[2670]: W0517 00:14:58.693145 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:58.693215 kubelet[2670]: E0517 00:14:58.693163 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:14:58.694293 kubelet[2670]: E0517 00:14:58.693846 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:58.694293 kubelet[2670]: W0517 00:14:58.693863 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:58.694293 kubelet[2670]: E0517 00:14:58.693879 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:14:58.694725 kubelet[2670]: E0517 00:14:58.694708 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:58.694812 kubelet[2670]: W0517 00:14:58.694800 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:58.695542 kubelet[2670]: E0517 00:14:58.695078 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:14:58.695974 kubelet[2670]: E0517 00:14:58.695860 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:58.695974 kubelet[2670]: W0517 00:14:58.695875 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:58.695974 kubelet[2670]: E0517 00:14:58.695889 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:14:58.696480 kubelet[2670]: E0517 00:14:58.696462 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:58.697027 kubelet[2670]: W0517 00:14:58.696588 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:58.697027 kubelet[2670]: E0517 00:14:58.696610 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:14:58.698551 kubelet[2670]: E0517 00:14:58.698529 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:58.698675 kubelet[2670]: W0517 00:14:58.698641 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:58.698984 kubelet[2670]: E0517 00:14:58.698662 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:14:58.699911 kubelet[2670]: E0517 00:14:58.699480 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:58.699911 kubelet[2670]: W0517 00:14:58.699494 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:58.699911 kubelet[2670]: E0517 00:14:58.699551 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:14:58.700937 kubelet[2670]: E0517 00:14:58.700709 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:58.700937 kubelet[2670]: W0517 00:14:58.700722 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:58.700937 kubelet[2670]: E0517 00:14:58.700735 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:14:58.702876 kubelet[2670]: E0517 00:14:58.702654 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:58.702876 kubelet[2670]: W0517 00:14:58.702667 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:58.702876 kubelet[2670]: E0517 00:14:58.702680 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:14:58.702945 containerd[1479]: time="2025-05-17T00:14:58.701151170Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-42drg,Uid:543508c5-ffbb-4f06-b840-30a83bf61621,Namespace:calico-system,Attempt:0,}" May 17 00:14:58.703600 kubelet[2670]: E0517 00:14:58.703532 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:58.703600 kubelet[2670]: W0517 00:14:58.703546 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:58.703600 kubelet[2670]: E0517 00:14:58.703559 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:14:58.704179 kubelet[2670]: E0517 00:14:58.704164 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:58.704383 kubelet[2670]: W0517 00:14:58.704249 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:58.704383 kubelet[2670]: E0517 00:14:58.704271 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:14:58.704383 kubelet[2670]: I0517 00:14:58.704295 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bf41328e-d1ed-475d-9a4a-c70bc9451b6f-kubelet-dir\") pod \"csi-node-driver-fhb48\" (UID: \"bf41328e-d1ed-475d-9a4a-c70bc9451b6f\") " pod="calico-system/csi-node-driver-fhb48" May 17 00:14:58.704719 kubelet[2670]: E0517 00:14:58.704706 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:58.704867 kubelet[2670]: W0517 00:14:58.704771 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:58.704867 kubelet[2670]: E0517 00:14:58.704795 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:14:58.704867 kubelet[2670]: I0517 00:14:58.704815 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/bf41328e-d1ed-475d-9a4a-c70bc9451b6f-socket-dir\") pod \"csi-node-driver-fhb48\" (UID: \"bf41328e-d1ed-475d-9a4a-c70bc9451b6f\") " pod="calico-system/csi-node-driver-fhb48" May 17 00:14:58.706576 kubelet[2670]: E0517 00:14:58.706310 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:58.706576 kubelet[2670]: W0517 00:14:58.706328 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:58.706576 kubelet[2670]: E0517 00:14:58.706353 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:14:58.706901 kubelet[2670]: E0517 00:14:58.706871 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:58.706901 kubelet[2670]: W0517 00:14:58.706886 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:58.707578 kubelet[2670]: E0517 00:14:58.707525 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:14:58.707945 kubelet[2670]: E0517 00:14:58.707828 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:58.707945 kubelet[2670]: W0517 00:14:58.707840 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:58.708053 kubelet[2670]: E0517 00:14:58.708031 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:14:58.708481 kubelet[2670]: E0517 00:14:58.708349 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:58.708481 kubelet[2670]: W0517 00:14:58.708361 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:58.708688 kubelet[2670]: E0517 00:14:58.708623 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:14:58.708688 kubelet[2670]: I0517 00:14:58.708665 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/bf41328e-d1ed-475d-9a4a-c70bc9451b6f-registration-dir\") pod \"csi-node-driver-fhb48\" (UID: \"bf41328e-d1ed-475d-9a4a-c70bc9451b6f\") " pod="calico-system/csi-node-driver-fhb48" May 17 00:14:58.710164 kubelet[2670]: E0517 00:14:58.710065 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:58.710164 kubelet[2670]: W0517 00:14:58.710079 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:58.710909 kubelet[2670]: E0517 00:14:58.710567 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:58.710909 kubelet[2670]: W0517 00:14:58.710582 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:58.710909 kubelet[2670]: E0517 00:14:58.710872 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:14:58.710909 kubelet[2670]: E0517 00:14:58.710888 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:14:58.711218 kubelet[2670]: E0517 00:14:58.711206 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:58.711285 kubelet[2670]: W0517 00:14:58.711274 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:58.711397 kubelet[2670]: E0517 00:14:58.711383 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:14:58.712059 kubelet[2670]: E0517 00:14:58.712042 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:58.712253 kubelet[2670]: W0517 00:14:58.712238 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:58.713208 kubelet[2670]: E0517 00:14:58.713084 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:14:58.714626 kubelet[2670]: E0517 00:14:58.714415 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:58.714626 kubelet[2670]: W0517 00:14:58.714540 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:58.715587 kubelet[2670]: E0517 00:14:58.714770 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:14:58.716088 kubelet[2670]: E0517 00:14:58.716074 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:58.716172 kubelet[2670]: W0517 00:14:58.716154 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:58.716302 kubelet[2670]: E0517 00:14:58.716287 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:14:58.717715 kubelet[2670]: E0517 00:14:58.717608 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:58.717715 kubelet[2670]: W0517 00:14:58.717623 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:58.717902 kubelet[2670]: E0517 00:14:58.717855 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:14:58.719294 kubelet[2670]: E0517 00:14:58.719023 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:58.719294 kubelet[2670]: W0517 00:14:58.719051 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:58.719294 kubelet[2670]: E0517 00:14:58.719068 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:14:58.719683 kubelet[2670]: E0517 00:14:58.719530 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:58.721638 kubelet[2670]: W0517 00:14:58.719743 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:58.721638 kubelet[2670]: E0517 00:14:58.721532 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:14:58.721897 kubelet[2670]: E0517 00:14:58.721883 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:58.722077 kubelet[2670]: W0517 00:14:58.722063 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:58.722135 kubelet[2670]: E0517 00:14:58.722125 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:14:58.723574 kubelet[2670]: E0517 00:14:58.723558 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:58.724019 kubelet[2670]: W0517 00:14:58.723650 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:58.724019 kubelet[2670]: E0517 00:14:58.723671 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:14:58.748123 containerd[1479]: time="2025-05-17T00:14:58.746827631Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:14:58.748395 containerd[1479]: time="2025-05-17T00:14:58.746906752Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:14:58.748395 containerd[1479]: time="2025-05-17T00:14:58.746923152Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:14:58.748395 containerd[1479]: time="2025-05-17T00:14:58.747016472Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:14:58.792650 systemd[1]: Started cri-containerd-f711b02169b7c380877771fbc92dbcc17d5638184a678134d8ef1643d8a338b2.scope - libcontainer container f711b02169b7c380877771fbc92dbcc17d5638184a678134d8ef1643d8a338b2. May 17 00:14:58.827179 kubelet[2670]: E0517 00:14:58.826974 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:58.827179 kubelet[2670]: W0517 00:14:58.826996 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:58.827179 kubelet[2670]: E0517 00:14:58.827019 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:14:58.827179 kubelet[2670]: I0517 00:14:58.827048 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/bf41328e-d1ed-475d-9a4a-c70bc9451b6f-varrun\") pod \"csi-node-driver-fhb48\" (UID: \"bf41328e-d1ed-475d-9a4a-c70bc9451b6f\") " pod="calico-system/csi-node-driver-fhb48" May 17 00:14:58.828149 kubelet[2670]: E0517 00:14:58.827949 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:58.828149 kubelet[2670]: W0517 00:14:58.827970 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:58.828149 kubelet[2670]: E0517 00:14:58.827993 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:14:58.829768 kubelet[2670]: E0517 00:14:58.829554 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:58.829768 kubelet[2670]: W0517 00:14:58.829576 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:58.829768 kubelet[2670]: E0517 00:14:58.829599 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:14:58.830027 kubelet[2670]: E0517 00:14:58.830012 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:58.830172 kubelet[2670]: W0517 00:14:58.830084 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:58.830341 kubelet[2670]: E0517 00:14:58.830260 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:14:58.830341 kubelet[2670]: I0517 00:14:58.830294 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24kh2\" (UniqueName: \"kubernetes.io/projected/bf41328e-d1ed-475d-9a4a-c70bc9451b6f-kube-api-access-24kh2\") pod \"csi-node-driver-fhb48\" (UID: \"bf41328e-d1ed-475d-9a4a-c70bc9451b6f\") " pod="calico-system/csi-node-driver-fhb48" May 17 00:14:58.831084 kubelet[2670]: E0517 00:14:58.830930 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:58.831084 kubelet[2670]: W0517 00:14:58.830948 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:58.831084 kubelet[2670]: E0517 00:14:58.831038 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:14:58.831339 kubelet[2670]: E0517 00:14:58.831280 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:58.831339 kubelet[2670]: W0517 00:14:58.831292 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:58.831469 kubelet[2670]: E0517 00:14:58.831383 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:14:58.835671 kubelet[2670]: E0517 00:14:58.835552 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:58.835671 kubelet[2670]: W0517 00:14:58.835575 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:58.835892 kubelet[2670]: E0517 00:14:58.835789 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:14:58.836115 kubelet[2670]: E0517 00:14:58.836020 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:58.836115 kubelet[2670]: W0517 00:14:58.836030 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:58.836251 kubelet[2670]: E0517 00:14:58.836194 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:14:58.836459 kubelet[2670]: E0517 00:14:58.836393 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:58.836459 kubelet[2670]: W0517 00:14:58.836404 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:58.836748 kubelet[2670]: E0517 00:14:58.836636 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:14:58.836901 kubelet[2670]: E0517 00:14:58.836826 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:58.836901 kubelet[2670]: W0517 00:14:58.836837 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:58.837121 kubelet[2670]: E0517 00:14:58.837041 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:14:58.837368 kubelet[2670]: E0517 00:14:58.837281 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:58.837368 kubelet[2670]: W0517 00:14:58.837293 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:58.837368 kubelet[2670]: E0517 00:14:58.837306 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:14:58.837805 kubelet[2670]: E0517 00:14:58.837672 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:58.837805 kubelet[2670]: W0517 00:14:58.837683 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:58.837805 kubelet[2670]: E0517 00:14:58.837697 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:14:58.838157 kubelet[2670]: E0517 00:14:58.838029 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:58.838157 kubelet[2670]: W0517 00:14:58.838040 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:58.838157 kubelet[2670]: E0517 00:14:58.838057 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:14:58.838536 kubelet[2670]: E0517 00:14:58.838447 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:58.838536 kubelet[2670]: W0517 00:14:58.838461 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:58.838748 kubelet[2670]: E0517 00:14:58.838634 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:14:58.838908 kubelet[2670]: E0517 00:14:58.838852 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:58.838908 kubelet[2670]: W0517 00:14:58.838863 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:58.839014 kubelet[2670]: E0517 00:14:58.838952 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:14:58.839333 kubelet[2670]: E0517 00:14:58.839223 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:58.839333 kubelet[2670]: W0517 00:14:58.839235 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:58.839333 kubelet[2670]: E0517 00:14:58.839250 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:14:58.839884 kubelet[2670]: E0517 00:14:58.839720 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:58.839884 kubelet[2670]: W0517 00:14:58.839734 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:58.839884 kubelet[2670]: E0517 00:14:58.839754 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:14:58.840417 kubelet[2670]: E0517 00:14:58.840246 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:58.840417 kubelet[2670]: W0517 00:14:58.840258 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:58.840417 kubelet[2670]: E0517 00:14:58.840271 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:14:58.840946 kubelet[2670]: E0517 00:14:58.840799 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:58.840946 kubelet[2670]: W0517 00:14:58.840813 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:58.840946 kubelet[2670]: E0517 00:14:58.840828 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:14:58.841325 kubelet[2670]: E0517 00:14:58.841222 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:58.841325 kubelet[2670]: W0517 00:14:58.841233 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:58.841554 kubelet[2670]: E0517 00:14:58.841443 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:14:58.841780 kubelet[2670]: E0517 00:14:58.841732 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:58.841780 kubelet[2670]: W0517 00:14:58.841743 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:58.841780 kubelet[2670]: E0517 00:14:58.841755 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:14:58.843747 containerd[1479]: time="2025-05-17T00:14:58.843698710Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-42drg,Uid:543508c5-ffbb-4f06-b840-30a83bf61621,Namespace:calico-system,Attempt:0,} returns sandbox id \"f711b02169b7c380877771fbc92dbcc17d5638184a678134d8ef1643d8a338b2\"" May 17 00:14:58.940621 kubelet[2670]: E0517 00:14:58.940549 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:58.940621 kubelet[2670]: W0517 00:14:58.940580 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:58.940621 kubelet[2670]: E0517 00:14:58.940603 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:14:58.942246 kubelet[2670]: E0517 00:14:58.942213 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:58.942246 kubelet[2670]: W0517 00:14:58.942239 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:58.942663 kubelet[2670]: E0517 00:14:58.942293 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:14:58.943052 kubelet[2670]: E0517 00:14:58.943011 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:58.943052 kubelet[2670]: W0517 00:14:58.943031 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:58.943198 kubelet[2670]: E0517 00:14:58.943111 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:14:58.943378 kubelet[2670]: E0517 00:14:58.943362 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:58.943378 kubelet[2670]: W0517 00:14:58.943375 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:58.943602 kubelet[2670]: E0517 00:14:58.943391 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:14:58.943645 kubelet[2670]: E0517 00:14:58.943636 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:58.943711 kubelet[2670]: W0517 00:14:58.943646 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:58.943711 kubelet[2670]: E0517 00:14:58.943666 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:14:58.943846 kubelet[2670]: E0517 00:14:58.943832 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:58.943846 kubelet[2670]: W0517 00:14:58.943843 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:58.943915 kubelet[2670]: E0517 00:14:58.943857 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:14:58.944072 kubelet[2670]: E0517 00:14:58.944060 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:58.944072 kubelet[2670]: W0517 00:14:58.944070 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:58.944163 kubelet[2670]: E0517 00:14:58.944083 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:14:58.944252 kubelet[2670]: E0517 00:14:58.944238 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:58.944252 kubelet[2670]: W0517 00:14:58.944248 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:58.944327 kubelet[2670]: E0517 00:14:58.944260 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:14:58.944403 kubelet[2670]: E0517 00:14:58.944385 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:58.944403 kubelet[2670]: W0517 00:14:58.944399 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:58.944539 kubelet[2670]: E0517 00:14:58.944408 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:14:58.944643 kubelet[2670]: E0517 00:14:58.944630 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:58.944643 kubelet[2670]: W0517 00:14:58.944640 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:58.944754 kubelet[2670]: E0517 00:14:58.944651 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:14:58.961843 kubelet[2670]: E0517 00:14:58.961803 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:58.961843 kubelet[2670]: W0517 00:14:58.961832 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:58.961982 kubelet[2670]: E0517 00:14:58.961853 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:15:00.055347 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3725201377.mount: Deactivated successfully. May 17 00:15:00.586466 containerd[1479]: time="2025-05-17T00:15:00.586399567Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:15:00.587461 containerd[1479]: time="2025-05-17T00:15:00.587324453Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.0: active requests=0, bytes read=33020269" May 17 00:15:00.589370 containerd[1479]: time="2025-05-17T00:15:00.589281666Z" level=info msg="ImageCreate event name:\"sha256:05ca98cdd7b8267a0dc5550048c0a195c8d42f85d92f090a669493485d8a6beb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:15:00.593995 containerd[1479]: time="2025-05-17T00:15:00.592150044Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d282f6c773c4631b9dc8379eb093c54ca34c7728d55d6509cb45da5e1f5baf8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:15:00.593995 containerd[1479]: time="2025-05-17T00:15:00.593460452Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.0\" with image id \"sha256:05ca98cdd7b8267a0dc5550048c0a195c8d42f85d92f090a669493485d8a6beb\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d282f6c773c4631b9dc8379eb093c54ca34c7728d55d6509cb45da5e1f5baf8f\", size \"33020123\" in 1.904772005s" May 17 00:15:00.593995 containerd[1479]: time="2025-05-17T00:15:00.593538893Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.0\" returns image reference \"sha256:05ca98cdd7b8267a0dc5550048c0a195c8d42f85d92f090a669493485d8a6beb\"" May 17 00:15:00.595270 containerd[1479]: time="2025-05-17T00:15:00.595225543Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\"" May 17 00:15:00.623564 containerd[1479]: time="2025-05-17T00:15:00.623494321Z" level=info msg="CreateContainer within sandbox \"e2b22d3948163d1cddb9e58affc8bf020a923c4899509bd97012e4d2ab43745d\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 17 00:15:00.643569 containerd[1479]: time="2025-05-17T00:15:00.642905084Z" level=info msg="CreateContainer within sandbox \"e2b22d3948163d1cddb9e58affc8bf020a923c4899509bd97012e4d2ab43745d\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"183e981d320f4cb98b706a661b925d610dfdec0a901112c315932f359ad19250\"" May 17 00:15:00.646349 containerd[1479]: time="2025-05-17T00:15:00.646291465Z" level=info msg="StartContainer for \"183e981d320f4cb98b706a661b925d610dfdec0a901112c315932f359ad19250\"" May 17 00:15:00.692682 systemd[1]: Started cri-containerd-183e981d320f4cb98b706a661b925d610dfdec0a901112c315932f359ad19250.scope - libcontainer container 183e981d320f4cb98b706a661b925d610dfdec0a901112c315932f359ad19250. 
May 17 00:15:00.740875 containerd[1479]: time="2025-05-17T00:15:00.740764661Z" level=info msg="StartContainer for \"183e981d320f4cb98b706a661b925d610dfdec0a901112c315932f359ad19250\" returns successfully" May 17 00:15:00.754084 kubelet[2670]: E0517 00:15:00.754031 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fhb48" podUID="bf41328e-d1ed-475d-9a4a-c70bc9451b6f" May 17 00:15:00.939855 kubelet[2670]: E0517 00:15:00.939733 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:15:00.939855 kubelet[2670]: W0517 00:15:00.939762 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:15:00.939855 kubelet[2670]: E0517 00:15:00.939786 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:15:00.942680 kubelet[2670]: E0517 00:15:00.942642 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:15:00.942823 kubelet[2670]: W0517 00:15:00.942692 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:15:00.942823 kubelet[2670]: E0517 00:15:00.942780 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:15:00.943120 kubelet[2670]: E0517 00:15:00.943102 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:15:00.943120 kubelet[2670]: W0517 00:15:00.943118 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:15:00.943201 kubelet[2670]: E0517 00:15:00.943134 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:15:00.943335 kubelet[2670]: E0517 00:15:00.943320 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:15:00.943335 kubelet[2670]: W0517 00:15:00.943331 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:15:00.943399 kubelet[2670]: E0517 00:15:00.943341 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:15:00.943627 kubelet[2670]: E0517 00:15:00.943580 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:15:00.943627 kubelet[2670]: W0517 00:15:00.943596 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:15:00.943627 kubelet[2670]: E0517 00:15:00.943608 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:15:00.943810 kubelet[2670]: E0517 00:15:00.943795 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:15:00.943810 kubelet[2670]: W0517 00:15:00.943806 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:15:00.943875 kubelet[2670]: E0517 00:15:00.943816 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:15:00.943988 kubelet[2670]: E0517 00:15:00.943965 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:15:00.943988 kubelet[2670]: W0517 00:15:00.943977 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:15:00.943988 kubelet[2670]: E0517 00:15:00.943985 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:15:00.944617 kubelet[2670]: E0517 00:15:00.944136 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:15:00.944617 kubelet[2670]: W0517 00:15:00.944144 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:15:00.944617 kubelet[2670]: E0517 00:15:00.944151 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:15:00.944617 kubelet[2670]: E0517 00:15:00.944322 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:15:00.944617 kubelet[2670]: W0517 00:15:00.944331 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:15:00.944617 kubelet[2670]: E0517 00:15:00.944340 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:15:01.891173 kubelet[2670]: I0517 00:15:01.891057 2670 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 00:15:01.951649 kubelet[2670]: E0517 00:15:01.951605 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:15:01.951649 kubelet[2670]: W0517 00:15:01.951638 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:15:01.951869 kubelet[2670]: E0517 00:15:01.951672 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
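The repeating triplet above is the kubelet's periodic FlexVolume probe: it executes /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the argument init, gets empty stdout because the binary is absent, and unmarshalling an empty string fails with "unexpected end of JSON input". The sketch below is illustrative only, not the real nodeagent~uds driver; the DriverStatus shape follows the documented FlexVolume convention of a JSON status object printed on stdout.

// Minimal sketch of a FlexVolume driver that would satisfy the kubelet's
// "init" probe. A FlexVolume driver is any executable under the plugin
// directory; the kubelet invokes "<driver> init" at probe time and
// unmarshals its stdout as JSON. An empty stdout (here: a missing
// executable) yields exactly the errors logged above.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// DriverStatus mirrors the status object the kubelet's driver-call
// machinery expects back from every FlexVolume invocation.
type DriverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	if len(os.Args) > 1 && os.Args[1] == "init" {
		out, _ := json.Marshal(DriverStatus{
			Status:       "Success",
			Capabilities: map[string]bool{"attach": false},
		})
		fmt.Println(string(out))
		return
	}
	// Unimplemented calls report "Not supported" per the FlexVolume convention.
	out, _ := json.Marshal(DriverStatus{Status: "Not supported"})
	fmt.Println(string(out))
}

Installed as an executable at the probed path, a driver like this would answer the init call; until then, the probe errors are recurring noise from a plugin directory with no usable binary.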
May 17 00:15:02.207933 containerd[1479]: time="2025-05-17T00:15:02.207625078Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:15:02.212180 containerd[1479]: time="2025-05-17T00:15:02.212100145Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0: active requests=0, bytes read=4264304" May 17 00:15:02.213865 containerd[1479]: time="2025-05-17T00:15:02.213543153Z" level=info msg="ImageCreate event name:\"sha256:080eaf4c238c85534b61055c31b109c96ce3d20075391e58988541a442c7c701\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:15:02.216771 containerd[1479]: time="2025-05-17T00:15:02.216703332Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:ce76dd87f11d3fd0054c35ad2e0e9f833748d007f77a9bfe859d0ddcb66fcb2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:15:02.217930 containerd[1479]: time="2025-05-17T00:15:02.217874340Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" with image id \"sha256:080eaf4c238c85534b61055c31b109c96ce3d20075391e58988541a442c7c701\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:ce76dd87f11d3fd0054c35ad2e0e9f833748d007f77a9bfe859d0ddcb66fcb2c\", size \"5633505\" in 1.622602516s" May 17 00:15:02.218032 containerd[1479]: time="2025-05-17T00:15:02.217935220Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" returns image reference \"sha256:080eaf4c238c85534b61055c31b109c96ce3d20075391e58988541a442c7c701\"" May 17 00:15:02.222709 containerd[1479]: time="2025-05-17T00:15:02.222447087Z" level=info msg="CreateContainer within sandbox \"f711b02169b7c380877771fbc92dbcc17d5638184a678134d8ef1643d8a338b2\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 17 00:15:02.242206 containerd[1479]: time="2025-05-17T00:15:02.242029085Z" level=info msg="CreateContainer within sandbox 
\"f711b02169b7c380877771fbc92dbcc17d5638184a678134d8ef1643d8a338b2\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"cd003fde8dcfc674d77ad38327b1465473f0cabd5f583f43abc14acb1d05cda0\"" May 17 00:15:02.243634 containerd[1479]: time="2025-05-17T00:15:02.243543334Z" level=info msg="StartContainer for \"cd003fde8dcfc674d77ad38327b1465473f0cabd5f583f43abc14acb1d05cda0\"" May 17 00:15:02.285647 systemd[1]: Started cri-containerd-cd003fde8dcfc674d77ad38327b1465473f0cabd5f583f43abc14acb1d05cda0.scope - libcontainer container cd003fde8dcfc674d77ad38327b1465473f0cabd5f583f43abc14acb1d05cda0. May 17 00:15:02.322474 containerd[1479]: time="2025-05-17T00:15:02.320960242Z" level=info msg="StartContainer for \"cd003fde8dcfc674d77ad38327b1465473f0cabd5f583f43abc14acb1d05cda0\" returns successfully" May 17 00:15:02.336121 systemd[1]: cri-containerd-cd003fde8dcfc674d77ad38327b1465473f0cabd5f583f43abc14acb1d05cda0.scope: Deactivated successfully. May 17 00:15:02.363023 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cd003fde8dcfc674d77ad38327b1465473f0cabd5f583f43abc14acb1d05cda0-rootfs.mount: Deactivated successfully. May 17 00:15:02.523365 containerd[1479]: time="2025-05-17T00:15:02.523272623Z" level=info msg="shim disconnected" id=cd003fde8dcfc674d77ad38327b1465473f0cabd5f583f43abc14acb1d05cda0 namespace=k8s.io May 17 00:15:02.523365 containerd[1479]: time="2025-05-17T00:15:02.523357423Z" level=warning msg="cleaning up after shim disconnected" id=cd003fde8dcfc674d77ad38327b1465473f0cabd5f583f43abc14acb1d05cda0 namespace=k8s.io May 17 00:15:02.523365 containerd[1479]: time="2025-05-17T00:15:02.523367983Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 17 00:15:02.754097 kubelet[2670]: E0517 00:15:02.753724 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fhb48" podUID="bf41328e-d1ed-475d-9a4a-c70bc9451b6f" May 17 00:15:02.899673 containerd[1479]: time="2025-05-17T00:15:02.898808649Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.0\"" May 17 00:15:02.922547 kubelet[2670]: I0517 00:15:02.922464 2670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-ff4f9479b-sj7q5" podStartSLOduration=3.015331973 podStartE2EDuration="4.922442872s" podCreationTimestamp="2025-05-17 00:14:58 +0000 UTC" firstStartedPulling="2025-05-17 00:14:58.687988003 +0000 UTC m=+23.064215053" lastFinishedPulling="2025-05-17 00:15:00.595098902 +0000 UTC m=+24.971325952" observedRunningTime="2025-05-17 00:15:00.914267995 +0000 UTC m=+25.290495005" watchObservedRunningTime="2025-05-17 00:15:02.922442872 +0000 UTC m=+27.298669922" May 17 00:15:04.754666 kubelet[2670]: E0517 00:15:04.754081 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fhb48" podUID="bf41328e-d1ed-475d-9a4a-c70bc9451b6f" May 17 00:15:06.753255 kubelet[2670]: E0517 00:15:06.753199 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" 
pod="calico-system/csi-node-driver-fhb48" podUID="bf41328e-d1ed-475d-9a4a-c70bc9451b6f" May 17 00:15:06.938542 containerd[1479]: time="2025-05-17T00:15:06.937353201Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:15:06.939561 containerd[1479]: time="2025-05-17T00:15:06.939460293Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.0: active requests=0, bytes read=65748976" May 17 00:15:06.940038 containerd[1479]: time="2025-05-17T00:15:06.939995336Z" level=info msg="ImageCreate event name:\"sha256:0a1b3d5412de2974bc057a3463a132f935c307bc06d5b990ad54031e1f5a351d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:15:06.943347 containerd[1479]: time="2025-05-17T00:15:06.943281514Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:3dd06656abdc03fbd51782d5f6fe4d70e6825a1c0c5bce2a165bbd2ff9e0f7df\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:15:06.944958 containerd[1479]: time="2025-05-17T00:15:06.944895603Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.0\" with image id \"sha256:0a1b3d5412de2974bc057a3463a132f935c307bc06d5b990ad54031e1f5a351d\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:3dd06656abdc03fbd51782d5f6fe4d70e6825a1c0c5bce2a165bbd2ff9e0f7df\", size \"67118217\" in 4.046043034s" May 17 00:15:06.945535 containerd[1479]: time="2025-05-17T00:15:06.945086204Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.0\" returns image reference \"sha256:0a1b3d5412de2974bc057a3463a132f935c307bc06d5b990ad54031e1f5a351d\"" May 17 00:15:06.949218 containerd[1479]: time="2025-05-17T00:15:06.949181547Z" level=info msg="CreateContainer within sandbox \"f711b02169b7c380877771fbc92dbcc17d5638184a678134d8ef1643d8a338b2\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 17 00:15:06.969765 containerd[1479]: time="2025-05-17T00:15:06.969710421Z" level=info msg="CreateContainer within sandbox \"f711b02169b7c380877771fbc92dbcc17d5638184a678134d8ef1643d8a338b2\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"8f86f5fe110121d6ad443339c76d6b4396dd2a2ff8bf86c2601884352676c1cc\"" May 17 00:15:06.971085 containerd[1479]: time="2025-05-17T00:15:06.970461265Z" level=info msg="StartContainer for \"8f86f5fe110121d6ad443339c76d6b4396dd2a2ff8bf86c2601884352676c1cc\"" May 17 00:15:07.004667 systemd[1]: run-containerd-runc-k8s.io-8f86f5fe110121d6ad443339c76d6b4396dd2a2ff8bf86c2601884352676c1cc-runc.BKHGgL.mount: Deactivated successfully. May 17 00:15:07.014782 systemd[1]: Started cri-containerd-8f86f5fe110121d6ad443339c76d6b4396dd2a2ff8bf86c2601884352676c1cc.scope - libcontainer container 8f86f5fe110121d6ad443339c76d6b4396dd2a2ff8bf86c2601884352676c1cc. 
May 17 00:15:07.048308 containerd[1479]: time="2025-05-17T00:15:07.048254691Z" level=info msg="StartContainer for \"8f86f5fe110121d6ad443339c76d6b4396dd2a2ff8bf86c2601884352676c1cc\" returns successfully" May 17 00:15:07.598088 containerd[1479]: time="2025-05-17T00:15:07.597605112Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 17 00:15:07.598305 systemd[1]: cri-containerd-8f86f5fe110121d6ad443339c76d6b4396dd2a2ff8bf86c2601884352676c1cc.scope: Deactivated successfully. May 17 00:15:07.621911 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8f86f5fe110121d6ad443339c76d6b4396dd2a2ff8bf86c2601884352676c1cc-rootfs.mount: Deactivated successfully. May 17 00:15:07.710824 kubelet[2670]: I0517 00:15:07.708261 2670 kubelet_node_status.go:501] "Fast updating node status as it just became ready" May 17 00:15:07.725974 containerd[1479]: time="2025-05-17T00:15:07.725848969Z" level=info msg="shim disconnected" id=8f86f5fe110121d6ad443339c76d6b4396dd2a2ff8bf86c2601884352676c1cc namespace=k8s.io May 17 00:15:07.726303 containerd[1479]: time="2025-05-17T00:15:07.726263131Z" level=warning msg="cleaning up after shim disconnected" id=8f86f5fe110121d6ad443339c76d6b4396dd2a2ff8bf86c2601884352676c1cc namespace=k8s.io May 17 00:15:07.726470 containerd[1479]: time="2025-05-17T00:15:07.726413612Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 17 00:15:07.762924 kubelet[2670]: I0517 00:15:07.762887 2670 status_manager.go:890] "Failed to get status for pod" podUID="b30848da-8b46-4cdc-baaa-f3567b6377c3" pod="kube-system/coredns-668d6bf9bc-n4blv" err="pods \"coredns-668d6bf9bc-n4blv\" is forbidden: User \"system:node:ci-4081-3-3-n-16326e39d6\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4081-3-3-n-16326e39d6' and this object" May 17 00:15:07.764758 kubelet[2670]: W0517 00:15:07.764627 2670 reflector.go:569] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ci-4081-3-3-n-16326e39d6" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081-3-3-n-16326e39d6' and this object May 17 00:15:07.764758 kubelet[2670]: E0517 00:15:07.764681 2670 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:ci-4081-3-3-n-16326e39d6\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4081-3-3-n-16326e39d6' and this object" logger="UnhandledError" May 17 00:15:07.768271 systemd[1]: Created slice kubepods-burstable-podb30848da_8b46_4cdc_baaa_f3567b6377c3.slice - libcontainer container kubepods-burstable-podb30848da_8b46_4cdc_baaa_f3567b6377c3.slice. May 17 00:15:07.780473 systemd[1]: Created slice kubepods-burstable-poda7794f9e_6b8e_4656_8525_16c2f94584b5.slice - libcontainer container kubepods-burstable-poda7794f9e_6b8e_4656_8525_16c2f94584b5.slice. May 17 00:15:07.802921 systemd[1]: Created slice kubepods-besteffort-pod8165c08e_7c9f_40c3_8125_d662038241a2.slice - libcontainer container kubepods-besteffort-pod8165c08e_7c9f_40c3_8125_d662038241a2.slice. 
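The failed CNI reload above fires whenever something writes under /etc/cni/net.d before a loadable network config exists; containerd watches that directory and stays in the "cni plugin not initialized" state until a config file parses. A stdlib-only sketch of that discovery step (the extension list is an assumption based on common CNI conventions, not taken from the log):

// Scans the CNI config directory the way the reload path above does:
// no matching files means the runtime keeps reporting NetworkReady=false.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	confDir := "/etc/cni/net.d"
	var found []string
	for _, pattern := range []string{"*.conf", "*.conflist", "*.json"} {
		matches, err := filepath.Glob(filepath.Join(confDir, pattern))
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		found = append(found, matches...)
	}
	if len(found) == 0 {
		fmt.Printf("no network config found in %s: cni plugin not initialized\n", confDir)
		return
	}
	for _, f := range found {
		fmt.Println("candidate CNI config:", f)
	}
}

Once install-cni drops calico-kubeconfig and the conflist into /etc/cni/net.d, the same fs event triggers a reload that succeeds and the "network is not ready" pod errors stop.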
May 17 00:15:07.818753 kubelet[2670]: I0517 00:15:07.818165 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kk5mc\" (UniqueName: \"kubernetes.io/projected/a7794f9e-6b8e-4656-8525-16c2f94584b5-kube-api-access-kk5mc\") pod \"coredns-668d6bf9bc-wpn2m\" (UID: \"a7794f9e-6b8e-4656-8525-16c2f94584b5\") " pod="kube-system/coredns-668d6bf9bc-wpn2m" May 17 00:15:07.818753 kubelet[2670]: I0517 00:15:07.818683 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2e627f94-1f93-4753-9c4f-f74ed6f6b9da-whisker-ca-bundle\") pod \"whisker-66cc8f5467-bmsjq\" (UID: \"2e627f94-1f93-4753-9c4f-f74ed6f6b9da\") " pod="calico-system/whisker-66cc8f5467-bmsjq" May 17 00:15:07.818753 kubelet[2670]: I0517 00:15:07.818705 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfjtc\" (UniqueName: \"kubernetes.io/projected/753684c6-cd41-4791-9ed7-725f4728c2a4-kube-api-access-mfjtc\") pod \"calico-apiserver-6cbf8c7948-6pm57\" (UID: \"753684c6-cd41-4791-9ed7-725f4728c2a4\") " pod="calico-apiserver/calico-apiserver-6cbf8c7948-6pm57" May 17 00:15:07.818945 kubelet[2670]: I0517 00:15:07.818862 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qfqj\" (UniqueName: \"kubernetes.io/projected/2e627f94-1f93-4753-9c4f-f74ed6f6b9da-kube-api-access-7qfqj\") pod \"whisker-66cc8f5467-bmsjq\" (UID: \"2e627f94-1f93-4753-9c4f-f74ed6f6b9da\") " pod="calico-system/whisker-66cc8f5467-bmsjq" May 17 00:15:07.818945 kubelet[2670]: I0517 00:15:07.818887 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/a9b260fc-ff83-4de9-ac43-723c22c032c2-goldmane-key-pair\") pod \"goldmane-78d55f7ddc-fqppr\" (UID: \"a9b260fc-ff83-4de9-ac43-723c22c032c2\") " pod="calico-system/goldmane-78d55f7ddc-fqppr" May 17 00:15:07.818945 kubelet[2670]: I0517 00:15:07.818903 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nttbr\" (UniqueName: \"kubernetes.io/projected/8165c08e-7c9f-40c3-8125-d662038241a2-kube-api-access-nttbr\") pod \"calico-kube-controllers-7d6599b8b4-52gm9\" (UID: \"8165c08e-7c9f-40c3-8125-d662038241a2\") " pod="calico-system/calico-kube-controllers-7d6599b8b4-52gm9" May 17 00:15:07.818945 kubelet[2670]: I0517 00:15:07.818923 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b30848da-8b46-4cdc-baaa-f3567b6377c3-config-volume\") pod \"coredns-668d6bf9bc-n4blv\" (UID: \"b30848da-8b46-4cdc-baaa-f3567b6377c3\") " pod="kube-system/coredns-668d6bf9bc-n4blv" May 17 00:15:07.819051 kubelet[2670]: I0517 00:15:07.818948 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9b260fc-ff83-4de9-ac43-723c22c032c2-config\") pod \"goldmane-78d55f7ddc-fqppr\" (UID: \"a9b260fc-ff83-4de9-ac43-723c22c032c2\") " pod="calico-system/goldmane-78d55f7ddc-fqppr" May 17 00:15:07.819051 kubelet[2670]: I0517 00:15:07.818964 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcg88\" (UniqueName: 
\"kubernetes.io/projected/a9b260fc-ff83-4de9-ac43-723c22c032c2-kube-api-access-vcg88\") pod \"goldmane-78d55f7ddc-fqppr\" (UID: \"a9b260fc-ff83-4de9-ac43-723c22c032c2\") " pod="calico-system/goldmane-78d55f7ddc-fqppr" May 17 00:15:07.819051 kubelet[2670]: I0517 00:15:07.818987 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a7794f9e-6b8e-4656-8525-16c2f94584b5-config-volume\") pod \"coredns-668d6bf9bc-wpn2m\" (UID: \"a7794f9e-6b8e-4656-8525-16c2f94584b5\") " pod="kube-system/coredns-668d6bf9bc-wpn2m" May 17 00:15:07.819051 kubelet[2670]: I0517 00:15:07.819012 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a9b260fc-ff83-4de9-ac43-723c22c032c2-goldmane-ca-bundle\") pod \"goldmane-78d55f7ddc-fqppr\" (UID: \"a9b260fc-ff83-4de9-ac43-723c22c032c2\") " pod="calico-system/goldmane-78d55f7ddc-fqppr" May 17 00:15:07.819051 kubelet[2670]: I0517 00:15:07.819032 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/2e627f94-1f93-4753-9c4f-f74ed6f6b9da-whisker-backend-key-pair\") pod \"whisker-66cc8f5467-bmsjq\" (UID: \"2e627f94-1f93-4753-9c4f-f74ed6f6b9da\") " pod="calico-system/whisker-66cc8f5467-bmsjq" May 17 00:15:07.819162 kubelet[2670]: I0517 00:15:07.819051 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ceb2f628-4f33-47aa-8305-d46713261d40-calico-apiserver-certs\") pod \"calico-apiserver-6cbf8c7948-hk4z6\" (UID: \"ceb2f628-4f33-47aa-8305-d46713261d40\") " pod="calico-apiserver/calico-apiserver-6cbf8c7948-hk4z6" May 17 00:15:07.819162 kubelet[2670]: I0517 00:15:07.819072 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgph2\" (UniqueName: \"kubernetes.io/projected/ceb2f628-4f33-47aa-8305-d46713261d40-kube-api-access-kgph2\") pod \"calico-apiserver-6cbf8c7948-hk4z6\" (UID: \"ceb2f628-4f33-47aa-8305-d46713261d40\") " pod="calico-apiserver/calico-apiserver-6cbf8c7948-hk4z6" May 17 00:15:07.819162 kubelet[2670]: I0517 00:15:07.819088 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/753684c6-cd41-4791-9ed7-725f4728c2a4-calico-apiserver-certs\") pod \"calico-apiserver-6cbf8c7948-6pm57\" (UID: \"753684c6-cd41-4791-9ed7-725f4728c2a4\") " pod="calico-apiserver/calico-apiserver-6cbf8c7948-6pm57" May 17 00:15:07.819162 kubelet[2670]: I0517 00:15:07.819107 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8165c08e-7c9f-40c3-8125-d662038241a2-tigera-ca-bundle\") pod \"calico-kube-controllers-7d6599b8b4-52gm9\" (UID: \"8165c08e-7c9f-40c3-8125-d662038241a2\") " pod="calico-system/calico-kube-controllers-7d6599b8b4-52gm9" May 17 00:15:07.819162 kubelet[2670]: I0517 00:15:07.819127 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzrt5\" (UniqueName: \"kubernetes.io/projected/b30848da-8b46-4cdc-baaa-f3567b6377c3-kube-api-access-vzrt5\") pod \"coredns-668d6bf9bc-n4blv\" (UID: \"b30848da-8b46-4cdc-baaa-f3567b6377c3\") 
" pod="kube-system/coredns-668d6bf9bc-n4blv" May 17 00:15:07.820576 systemd[1]: Created slice kubepods-besteffort-pod753684c6_cd41_4791_9ed7_725f4728c2a4.slice - libcontainer container kubepods-besteffort-pod753684c6_cd41_4791_9ed7_725f4728c2a4.slice. May 17 00:15:07.833929 systemd[1]: Created slice kubepods-besteffort-podceb2f628_4f33_47aa_8305_d46713261d40.slice - libcontainer container kubepods-besteffort-podceb2f628_4f33_47aa_8305_d46713261d40.slice. May 17 00:15:07.845358 systemd[1]: Created slice kubepods-besteffort-poda9b260fc_ff83_4de9_ac43_723c22c032c2.slice - libcontainer container kubepods-besteffort-poda9b260fc_ff83_4de9_ac43_723c22c032c2.slice. May 17 00:15:07.855471 systemd[1]: Created slice kubepods-besteffort-pod2e627f94_1f93_4753_9c4f_f74ed6f6b9da.slice - libcontainer container kubepods-besteffort-pod2e627f94_1f93_4753_9c4f_f74ed6f6b9da.slice. May 17 00:15:07.916369 containerd[1479]: time="2025-05-17T00:15:07.916291202Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.0\"" May 17 00:15:08.118204 containerd[1479]: time="2025-05-17T00:15:08.118019684Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7d6599b8b4-52gm9,Uid:8165c08e-7c9f-40c3-8125-d662038241a2,Namespace:calico-system,Attempt:0,}" May 17 00:15:08.147378 containerd[1479]: time="2025-05-17T00:15:08.146420196Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cbf8c7948-hk4z6,Uid:ceb2f628-4f33-47aa-8305-d46713261d40,Namespace:calico-apiserver,Attempt:0,}" May 17 00:15:08.147378 containerd[1479]: time="2025-05-17T00:15:08.147132799Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cbf8c7948-6pm57,Uid:753684c6-cd41-4791-9ed7-725f4728c2a4,Namespace:calico-apiserver,Attempt:0,}" May 17 00:15:08.154534 containerd[1479]: time="2025-05-17T00:15:08.154461998Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-fqppr,Uid:a9b260fc-ff83-4de9-ac43-723c22c032c2,Namespace:calico-system,Attempt:0,}" May 17 00:15:08.160800 containerd[1479]: time="2025-05-17T00:15:08.160337630Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-66cc8f5467-bmsjq,Uid:2e627f94-1f93-4753-9c4f-f74ed6f6b9da,Namespace:calico-system,Attempt:0,}" May 17 00:15:08.263052 containerd[1479]: time="2025-05-17T00:15:08.262895175Z" level=error msg="Failed to destroy network for sandbox \"648017c36b137b985271eb435012508ebdfe0d83d22b71b5d13412539d19aac9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:15:08.263836 containerd[1479]: time="2025-05-17T00:15:08.263708619Z" level=error msg="encountered an error cleaning up failed sandbox \"648017c36b137b985271eb435012508ebdfe0d83d22b71b5d13412539d19aac9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:15:08.263836 containerd[1479]: time="2025-05-17T00:15:08.263782340Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7d6599b8b4-52gm9,Uid:8165c08e-7c9f-40c3-8125-d662038241a2,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"648017c36b137b985271eb435012508ebdfe0d83d22b71b5d13412539d19aac9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:15:08.265114 kubelet[2670]: E0517 00:15:08.264063 2670 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"648017c36b137b985271eb435012508ebdfe0d83d22b71b5d13412539d19aac9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:15:08.265114 kubelet[2670]: E0517 00:15:08.264132 2670 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"648017c36b137b985271eb435012508ebdfe0d83d22b71b5d13412539d19aac9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7d6599b8b4-52gm9" May 17 00:15:08.265114 kubelet[2670]: E0517 00:15:08.264153 2670 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"648017c36b137b985271eb435012508ebdfe0d83d22b71b5d13412539d19aac9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7d6599b8b4-52gm9" May 17 00:15:08.265448 kubelet[2670]: E0517 00:15:08.264204 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7d6599b8b4-52gm9_calico-system(8165c08e-7c9f-40c3-8125-d662038241a2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7d6599b8b4-52gm9_calico-system(8165c08e-7c9f-40c3-8125-d662038241a2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"648017c36b137b985271eb435012508ebdfe0d83d22b71b5d13412539d19aac9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7d6599b8b4-52gm9" podUID="8165c08e-7c9f-40c3-8125-d662038241a2" May 17 00:15:08.331161 containerd[1479]: time="2025-05-17T00:15:08.331024977Z" level=error msg="Failed to destroy network for sandbox \"86e052553ff0ac7699112bb45ed6da24f420b7a1edf650326c230579538e6418\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:15:08.331795 containerd[1479]: time="2025-05-17T00:15:08.331584700Z" level=error msg="Failed to destroy network for sandbox \"7d60180c4aec9b8c4dceff3978d7701133e0f03c00224dbe2bfe5cc62e984adc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:15:08.332172 containerd[1479]: time="2025-05-17T00:15:08.331733621Z" level=error msg="encountered an error cleaning up failed sandbox \"86e052553ff0ac7699112bb45ed6da24f420b7a1edf650326c230579538e6418\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:15:08.332172 containerd[1479]: time="2025-05-17T00:15:08.331989022Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cbf8c7948-hk4z6,Uid:ceb2f628-4f33-47aa-8305-d46713261d40,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"86e052553ff0ac7699112bb45ed6da24f420b7a1edf650326c230579538e6418\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:15:08.332353 kubelet[2670]: E0517 00:15:08.332192 2670 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"86e052553ff0ac7699112bb45ed6da24f420b7a1edf650326c230579538e6418\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:15:08.332353 kubelet[2670]: E0517 00:15:08.332248 2670 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"86e052553ff0ac7699112bb45ed6da24f420b7a1edf650326c230579538e6418\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6cbf8c7948-hk4z6" May 17 00:15:08.332353 kubelet[2670]: E0517 00:15:08.332266 2670 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"86e052553ff0ac7699112bb45ed6da24f420b7a1edf650326c230579538e6418\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6cbf8c7948-hk4z6" May 17 00:15:08.333581 kubelet[2670]: E0517 00:15:08.332309 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6cbf8c7948-hk4z6_calico-apiserver(ceb2f628-4f33-47aa-8305-d46713261d40)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6cbf8c7948-hk4z6_calico-apiserver(ceb2f628-4f33-47aa-8305-d46713261d40)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"86e052553ff0ac7699112bb45ed6da24f420b7a1edf650326c230579538e6418\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6cbf8c7948-hk4z6" podUID="ceb2f628-4f33-47aa-8305-d46713261d40" May 17 00:15:08.333581 kubelet[2670]: E0517 00:15:08.333381 2670 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d60180c4aec9b8c4dceff3978d7701133e0f03c00224dbe2bfe5cc62e984adc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:15:08.333581 kubelet[2670]: E0517 00:15:08.333536 2670 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed 
to setup network for sandbox \"7d60180c4aec9b8c4dceff3978d7701133e0f03c00224dbe2bfe5cc62e984adc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6cbf8c7948-6pm57" May 17 00:15:08.333702 containerd[1479]: time="2025-05-17T00:15:08.333093668Z" level=error msg="encountered an error cleaning up failed sandbox \"7d60180c4aec9b8c4dceff3978d7701133e0f03c00224dbe2bfe5cc62e984adc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:15:08.333702 containerd[1479]: time="2025-05-17T00:15:08.333159349Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cbf8c7948-6pm57,Uid:753684c6-cd41-4791-9ed7-725f4728c2a4,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7d60180c4aec9b8c4dceff3978d7701133e0f03c00224dbe2bfe5cc62e984adc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:15:08.333808 kubelet[2670]: E0517 00:15:08.333562 2670 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d60180c4aec9b8c4dceff3978d7701133e0f03c00224dbe2bfe5cc62e984adc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6cbf8c7948-6pm57" May 17 00:15:08.333808 kubelet[2670]: E0517 00:15:08.333604 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6cbf8c7948-6pm57_calico-apiserver(753684c6-cd41-4791-9ed7-725f4728c2a4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6cbf8c7948-6pm57_calico-apiserver(753684c6-cd41-4791-9ed7-725f4728c2a4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7d60180c4aec9b8c4dceff3978d7701133e0f03c00224dbe2bfe5cc62e984adc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6cbf8c7948-6pm57" podUID="753684c6-cd41-4791-9ed7-725f4728c2a4" May 17 00:15:08.349463 containerd[1479]: time="2025-05-17T00:15:08.349111793Z" level=error msg="Failed to destroy network for sandbox \"e92ed2d7510927885d11a6b83411292c7ac81114fe8b349ff6d28a3f59a3a94c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:15:08.351618 containerd[1479]: time="2025-05-17T00:15:08.349605676Z" level=error msg="encountered an error cleaning up failed sandbox \"e92ed2d7510927885d11a6b83411292c7ac81114fe8b349ff6d28a3f59a3a94c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:15:08.351618 containerd[1479]: 
time="2025-05-17T00:15:08.349676756Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-fqppr,Uid:a9b260fc-ff83-4de9-ac43-723c22c032c2,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e92ed2d7510927885d11a6b83411292c7ac81114fe8b349ff6d28a3f59a3a94c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:15:08.352582 kubelet[2670]: E0517 00:15:08.349884 2670 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e92ed2d7510927885d11a6b83411292c7ac81114fe8b349ff6d28a3f59a3a94c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:15:08.352582 kubelet[2670]: E0517 00:15:08.349937 2670 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e92ed2d7510927885d11a6b83411292c7ac81114fe8b349ff6d28a3f59a3a94c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-78d55f7ddc-fqppr" May 17 00:15:08.352582 kubelet[2670]: E0517 00:15:08.349957 2670 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e92ed2d7510927885d11a6b83411292c7ac81114fe8b349ff6d28a3f59a3a94c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-78d55f7ddc-fqppr" May 17 00:15:08.352792 kubelet[2670]: E0517 00:15:08.350000 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-78d55f7ddc-fqppr_calico-system(a9b260fc-ff83-4de9-ac43-723c22c032c2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-78d55f7ddc-fqppr_calico-system(a9b260fc-ff83-4de9-ac43-723c22c032c2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e92ed2d7510927885d11a6b83411292c7ac81114fe8b349ff6d28a3f59a3a94c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-78d55f7ddc-fqppr" podUID="a9b260fc-ff83-4de9-ac43-723c22c032c2" May 17 00:15:08.356291 containerd[1479]: time="2025-05-17T00:15:08.355701308Z" level=error msg="Failed to destroy network for sandbox \"b1d44cb7b0f8415b35ce49657bf37c2e1caa0957ed30a7b0f07cb637123005e7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:15:08.356709 containerd[1479]: time="2025-05-17T00:15:08.356246191Z" level=error msg="encountered an error cleaning up failed sandbox \"b1d44cb7b0f8415b35ce49657bf37c2e1caa0957ed30a7b0f07cb637123005e7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" May 17 00:15:08.357307 containerd[1479]: time="2025-05-17T00:15:08.357062356Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-66cc8f5467-bmsjq,Uid:2e627f94-1f93-4753-9c4f-f74ed6f6b9da,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b1d44cb7b0f8415b35ce49657bf37c2e1caa0957ed30a7b0f07cb637123005e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:15:08.357684 kubelet[2670]: E0517 00:15:08.357407 2670 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b1d44cb7b0f8415b35ce49657bf37c2e1caa0957ed30a7b0f07cb637123005e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:15:08.357684 kubelet[2670]: E0517 00:15:08.357611 2670 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b1d44cb7b0f8415b35ce49657bf37c2e1caa0957ed30a7b0f07cb637123005e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-66cc8f5467-bmsjq" May 17 00:15:08.357684 kubelet[2670]: E0517 00:15:08.357649 2670 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b1d44cb7b0f8415b35ce49657bf37c2e1caa0957ed30a7b0f07cb637123005e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-66cc8f5467-bmsjq" May 17 00:15:08.358882 kubelet[2670]: E0517 00:15:08.358814 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-66cc8f5467-bmsjq_calico-system(2e627f94-1f93-4753-9c4f-f74ed6f6b9da)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-66cc8f5467-bmsjq_calico-system(2e627f94-1f93-4753-9c4f-f74ed6f6b9da)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b1d44cb7b0f8415b35ce49657bf37c2e1caa0957ed30a7b0f07cb637123005e7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-66cc8f5467-bmsjq" podUID="2e627f94-1f93-4753-9c4f-f74ed6f6b9da" May 17 00:15:08.762111 systemd[1]: Created slice kubepods-besteffort-podbf41328e_d1ed_475d_9a4a_c70bc9451b6f.slice - libcontainer container kubepods-besteffort-podbf41328e_d1ed_475d_9a4a_c70bc9451b6f.slice. 
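Every sandbox failure above reports the same underlying condition: the Calico CNI plugin cannot stat /var/lib/calico/nodename, the file through which the calico/node container publishes the host's node name once it is running; at this point in the boot the calico/node image is still being pulled (see the PullImage entry at 00:15:07.916). A minimal Go sketch of that check follows. It is illustrative only, not Calico's actual source; the file path and the error wording are taken from the log itself.

    // nodename_check.go -- illustrative sketch of the check that keeps failing
    // above. Assumption: calico/node writes the host's node name to this file
    // on startup; until then every CNI add/delete returns the stat error.
    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    const nodenameFile = "/var/lib/calico/nodename"

    func nodename() (string, error) {
    	data, err := os.ReadFile(nodenameFile)
    	if err != nil {
    		// The condition surfaced by the RunPodSandbox errors:
    		// "stat /var/lib/calico/nodename: no such file or directory".
    		return "", fmt.Errorf("%w: check that the calico/node container is running and has mounted /var/lib/calico/", err)
    	}
    	return strings.TrimSpace(string(data)), nil
    }

    func main() {
    	name, err := nodename()
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("node name:", name)
    }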
May 17 00:15:08.766719 containerd[1479]: time="2025-05-17T00:15:08.766099771Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fhb48,Uid:bf41328e-d1ed-475d-9a4a-c70bc9451b6f,Namespace:calico-system,Attempt:0,}" May 17 00:15:08.839104 containerd[1479]: time="2025-05-17T00:15:08.839052279Z" level=error msg="Failed to destroy network for sandbox \"d251f229c257c70095e861e756d911bcce40108ac5970cf7024f69fcafa22ab9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:15:08.839406 containerd[1479]: time="2025-05-17T00:15:08.839377161Z" level=error msg="encountered an error cleaning up failed sandbox \"d251f229c257c70095e861e756d911bcce40108ac5970cf7024f69fcafa22ab9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:15:08.839527 containerd[1479]: time="2025-05-17T00:15:08.839460641Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fhb48,Uid:bf41328e-d1ed-475d-9a4a-c70bc9451b6f,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d251f229c257c70095e861e756d911bcce40108ac5970cf7024f69fcafa22ab9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:15:08.839732 kubelet[2670]: E0517 00:15:08.839696 2670 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d251f229c257c70095e861e756d911bcce40108ac5970cf7024f69fcafa22ab9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:15:08.841649 kubelet[2670]: E0517 00:15:08.839760 2670 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d251f229c257c70095e861e756d911bcce40108ac5970cf7024f69fcafa22ab9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fhb48" May 17 00:15:08.841649 kubelet[2670]: E0517 00:15:08.839781 2670 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d251f229c257c70095e861e756d911bcce40108ac5970cf7024f69fcafa22ab9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fhb48" May 17 00:15:08.841649 kubelet[2670]: E0517 00:15:08.839835 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-fhb48_calico-system(bf41328e-d1ed-475d-9a4a-c70bc9451b6f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-fhb48_calico-system(bf41328e-d1ed-475d-9a4a-c70bc9451b6f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"d251f229c257c70095e861e756d911bcce40108ac5970cf7024f69fcafa22ab9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-fhb48" podUID="bf41328e-d1ed-475d-9a4a-c70bc9451b6f" May 17 00:15:08.919062 kubelet[2670]: I0517 00:15:08.919030 2670 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d251f229c257c70095e861e756d911bcce40108ac5970cf7024f69fcafa22ab9" May 17 00:15:08.921915 containerd[1479]: time="2025-05-17T00:15:08.921827159Z" level=info msg="StopPodSandbox for \"d251f229c257c70095e861e756d911bcce40108ac5970cf7024f69fcafa22ab9\"" May 17 00:15:08.922121 kubelet[2670]: I0517 00:15:08.922090 2670 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e92ed2d7510927885d11a6b83411292c7ac81114fe8b349ff6d28a3f59a3a94c" May 17 00:15:08.922959 containerd[1479]: time="2025-05-17T00:15:08.922903245Z" level=info msg="Ensure that sandbox d251f229c257c70095e861e756d911bcce40108ac5970cf7024f69fcafa22ab9 in task-service has been cleanup successfully" May 17 00:15:08.923304 containerd[1479]: time="2025-05-17T00:15:08.923159606Z" level=info msg="StopPodSandbox for \"e92ed2d7510927885d11a6b83411292c7ac81114fe8b349ff6d28a3f59a3a94c\"" May 17 00:15:08.923550 containerd[1479]: time="2025-05-17T00:15:08.923410167Z" level=info msg="Ensure that sandbox e92ed2d7510927885d11a6b83411292c7ac81114fe8b349ff6d28a3f59a3a94c in task-service has been cleanup successfully" May 17 00:15:08.926985 kubelet[2670]: E0517 00:15:08.926833 2670 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition May 17 00:15:08.926985 kubelet[2670]: E0517 00:15:08.926920 2670 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b30848da-8b46-4cdc-baaa-f3567b6377c3-config-volume podName:b30848da-8b46-4cdc-baaa-f3567b6377c3 nodeName:}" failed. No retries permitted until 2025-05-17 00:15:09.426897786 +0000 UTC m=+33.803124836 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/b30848da-8b46-4cdc-baaa-f3567b6377c3-config-volume") pod "coredns-668d6bf9bc-n4blv" (UID: "b30848da-8b46-4cdc-baaa-f3567b6377c3") : failed to sync configmap cache: timed out waiting for the condition May 17 00:15:08.928877 kubelet[2670]: E0517 00:15:08.928029 2670 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition May 17 00:15:08.928877 kubelet[2670]: E0517 00:15:08.928099 2670 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a7794f9e-6b8e-4656-8525-16c2f94584b5-config-volume podName:a7794f9e-6b8e-4656-8525-16c2f94584b5 nodeName:}" failed. No retries permitted until 2025-05-17 00:15:09.428082712 +0000 UTC m=+33.804309762 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/a7794f9e-6b8e-4656-8525-16c2f94584b5-config-volume") pod "coredns-668d6bf9bc-wpn2m" (UID: "a7794f9e-6b8e-4656-8525-16c2f94584b5") : failed to sync configmap cache: timed out waiting for the condition May 17 00:15:08.928877 kubelet[2670]: I0517 00:15:08.928843 2670 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d60180c4aec9b8c4dceff3978d7701133e0f03c00224dbe2bfe5cc62e984adc" May 17 00:15:08.929384 containerd[1479]: time="2025-05-17T00:15:08.929342879Z" level=info msg="StopPodSandbox for \"7d60180c4aec9b8c4dceff3978d7701133e0f03c00224dbe2bfe5cc62e984adc\"" May 17 00:15:08.931956 containerd[1479]: time="2025-05-17T00:15:08.931388770Z" level=info msg="Ensure that sandbox 7d60180c4aec9b8c4dceff3978d7701133e0f03c00224dbe2bfe5cc62e984adc in task-service has been cleanup successfully" May 17 00:15:08.934837 kubelet[2670]: I0517 00:15:08.934782 2670 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="86e052553ff0ac7699112bb45ed6da24f420b7a1edf650326c230579538e6418" May 17 00:15:08.936669 containerd[1479]: time="2025-05-17T00:15:08.936565037Z" level=info msg="StopPodSandbox for \"86e052553ff0ac7699112bb45ed6da24f420b7a1edf650326c230579538e6418\"" May 17 00:15:08.936804 containerd[1479]: time="2025-05-17T00:15:08.936779919Z" level=info msg="Ensure that sandbox 86e052553ff0ac7699112bb45ed6da24f420b7a1edf650326c230579538e6418 in task-service has been cleanup successfully" May 17 00:15:08.940014 kubelet[2670]: I0517 00:15:08.939794 2670 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="648017c36b137b985271eb435012508ebdfe0d83d22b71b5d13412539d19aac9" May 17 00:15:08.940612 containerd[1479]: time="2025-05-17T00:15:08.940521658Z" level=info msg="StopPodSandbox for \"648017c36b137b985271eb435012508ebdfe0d83d22b71b5d13412539d19aac9\"" May 17 00:15:08.941321 containerd[1479]: time="2025-05-17T00:15:08.940745540Z" level=info msg="Ensure that sandbox 648017c36b137b985271eb435012508ebdfe0d83d22b71b5d13412539d19aac9 in task-service has been cleanup successfully" May 17 00:15:08.954444 kubelet[2670]: I0517 00:15:08.953015 2670 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b1d44cb7b0f8415b35ce49657bf37c2e1caa0957ed30a7b0f07cb637123005e7" May 17 00:15:08.954568 containerd[1479]: time="2025-05-17T00:15:08.953597208Z" level=info msg="StopPodSandbox for \"b1d44cb7b0f8415b35ce49657bf37c2e1caa0957ed30a7b0f07cb637123005e7\"" May 17 00:15:08.954568 containerd[1479]: time="2025-05-17T00:15:08.953777169Z" level=info msg="Ensure that sandbox b1d44cb7b0f8415b35ce49657bf37c2e1caa0957ed30a7b0f07cb637123005e7 in task-service has been cleanup successfully" May 17 00:15:09.030184 containerd[1479]: time="2025-05-17T00:15:09.030034211Z" level=error msg="StopPodSandbox for \"d251f229c257c70095e861e756d911bcce40108ac5970cf7024f69fcafa22ab9\" failed" error="failed to destroy network for sandbox \"d251f229c257c70095e861e756d911bcce40108ac5970cf7024f69fcafa22ab9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:15:09.031326 kubelet[2670]: E0517 00:15:09.031231 2670 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d251f229c257c70095e861e756d911bcce40108ac5970cf7024f69fcafa22ab9\": 
plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d251f229c257c70095e861e756d911bcce40108ac5970cf7024f69fcafa22ab9" May 17 00:15:09.031326 kubelet[2670]: E0517 00:15:09.031298 2670 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d251f229c257c70095e861e756d911bcce40108ac5970cf7024f69fcafa22ab9"} May 17 00:15:09.031515 kubelet[2670]: E0517 00:15:09.031360 2670 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"bf41328e-d1ed-475d-9a4a-c70bc9451b6f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d251f229c257c70095e861e756d911bcce40108ac5970cf7024f69fcafa22ab9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:15:09.031515 kubelet[2670]: E0517 00:15:09.031381 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"bf41328e-d1ed-475d-9a4a-c70bc9451b6f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d251f229c257c70095e861e756d911bcce40108ac5970cf7024f69fcafa22ab9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-fhb48" podUID="bf41328e-d1ed-475d-9a4a-c70bc9451b6f" May 17 00:15:09.039077 containerd[1479]: time="2025-05-17T00:15:09.039000138Z" level=error msg="StopPodSandbox for \"86e052553ff0ac7699112bb45ed6da24f420b7a1edf650326c230579538e6418\" failed" error="failed to destroy network for sandbox \"86e052553ff0ac7699112bb45ed6da24f420b7a1edf650326c230579538e6418\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:15:09.039264 kubelet[2670]: E0517 00:15:09.039225 2670 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"86e052553ff0ac7699112bb45ed6da24f420b7a1edf650326c230579538e6418\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="86e052553ff0ac7699112bb45ed6da24f420b7a1edf650326c230579538e6418" May 17 00:15:09.039323 kubelet[2670]: E0517 00:15:09.039272 2670 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"86e052553ff0ac7699112bb45ed6da24f420b7a1edf650326c230579538e6418"} May 17 00:15:09.039323 kubelet[2670]: E0517 00:15:09.039307 2670 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ceb2f628-4f33-47aa-8305-d46713261d40\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"86e052553ff0ac7699112bb45ed6da24f420b7a1edf650326c230579538e6418\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:15:09.039410 kubelet[2670]: E0517 00:15:09.039328 
2670 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ceb2f628-4f33-47aa-8305-d46713261d40\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"86e052553ff0ac7699112bb45ed6da24f420b7a1edf650326c230579538e6418\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6cbf8c7948-hk4z6" podUID="ceb2f628-4f33-47aa-8305-d46713261d40" May 17 00:15:09.045095 containerd[1479]: time="2025-05-17T00:15:09.045046370Z" level=error msg="StopPodSandbox for \"b1d44cb7b0f8415b35ce49657bf37c2e1caa0957ed30a7b0f07cb637123005e7\" failed" error="failed to destroy network for sandbox \"b1d44cb7b0f8415b35ce49657bf37c2e1caa0957ed30a7b0f07cb637123005e7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:15:09.045523 kubelet[2670]: E0517 00:15:09.045448 2670 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b1d44cb7b0f8415b35ce49657bf37c2e1caa0957ed30a7b0f07cb637123005e7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b1d44cb7b0f8415b35ce49657bf37c2e1caa0957ed30a7b0f07cb637123005e7" May 17 00:15:09.045797 kubelet[2670]: E0517 00:15:09.045511 2670 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b1d44cb7b0f8415b35ce49657bf37c2e1caa0957ed30a7b0f07cb637123005e7"} May 17 00:15:09.045873 kubelet[2670]: E0517 00:15:09.045838 2670 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2e627f94-1f93-4753-9c4f-f74ed6f6b9da\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b1d44cb7b0f8415b35ce49657bf37c2e1caa0957ed30a7b0f07cb637123005e7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:15:09.045950 kubelet[2670]: E0517 00:15:09.045883 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2e627f94-1f93-4753-9c4f-f74ed6f6b9da\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b1d44cb7b0f8415b35ce49657bf37c2e1caa0957ed30a7b0f07cb637123005e7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-66cc8f5467-bmsjq" podUID="2e627f94-1f93-4753-9c4f-f74ed6f6b9da" May 17 00:15:09.057243 containerd[1479]: time="2025-05-17T00:15:09.057105912Z" level=error msg="StopPodSandbox for \"e92ed2d7510927885d11a6b83411292c7ac81114fe8b349ff6d28a3f59a3a94c\" failed" error="failed to destroy network for sandbox \"e92ed2d7510927885d11a6b83411292c7ac81114fe8b349ff6d28a3f59a3a94c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:15:09.057421 
kubelet[2670]: E0517 00:15:09.057332 2670 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e92ed2d7510927885d11a6b83411292c7ac81114fe8b349ff6d28a3f59a3a94c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e92ed2d7510927885d11a6b83411292c7ac81114fe8b349ff6d28a3f59a3a94c" May 17 00:15:09.057421 kubelet[2670]: E0517 00:15:09.057382 2670 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e92ed2d7510927885d11a6b83411292c7ac81114fe8b349ff6d28a3f59a3a94c"} May 17 00:15:09.057421 kubelet[2670]: E0517 00:15:09.057414 2670 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a9b260fc-ff83-4de9-ac43-723c22c032c2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e92ed2d7510927885d11a6b83411292c7ac81114fe8b349ff6d28a3f59a3a94c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:15:09.057616 kubelet[2670]: E0517 00:15:09.057450 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a9b260fc-ff83-4de9-ac43-723c22c032c2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e92ed2d7510927885d11a6b83411292c7ac81114fe8b349ff6d28a3f59a3a94c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-78d55f7ddc-fqppr" podUID="a9b260fc-ff83-4de9-ac43-723c22c032c2" May 17 00:15:09.062737 containerd[1479]: time="2025-05-17T00:15:09.062678901Z" level=error msg="StopPodSandbox for \"7d60180c4aec9b8c4dceff3978d7701133e0f03c00224dbe2bfe5cc62e984adc\" failed" error="failed to destroy network for sandbox \"7d60180c4aec9b8c4dceff3978d7701133e0f03c00224dbe2bfe5cc62e984adc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:15:09.063047 containerd[1479]: time="2025-05-17T00:15:09.062729942Z" level=error msg="StopPodSandbox for \"648017c36b137b985271eb435012508ebdfe0d83d22b71b5d13412539d19aac9\" failed" error="failed to destroy network for sandbox \"648017c36b137b985271eb435012508ebdfe0d83d22b71b5d13412539d19aac9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:15:09.063368 kubelet[2670]: E0517 00:15:09.063237 2670 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7d60180c4aec9b8c4dceff3978d7701133e0f03c00224dbe2bfe5cc62e984adc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7d60180c4aec9b8c4dceff3978d7701133e0f03c00224dbe2bfe5cc62e984adc" May 17 00:15:09.063368 kubelet[2670]: E0517 00:15:09.063304 2670 kuberuntime_manager.go:1546] "Failed to stop 
sandbox" podSandboxID={"Type":"containerd","ID":"7d60180c4aec9b8c4dceff3978d7701133e0f03c00224dbe2bfe5cc62e984adc"} May 17 00:15:09.063368 kubelet[2670]: E0517 00:15:09.063293 2670 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"648017c36b137b985271eb435012508ebdfe0d83d22b71b5d13412539d19aac9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="648017c36b137b985271eb435012508ebdfe0d83d22b71b5d13412539d19aac9" May 17 00:15:09.063368 kubelet[2670]: E0517 00:15:09.063339 2670 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"753684c6-cd41-4791-9ed7-725f4728c2a4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7d60180c4aec9b8c4dceff3978d7701133e0f03c00224dbe2bfe5cc62e984adc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:15:09.063678 kubelet[2670]: E0517 00:15:09.063367 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"753684c6-cd41-4791-9ed7-725f4728c2a4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7d60180c4aec9b8c4dceff3978d7701133e0f03c00224dbe2bfe5cc62e984adc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6cbf8c7948-6pm57" podUID="753684c6-cd41-4791-9ed7-725f4728c2a4" May 17 00:15:09.063678 kubelet[2670]: E0517 00:15:09.063338 2670 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"648017c36b137b985271eb435012508ebdfe0d83d22b71b5d13412539d19aac9"} May 17 00:15:09.063678 kubelet[2670]: E0517 00:15:09.063403 2670 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8165c08e-7c9f-40c3-8125-d662038241a2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"648017c36b137b985271eb435012508ebdfe0d83d22b71b5d13412539d19aac9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:15:09.063678 kubelet[2670]: E0517 00:15:09.063421 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8165c08e-7c9f-40c3-8125-d662038241a2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"648017c36b137b985271eb435012508ebdfe0d83d22b71b5d13412539d19aac9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7d6599b8b4-52gm9" podUID="8165c08e-7c9f-40c3-8125-d662038241a2" May 17 00:15:09.575336 containerd[1479]: time="2025-05-17T00:15:09.575272733Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-n4blv,Uid:b30848da-8b46-4cdc-baaa-f3567b6377c3,Namespace:kube-system,Attempt:0,}" 
May 17 00:15:09.590762 containerd[1479]: time="2025-05-17T00:15:09.590711213Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wpn2m,Uid:a7794f9e-6b8e-4656-8525-16c2f94584b5,Namespace:kube-system,Attempt:0,}" May 17 00:15:09.750802 containerd[1479]: time="2025-05-17T00:15:09.750668207Z" level=error msg="Failed to destroy network for sandbox \"37af3809ae3014497a3320c103f9e25a98510eb43ca13d2b6a2a510153c9264a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:15:09.753183 containerd[1479]: time="2025-05-17T00:15:09.751783852Z" level=error msg="encountered an error cleaning up failed sandbox \"37af3809ae3014497a3320c103f9e25a98510eb43ca13d2b6a2a510153c9264a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:15:09.753183 containerd[1479]: time="2025-05-17T00:15:09.751855613Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-n4blv,Uid:b30848da-8b46-4cdc-baaa-f3567b6377c3,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"37af3809ae3014497a3320c103f9e25a98510eb43ca13d2b6a2a510153c9264a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:15:09.753340 kubelet[2670]: E0517 00:15:09.752085 2670 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"37af3809ae3014497a3320c103f9e25a98510eb43ca13d2b6a2a510153c9264a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:15:09.753340 kubelet[2670]: E0517 00:15:09.752158 2670 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"37af3809ae3014497a3320c103f9e25a98510eb43ca13d2b6a2a510153c9264a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-n4blv" May 17 00:15:09.753340 kubelet[2670]: E0517 00:15:09.752176 2670 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"37af3809ae3014497a3320c103f9e25a98510eb43ca13d2b6a2a510153c9264a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-n4blv" May 17 00:15:09.753495 kubelet[2670]: E0517 00:15:09.752216 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-n4blv_kube-system(b30848da-8b46-4cdc-baaa-f3567b6377c3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-n4blv_kube-system(b30848da-8b46-4cdc-baaa-f3567b6377c3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"37af3809ae3014497a3320c103f9e25a98510eb43ca13d2b6a2a510153c9264a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-n4blv" podUID="b30848da-8b46-4cdc-baaa-f3567b6377c3" May 17 00:15:09.777167 containerd[1479]: time="2025-05-17T00:15:09.776804743Z" level=error msg="Failed to destroy network for sandbox \"d6ee3cf3c0bb435fb182b801f62a700390d3c108bd9f34a31fb8c6281effdb1e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:15:09.777801 containerd[1479]: time="2025-05-17T00:15:09.777752948Z" level=error msg="encountered an error cleaning up failed sandbox \"d6ee3cf3c0bb435fb182b801f62a700390d3c108bd9f34a31fb8c6281effdb1e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:15:09.777898 containerd[1479]: time="2025-05-17T00:15:09.777819148Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wpn2m,Uid:a7794f9e-6b8e-4656-8525-16c2f94584b5,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d6ee3cf3c0bb435fb182b801f62a700390d3c108bd9f34a31fb8c6281effdb1e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:15:09.778062 kubelet[2670]: E0517 00:15:09.778023 2670 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d6ee3cf3c0bb435fb182b801f62a700390d3c108bd9f34a31fb8c6281effdb1e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:15:09.778110 kubelet[2670]: E0517 00:15:09.778086 2670 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d6ee3cf3c0bb435fb182b801f62a700390d3c108bd9f34a31fb8c6281effdb1e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-wpn2m" May 17 00:15:09.778137 kubelet[2670]: E0517 00:15:09.778106 2670 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d6ee3cf3c0bb435fb182b801f62a700390d3c108bd9f34a31fb8c6281effdb1e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-wpn2m" May 17 00:15:09.778168 kubelet[2670]: E0517 00:15:09.778144 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-wpn2m_kube-system(a7794f9e-6b8e-4656-8525-16c2f94584b5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-wpn2m_kube-system(a7794f9e-6b8e-4656-8525-16c2f94584b5)\\\": rpc error: code = 
Unknown desc = failed to setup network for sandbox \\\"d6ee3cf3c0bb435fb182b801f62a700390d3c108bd9f34a31fb8c6281effdb1e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-wpn2m" podUID="a7794f9e-6b8e-4656-8525-16c2f94584b5" May 17 00:15:09.958816 kubelet[2670]: I0517 00:15:09.957776 2670 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="37af3809ae3014497a3320c103f9e25a98510eb43ca13d2b6a2a510153c9264a" May 17 00:15:09.960768 containerd[1479]: time="2025-05-17T00:15:09.959648696Z" level=info msg="StopPodSandbox for \"37af3809ae3014497a3320c103f9e25a98510eb43ca13d2b6a2a510153c9264a\"" May 17 00:15:09.960768 containerd[1479]: time="2025-05-17T00:15:09.960376659Z" level=info msg="Ensure that sandbox 37af3809ae3014497a3320c103f9e25a98510eb43ca13d2b6a2a510153c9264a in task-service has been cleanup successfully" May 17 00:15:09.962312 kubelet[2670]: I0517 00:15:09.962183 2670 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d6ee3cf3c0bb435fb182b801f62a700390d3c108bd9f34a31fb8c6281effdb1e" May 17 00:15:09.963272 containerd[1479]: time="2025-05-17T00:15:09.963200754Z" level=info msg="StopPodSandbox for \"d6ee3cf3c0bb435fb182b801f62a700390d3c108bd9f34a31fb8c6281effdb1e\"" May 17 00:15:09.966751 containerd[1479]: time="2025-05-17T00:15:09.964136879Z" level=info msg="Ensure that sandbox d6ee3cf3c0bb435fb182b801f62a700390d3c108bd9f34a31fb8c6281effdb1e in task-service has been cleanup successfully" May 17 00:15:09.968979 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-37af3809ae3014497a3320c103f9e25a98510eb43ca13d2b6a2a510153c9264a-shm.mount: Deactivated successfully. May 17 00:15:09.969079 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d6ee3cf3c0bb435fb182b801f62a700390d3c108bd9f34a31fb8c6281effdb1e-shm.mount: Deactivated successfully. 
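The StopPodSandbox / "Ensure that sandbox ... has been cleanup successfully" pairs above are the kubelet's cleanup pass over sandboxes whose creation half-failed. The delete path goes through the same Calico plugin as the add path, so it hits the identical missing-nodename error, and the loop simply retries until calico/node comes up; once it does, teardown treats already-gone resources as success (visible later in the "Asked to release address but it doesn't exist. Ignoring" IPAM entry). Below is a toy Go version of that idempotent retry pattern, with a stand-in teardown function rather than containerd's real API.

    // cleanup_loop.go -- toy sketch of idempotent sandbox teardown with retry.
    // teardown is a stand-in for the CNI DEL call, not containerd's API.
    package main

    import (
    	"errors"
    	"fmt"
    	"time"
    )

    var (
    	errNodename = errors.New("stat /var/lib/calico/nodename: no such file or directory")
    	errNotFound = errors.New("resource not found")
    )

    // teardown pretends calico/node becomes ready on the third attempt.
    func teardown(attempt int) error {
    	switch {
    	case attempt < 3:
    		return errNodename // same failure the log shows on delete
    	case attempt == 3:
    		return errNotFound // already cleaned up elsewhere
    	default:
    		return nil
    	}
    }

    func main() {
    	for attempt := 1; ; attempt++ {
    		err := teardown(attempt)
    		if err == nil || errors.Is(err, errNotFound) {
    			// "already gone" counts as success, mirroring the later
    			// "Asked to release address but it doesn't exist. Ignoring" entry.
    			fmt.Printf("attempt %d: sandbox cleaned up\n", attempt)
    			return
    		}
    		fmt.Printf("attempt %d: %v (retrying)\n", attempt, err)
    		time.Sleep(500 * time.Millisecond)
    	}
    }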
May 17 00:15:10.014654 containerd[1479]: time="2025-05-17T00:15:10.014602981Z" level=error msg="StopPodSandbox for \"37af3809ae3014497a3320c103f9e25a98510eb43ca13d2b6a2a510153c9264a\" failed" error="failed to destroy network for sandbox \"37af3809ae3014497a3320c103f9e25a98510eb43ca13d2b6a2a510153c9264a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:15:10.015730 kubelet[2670]: E0517 00:15:10.015357 2670 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"37af3809ae3014497a3320c103f9e25a98510eb43ca13d2b6a2a510153c9264a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="37af3809ae3014497a3320c103f9e25a98510eb43ca13d2b6a2a510153c9264a" May 17 00:15:10.015730 kubelet[2670]: E0517 00:15:10.015635 2670 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"37af3809ae3014497a3320c103f9e25a98510eb43ca13d2b6a2a510153c9264a"} May 17 00:15:10.015730 kubelet[2670]: E0517 00:15:10.015674 2670 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b30848da-8b46-4cdc-baaa-f3567b6377c3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"37af3809ae3014497a3320c103f9e25a98510eb43ca13d2b6a2a510153c9264a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:15:10.015730 kubelet[2670]: E0517 00:15:10.015696 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b30848da-8b46-4cdc-baaa-f3567b6377c3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"37af3809ae3014497a3320c103f9e25a98510eb43ca13d2b6a2a510153c9264a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-n4blv" podUID="b30848da-8b46-4cdc-baaa-f3567b6377c3" May 17 00:15:10.023216 containerd[1479]: time="2025-05-17T00:15:10.023028944Z" level=error msg="StopPodSandbox for \"d6ee3cf3c0bb435fb182b801f62a700390d3c108bd9f34a31fb8c6281effdb1e\" failed" error="failed to destroy network for sandbox \"d6ee3cf3c0bb435fb182b801f62a700390d3c108bd9f34a31fb8c6281effdb1e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:15:10.024006 kubelet[2670]: E0517 00:15:10.023605 2670 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d6ee3cf3c0bb435fb182b801f62a700390d3c108bd9f34a31fb8c6281effdb1e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d6ee3cf3c0bb435fb182b801f62a700390d3c108bd9f34a31fb8c6281effdb1e" May 17 00:15:10.024006 kubelet[2670]: E0517 00:15:10.023702 2670 kuberuntime_manager.go:1546] 
"Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d6ee3cf3c0bb435fb182b801f62a700390d3c108bd9f34a31fb8c6281effdb1e"} May 17 00:15:10.024006 kubelet[2670]: E0517 00:15:10.023735 2670 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a7794f9e-6b8e-4656-8525-16c2f94584b5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d6ee3cf3c0bb435fb182b801f62a700390d3c108bd9f34a31fb8c6281effdb1e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:15:10.024006 kubelet[2670]: E0517 00:15:10.023763 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a7794f9e-6b8e-4656-8525-16c2f94584b5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d6ee3cf3c0bb435fb182b801f62a700390d3c108bd9f34a31fb8c6281effdb1e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-wpn2m" podUID="a7794f9e-6b8e-4656-8525-16c2f94584b5" May 17 00:15:11.888042 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1653603216.mount: Deactivated successfully. May 17 00:15:11.919270 containerd[1479]: time="2025-05-17T00:15:11.919203217Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:15:11.920632 containerd[1479]: time="2025-05-17T00:15:11.920373102Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.0: active requests=0, bytes read=150465379" May 17 00:15:11.924458 containerd[1479]: time="2025-05-17T00:15:11.922266912Z" level=info msg="ImageCreate event name:\"sha256:f7148fde8e28b27da58f84cac134cdc53b5df321cda13c660192f06839670732\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:15:11.928048 containerd[1479]: time="2025-05-17T00:15:11.927983860Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:7cb61ea47ca0a8e6d0526a42da4f1e399b37ccd13339d0776d272465cb7ee012\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:15:11.928811 containerd[1479]: time="2025-05-17T00:15:11.928765384Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.0\" with image id \"sha256:f7148fde8e28b27da58f84cac134cdc53b5df321cda13c660192f06839670732\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:7cb61ea47ca0a8e6d0526a42da4f1e399b37ccd13339d0776d272465cb7ee012\", size \"150465241\" in 4.012427301s" May 17 00:15:11.928876 containerd[1479]: time="2025-05-17T00:15:11.928812105Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.0\" returns image reference \"sha256:f7148fde8e28b27da58f84cac134cdc53b5df321cda13c660192f06839670732\"" May 17 00:15:11.946938 containerd[1479]: time="2025-05-17T00:15:11.946897155Z" level=info msg="CreateContainer within sandbox \"f711b02169b7c380877771fbc92dbcc17d5638184a678134d8ef1643d8a338b2\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 17 00:15:11.977446 containerd[1479]: time="2025-05-17T00:15:11.977354708Z" level=info msg="CreateContainer within sandbox \"f711b02169b7c380877771fbc92dbcc17d5638184a678134d8ef1643d8a338b2\" 
for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"ac27d7dab45509187bb6e9bd04117d39cfb8bb18c53082b1281bab95c3fc5614\"" May 17 00:15:11.981799 containerd[1479]: time="2025-05-17T00:15:11.978841075Z" level=info msg="StartContainer for \"ac27d7dab45509187bb6e9bd04117d39cfb8bb18c53082b1281bab95c3fc5614\"" May 17 00:15:12.010651 systemd[1]: Started cri-containerd-ac27d7dab45509187bb6e9bd04117d39cfb8bb18c53082b1281bab95c3fc5614.scope - libcontainer container ac27d7dab45509187bb6e9bd04117d39cfb8bb18c53082b1281bab95c3fc5614. May 17 00:15:12.047926 containerd[1479]: time="2025-05-17T00:15:12.047863336Z" level=info msg="StartContainer for \"ac27d7dab45509187bb6e9bd04117d39cfb8bb18c53082b1281bab95c3fc5614\" returns successfully" May 17 00:15:12.191499 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 17 00:15:12.191688 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. May 17 00:15:12.317180 containerd[1479]: time="2025-05-17T00:15:12.317114458Z" level=info msg="StopPodSandbox for \"b1d44cb7b0f8415b35ce49657bf37c2e1caa0957ed30a7b0f07cb637123005e7\"" May 17 00:15:12.511737 containerd[1479]: 2025-05-17 00:15:12.440 [INFO][3899] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b1d44cb7b0f8415b35ce49657bf37c2e1caa0957ed30a7b0f07cb637123005e7" May 17 00:15:12.511737 containerd[1479]: 2025-05-17 00:15:12.440 [INFO][3899] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b1d44cb7b0f8415b35ce49657bf37c2e1caa0957ed30a7b0f07cb637123005e7" iface="eth0" netns="/var/run/netns/cni-134feeab-0b3b-f481-42d2-c054b245c723" May 17 00:15:12.511737 containerd[1479]: 2025-05-17 00:15:12.440 [INFO][3899] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b1d44cb7b0f8415b35ce49657bf37c2e1caa0957ed30a7b0f07cb637123005e7" iface="eth0" netns="/var/run/netns/cni-134feeab-0b3b-f481-42d2-c054b245c723" May 17 00:15:12.511737 containerd[1479]: 2025-05-17 00:15:12.441 [INFO][3899] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b1d44cb7b0f8415b35ce49657bf37c2e1caa0957ed30a7b0f07cb637123005e7" iface="eth0" netns="/var/run/netns/cni-134feeab-0b3b-f481-42d2-c054b245c723" May 17 00:15:12.511737 containerd[1479]: 2025-05-17 00:15:12.441 [INFO][3899] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b1d44cb7b0f8415b35ce49657bf37c2e1caa0957ed30a7b0f07cb637123005e7" May 17 00:15:12.511737 containerd[1479]: 2025-05-17 00:15:12.441 [INFO][3899] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b1d44cb7b0f8415b35ce49657bf37c2e1caa0957ed30a7b0f07cb637123005e7" May 17 00:15:12.511737 containerd[1479]: 2025-05-17 00:15:12.490 [INFO][3912] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b1d44cb7b0f8415b35ce49657bf37c2e1caa0957ed30a7b0f07cb637123005e7" HandleID="k8s-pod-network.b1d44cb7b0f8415b35ce49657bf37c2e1caa0957ed30a7b0f07cb637123005e7" Workload="ci--4081--3--3--n--16326e39d6-k8s-whisker--66cc8f5467--bmsjq-eth0" May 17 00:15:12.511737 containerd[1479]: 2025-05-17 00:15:12.490 [INFO][3912] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:15:12.511737 containerd[1479]: 2025-05-17 00:15:12.490 [INFO][3912] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:15:12.511737 containerd[1479]: 2025-05-17 00:15:12.504 [WARNING][3912] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist.
Ignoring ContainerID="b1d44cb7b0f8415b35ce49657bf37c2e1caa0957ed30a7b0f07cb637123005e7" HandleID="k8s-pod-network.b1d44cb7b0f8415b35ce49657bf37c2e1caa0957ed30a7b0f07cb637123005e7" Workload="ci--4081--3--3--n--16326e39d6-k8s-whisker--66cc8f5467--bmsjq-eth0" May 17 00:15:12.511737 containerd[1479]: 2025-05-17 00:15:12.504 [INFO][3912] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b1d44cb7b0f8415b35ce49657bf37c2e1caa0957ed30a7b0f07cb637123005e7" HandleID="k8s-pod-network.b1d44cb7b0f8415b35ce49657bf37c2e1caa0957ed30a7b0f07cb637123005e7" Workload="ci--4081--3--3--n--16326e39d6-k8s-whisker--66cc8f5467--bmsjq-eth0" May 17 00:15:12.511737 containerd[1479]: 2025-05-17 00:15:12.507 [INFO][3912] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:15:12.511737 containerd[1479]: 2025-05-17 00:15:12.509 [INFO][3899] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b1d44cb7b0f8415b35ce49657bf37c2e1caa0957ed30a7b0f07cb637123005e7" May 17 00:15:12.512239 containerd[1479]: time="2025-05-17T00:15:12.511999455Z" level=info msg="TearDown network for sandbox \"b1d44cb7b0f8415b35ce49657bf37c2e1caa0957ed30a7b0f07cb637123005e7\" successfully" May 17 00:15:12.512239 containerd[1479]: time="2025-05-17T00:15:12.512037535Z" level=info msg="StopPodSandbox for \"b1d44cb7b0f8415b35ce49657bf37c2e1caa0957ed30a7b0f07cb637123005e7\" returns successfully" May 17 00:15:12.571843 kubelet[2670]: I0517 00:15:12.570785 2670 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/2e627f94-1f93-4753-9c4f-f74ed6f6b9da-whisker-backend-key-pair\") pod \"2e627f94-1f93-4753-9c4f-f74ed6f6b9da\" (UID: \"2e627f94-1f93-4753-9c4f-f74ed6f6b9da\") " May 17 00:15:12.571843 kubelet[2670]: I0517 00:15:12.570844 2670 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2e627f94-1f93-4753-9c4f-f74ed6f6b9da-whisker-ca-bundle\") pod \"2e627f94-1f93-4753-9c4f-f74ed6f6b9da\" (UID: \"2e627f94-1f93-4753-9c4f-f74ed6f6b9da\") " May 17 00:15:12.571843 kubelet[2670]: I0517 00:15:12.570883 2670 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7qfqj\" (UniqueName: \"kubernetes.io/projected/2e627f94-1f93-4753-9c4f-f74ed6f6b9da-kube-api-access-7qfqj\") pod \"2e627f94-1f93-4753-9c4f-f74ed6f6b9da\" (UID: \"2e627f94-1f93-4753-9c4f-f74ed6f6b9da\") " May 17 00:15:12.577373 kubelet[2670]: I0517 00:15:12.577058 2670 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e627f94-1f93-4753-9c4f-f74ed6f6b9da-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "2e627f94-1f93-4753-9c4f-f74ed6f6b9da" (UID: "2e627f94-1f93-4753-9c4f-f74ed6f6b9da"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 17 00:15:12.577373 kubelet[2670]: I0517 00:15:12.577308 2670 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e627f94-1f93-4753-9c4f-f74ed6f6b9da-kube-api-access-7qfqj" (OuterVolumeSpecName: "kube-api-access-7qfqj") pod "2e627f94-1f93-4753-9c4f-f74ed6f6b9da" (UID: "2e627f94-1f93-4753-9c4f-f74ed6f6b9da"). InnerVolumeSpecName "kube-api-access-7qfqj". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" May 17 00:15:12.580650 kubelet[2670]: I0517 00:15:12.580602 2670 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e627f94-1f93-4753-9c4f-f74ed6f6b9da-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "2e627f94-1f93-4753-9c4f-f74ed6f6b9da" (UID: "2e627f94-1f93-4753-9c4f-f74ed6f6b9da"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" May 17 00:15:12.671799 kubelet[2670]: I0517 00:15:12.671331 2670 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/2e627f94-1f93-4753-9c4f-f74ed6f6b9da-whisker-backend-key-pair\") on node \"ci-4081-3-3-n-16326e39d6\" DevicePath \"\"" May 17 00:15:12.671799 kubelet[2670]: I0517 00:15:12.671370 2670 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2e627f94-1f93-4753-9c4f-f74ed6f6b9da-whisker-ca-bundle\") on node \"ci-4081-3-3-n-16326e39d6\" DevicePath \"\"" May 17 00:15:12.671799 kubelet[2670]: I0517 00:15:12.671381 2670 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7qfqj\" (UniqueName: \"kubernetes.io/projected/2e627f94-1f93-4753-9c4f-f74ed6f6b9da-kube-api-access-7qfqj\") on node \"ci-4081-3-3-n-16326e39d6\" DevicePath \"\"" May 17 00:15:12.887301 systemd[1]: run-netns-cni\x2d134feeab\x2d0b3b\x2df481\x2d42d2\x2dc054b245c723.mount: Deactivated successfully. May 17 00:15:12.887770 systemd[1]: var-lib-kubelet-pods-2e627f94\x2d1f93\x2d4753\x2d9c4f\x2df74ed6f6b9da-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7qfqj.mount: Deactivated successfully. May 17 00:15:12.887932 systemd[1]: var-lib-kubelet-pods-2e627f94\x2d1f93\x2d4753\x2d9c4f\x2df74ed6f6b9da-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. May 17 00:15:12.980861 systemd[1]: Removed slice kubepods-besteffort-pod2e627f94_1f93_4753_9c4f_f74ed6f6b9da.slice - libcontainer container kubepods-besteffort-pod2e627f94_1f93_4753_9c4f_f74ed6f6b9da.slice. 
May 17 00:15:13.022808 kubelet[2670]: I0517 00:15:13.021759 2670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-42drg" podStartSLOduration=1.937583009 podStartE2EDuration="15.021732516s" podCreationTimestamp="2025-05-17 00:14:58 +0000 UTC" firstStartedPulling="2025-05-17 00:14:58.845738683 +0000 UTC m=+23.221965693" lastFinishedPulling="2025-05-17 00:15:11.92988815 +0000 UTC m=+36.306115200" observedRunningTime="2025-05-17 00:15:13.005252717 +0000 UTC m=+37.381479767" watchObservedRunningTime="2025-05-17 00:15:13.021732516 +0000 UTC m=+37.397959566" May 17 00:15:13.078999 kubelet[2670]: W0517 00:15:13.078732 2670 reflector.go:569] object-"calico-system"/"whisker-backend-key-pair": failed to list *v1.Secret: secrets "whisker-backend-key-pair" is forbidden: User "system:node:ci-4081-3-3-n-16326e39d6" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4081-3-3-n-16326e39d6' and this object May 17 00:15:13.078999 kubelet[2670]: E0517 00:15:13.078779 2670 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"whisker-backend-key-pair\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"whisker-backend-key-pair\" is forbidden: User \"system:node:ci-4081-3-3-n-16326e39d6\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4081-3-3-n-16326e39d6' and this object" logger="UnhandledError" May 17 00:15:13.078999 kubelet[2670]: I0517 00:15:13.078731 2670 status_manager.go:890] "Failed to get status for pod" podUID="ea4a179c-2064-482e-bd61-eeafaaf1f680" pod="calico-system/whisker-c64877bf5-xgbzp" err="pods \"whisker-c64877bf5-xgbzp\" is forbidden: User \"system:node:ci-4081-3-3-n-16326e39d6\" cannot get resource \"pods\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4081-3-3-n-16326e39d6' and this object" May 17 00:15:13.082912 systemd[1]: Created slice kubepods-besteffort-podea4a179c_2064_482e_bd61_eeafaaf1f680.slice - libcontainer container kubepods-besteffort-podea4a179c_2064_482e_bd61_eeafaaf1f680.slice. 
May 17 00:15:13.173916 kubelet[2670]: I0517 00:15:13.173496 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nf456\" (UniqueName: \"kubernetes.io/projected/ea4a179c-2064-482e-bd61-eeafaaf1f680-kube-api-access-nf456\") pod \"whisker-c64877bf5-xgbzp\" (UID: \"ea4a179c-2064-482e-bd61-eeafaaf1f680\") " pod="calico-system/whisker-c64877bf5-xgbzp" May 17 00:15:13.173916 kubelet[2670]: I0517 00:15:13.173624 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ea4a179c-2064-482e-bd61-eeafaaf1f680-whisker-ca-bundle\") pod \"whisker-c64877bf5-xgbzp\" (UID: \"ea4a179c-2064-482e-bd61-eeafaaf1f680\") " pod="calico-system/whisker-c64877bf5-xgbzp" May 17 00:15:13.173916 kubelet[2670]: I0517 00:15:13.173661 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ea4a179c-2064-482e-bd61-eeafaaf1f680-whisker-backend-key-pair\") pod \"whisker-c64877bf5-xgbzp\" (UID: \"ea4a179c-2064-482e-bd61-eeafaaf1f680\") " pod="calico-system/whisker-c64877bf5-xgbzp" May 17 00:15:13.757928 kubelet[2670]: I0517 00:15:13.757870 2670 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e627f94-1f93-4753-9c4f-f74ed6f6b9da" path="/var/lib/kubelet/pods/2e627f94-1f93-4753-9c4f-f74ed6f6b9da/volumes" May 17 00:15:13.977031 kubelet[2670]: I0517 00:15:13.976213 2670 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 00:15:14.274875 kubelet[2670]: E0517 00:15:14.274760 2670 secret.go:189] Couldn't get secret calico-system/whisker-backend-key-pair: failed to sync secret cache: timed out waiting for the condition May 17 00:15:14.275087 kubelet[2670]: E0517 00:15:14.274913 2670 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ea4a179c-2064-482e-bd61-eeafaaf1f680-whisker-backend-key-pair podName:ea4a179c-2064-482e-bd61-eeafaaf1f680 nodeName:}" failed. No retries permitted until 2025-05-17 00:15:14.774883247 +0000 UTC m=+39.151110337 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "whisker-backend-key-pair" (UniqueName: "kubernetes.io/secret/ea4a179c-2064-482e-bd61-eeafaaf1f680-whisker-backend-key-pair") pod "whisker-c64877bf5-xgbzp" (UID: "ea4a179c-2064-482e-bd61-eeafaaf1f680") : failed to sync secret cache: timed out waiting for the condition May 17 00:15:14.889712 containerd[1479]: time="2025-05-17T00:15:14.889419950Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-c64877bf5-xgbzp,Uid:ea4a179c-2064-482e-bd61-eeafaaf1f680,Namespace:calico-system,Attempt:0,}" May 17 00:15:15.059370 systemd-networkd[1369]: cali14d5cb852e1: Link UP May 17 00:15:15.059965 systemd-networkd[1369]: cali14d5cb852e1: Gained carrier May 17 00:15:15.088204 containerd[1479]: 2025-05-17 00:15:14.941 [INFO][4028] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 17 00:15:15.088204 containerd[1479]: 2025-05-17 00:15:14.961 [INFO][4028] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--n--16326e39d6-k8s-whisker--c64877bf5--xgbzp-eth0 whisker-c64877bf5- calico-system ea4a179c-2064-482e-bd61-eeafaaf1f680 895 0 2025-05-17 00:15:13 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:c64877bf5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081-3-3-n-16326e39d6 whisker-c64877bf5-xgbzp eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali14d5cb852e1 [] [] }} ContainerID="362b9471725017e93b9ea708bff004c515a5b726631ebd5ecee3554e979b2a71" Namespace="calico-system" Pod="whisker-c64877bf5-xgbzp" WorkloadEndpoint="ci--4081--3--3--n--16326e39d6-k8s-whisker--c64877bf5--xgbzp-" May 17 00:15:15.088204 containerd[1479]: 2025-05-17 00:15:14.961 [INFO][4028] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="362b9471725017e93b9ea708bff004c515a5b726631ebd5ecee3554e979b2a71" Namespace="calico-system" Pod="whisker-c64877bf5-xgbzp" WorkloadEndpoint="ci--4081--3--3--n--16326e39d6-k8s-whisker--c64877bf5--xgbzp-eth0" May 17 00:15:15.088204 containerd[1479]: 2025-05-17 00:15:14.993 [INFO][4041] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="362b9471725017e93b9ea708bff004c515a5b726631ebd5ecee3554e979b2a71" HandleID="k8s-pod-network.362b9471725017e93b9ea708bff004c515a5b726631ebd5ecee3554e979b2a71" Workload="ci--4081--3--3--n--16326e39d6-k8s-whisker--c64877bf5--xgbzp-eth0" May 17 00:15:15.088204 containerd[1479]: 2025-05-17 00:15:14.993 [INFO][4041] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="362b9471725017e93b9ea708bff004c515a5b726631ebd5ecee3554e979b2a71" HandleID="k8s-pod-network.362b9471725017e93b9ea708bff004c515a5b726631ebd5ecee3554e979b2a71" Workload="ci--4081--3--3--n--16326e39d6-k8s-whisker--c64877bf5--xgbzp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400022f1a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-3-n-16326e39d6", "pod":"whisker-c64877bf5-xgbzp", "timestamp":"2025-05-17 00:15:14.993572283 +0000 UTC"}, Hostname:"ci-4081-3-3-n-16326e39d6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:15:15.088204 containerd[1479]: 2025-05-17 00:15:14.993 [INFO][4041] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
May 17 00:15:15.088204 containerd[1479]: 2025-05-17 00:15:14.993 [INFO][4041] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:15:15.088204 containerd[1479]: 2025-05-17 00:15:14.993 [INFO][4041] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-n-16326e39d6' May 17 00:15:15.088204 containerd[1479]: 2025-05-17 00:15:15.009 [INFO][4041] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.362b9471725017e93b9ea708bff004c515a5b726631ebd5ecee3554e979b2a71" host="ci-4081-3-3-n-16326e39d6" May 17 00:15:15.088204 containerd[1479]: 2025-05-17 00:15:15.017 [INFO][4041] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-3-n-16326e39d6" May 17 00:15:15.088204 containerd[1479]: 2025-05-17 00:15:15.024 [INFO][4041] ipam/ipam.go 511: Trying affinity for 192.168.81.128/26 host="ci-4081-3-3-n-16326e39d6" May 17 00:15:15.088204 containerd[1479]: 2025-05-17 00:15:15.027 [INFO][4041] ipam/ipam.go 158: Attempting to load block cidr=192.168.81.128/26 host="ci-4081-3-3-n-16326e39d6" May 17 00:15:15.088204 containerd[1479]: 2025-05-17 00:15:15.030 [INFO][4041] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.81.128/26 host="ci-4081-3-3-n-16326e39d6" May 17 00:15:15.088204 containerd[1479]: 2025-05-17 00:15:15.031 [INFO][4041] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.81.128/26 handle="k8s-pod-network.362b9471725017e93b9ea708bff004c515a5b726631ebd5ecee3554e979b2a71" host="ci-4081-3-3-n-16326e39d6" May 17 00:15:15.088204 containerd[1479]: 2025-05-17 00:15:15.034 [INFO][4041] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.362b9471725017e93b9ea708bff004c515a5b726631ebd5ecee3554e979b2a71 May 17 00:15:15.088204 containerd[1479]: 2025-05-17 00:15:15.039 [INFO][4041] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.81.128/26 handle="k8s-pod-network.362b9471725017e93b9ea708bff004c515a5b726631ebd5ecee3554e979b2a71" host="ci-4081-3-3-n-16326e39d6" May 17 00:15:15.088204 containerd[1479]: 2025-05-17 00:15:15.047 [INFO][4041] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.81.129/26] block=192.168.81.128/26 handle="k8s-pod-network.362b9471725017e93b9ea708bff004c515a5b726631ebd5ecee3554e979b2a71" host="ci-4081-3-3-n-16326e39d6" May 17 00:15:15.088204 containerd[1479]: 2025-05-17 00:15:15.047 [INFO][4041] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.81.129/26] handle="k8s-pod-network.362b9471725017e93b9ea708bff004c515a5b726631ebd5ecee3554e979b2a71" host="ci-4081-3-3-n-16326e39d6" May 17 00:15:15.088204 containerd[1479]: 2025-05-17 00:15:15.047 [INFO][4041] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 17 00:15:15.088204 containerd[1479]: 2025-05-17 00:15:15.047 [INFO][4041] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.81.129/26] IPv6=[] ContainerID="362b9471725017e93b9ea708bff004c515a5b726631ebd5ecee3554e979b2a71" HandleID="k8s-pod-network.362b9471725017e93b9ea708bff004c515a5b726631ebd5ecee3554e979b2a71" Workload="ci--4081--3--3--n--16326e39d6-k8s-whisker--c64877bf5--xgbzp-eth0" May 17 00:15:15.089130 containerd[1479]: 2025-05-17 00:15:15.049 [INFO][4028] cni-plugin/k8s.go 418: Populated endpoint ContainerID="362b9471725017e93b9ea708bff004c515a5b726631ebd5ecee3554e979b2a71" Namespace="calico-system" Pod="whisker-c64877bf5-xgbzp" WorkloadEndpoint="ci--4081--3--3--n--16326e39d6-k8s-whisker--c64877bf5--xgbzp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--16326e39d6-k8s-whisker--c64877bf5--xgbzp-eth0", GenerateName:"whisker-c64877bf5-", Namespace:"calico-system", SelfLink:"", UID:"ea4a179c-2064-482e-bd61-eeafaaf1f680", ResourceVersion:"895", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 15, 13, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"c64877bf5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-16326e39d6", ContainerID:"", Pod:"whisker-c64877bf5-xgbzp", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.81.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali14d5cb852e1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:15:15.089130 containerd[1479]: 2025-05-17 00:15:15.049 [INFO][4028] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.81.129/32] ContainerID="362b9471725017e93b9ea708bff004c515a5b726631ebd5ecee3554e979b2a71" Namespace="calico-system" Pod="whisker-c64877bf5-xgbzp" WorkloadEndpoint="ci--4081--3--3--n--16326e39d6-k8s-whisker--c64877bf5--xgbzp-eth0" May 17 00:15:15.089130 containerd[1479]: 2025-05-17 00:15:15.049 [INFO][4028] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali14d5cb852e1 ContainerID="362b9471725017e93b9ea708bff004c515a5b726631ebd5ecee3554e979b2a71" Namespace="calico-system" Pod="whisker-c64877bf5-xgbzp" WorkloadEndpoint="ci--4081--3--3--n--16326e39d6-k8s-whisker--c64877bf5--xgbzp-eth0" May 17 00:15:15.089130 containerd[1479]: 2025-05-17 00:15:15.063 [INFO][4028] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="362b9471725017e93b9ea708bff004c515a5b726631ebd5ecee3554e979b2a71" Namespace="calico-system" Pod="whisker-c64877bf5-xgbzp" WorkloadEndpoint="ci--4081--3--3--n--16326e39d6-k8s-whisker--c64877bf5--xgbzp-eth0" May 17 00:15:15.089130 containerd[1479]: 2025-05-17 00:15:15.063 [INFO][4028] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="362b9471725017e93b9ea708bff004c515a5b726631ebd5ecee3554e979b2a71" Namespace="calico-system"
Pod="whisker-c64877bf5-xgbzp" WorkloadEndpoint="ci--4081--3--3--n--16326e39d6-k8s-whisker--c64877bf5--xgbzp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--16326e39d6-k8s-whisker--c64877bf5--xgbzp-eth0", GenerateName:"whisker-c64877bf5-", Namespace:"calico-system", SelfLink:"", UID:"ea4a179c-2064-482e-bd61-eeafaaf1f680", ResourceVersion:"895", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 15, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"c64877bf5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-16326e39d6", ContainerID:"362b9471725017e93b9ea708bff004c515a5b726631ebd5ecee3554e979b2a71", Pod:"whisker-c64877bf5-xgbzp", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.81.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali14d5cb852e1", MAC:"ca:39:83:54:c1:ef", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:15:15.089130 containerd[1479]: 2025-05-17 00:15:15.085 [INFO][4028] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="362b9471725017e93b9ea708bff004c515a5b726631ebd5ecee3554e979b2a71" Namespace="calico-system" Pod="whisker-c64877bf5-xgbzp" WorkloadEndpoint="ci--4081--3--3--n--16326e39d6-k8s-whisker--c64877bf5--xgbzp-eth0" May 17 00:15:15.118622 containerd[1479]: time="2025-05-17T00:15:15.118383062Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:15:15.119243 containerd[1479]: time="2025-05-17T00:15:15.119032745Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:15:15.119243 containerd[1479]: time="2025-05-17T00:15:15.119061025Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:15:15.119243 containerd[1479]: time="2025-05-17T00:15:15.119191146Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:15:15.158841 systemd[1]: Started cri-containerd-362b9471725017e93b9ea708bff004c515a5b726631ebd5ecee3554e979b2a71.scope - libcontainer container 362b9471725017e93b9ea708bff004c515a5b726631ebd5ecee3554e979b2a71. 
May 17 00:15:15.240109 containerd[1479]: time="2025-05-17T00:15:15.240065706Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-c64877bf5-xgbzp,Uid:ea4a179c-2064-482e-bd61-eeafaaf1f680,Namespace:calico-system,Attempt:0,} returns sandbox id \"362b9471725017e93b9ea708bff004c515a5b726631ebd5ecee3554e979b2a71\"" May 17 00:15:15.245296 containerd[1479]: time="2025-05-17T00:15:15.244887568Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 17 00:15:15.492252 containerd[1479]: time="2025-05-17T00:15:15.491745553Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:15:15.494014 containerd[1479]: time="2025-05-17T00:15:15.493847603Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:15:15.494014 containerd[1479]: time="2025-05-17T00:15:15.493991004Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 17 00:15:15.494352 kubelet[2670]: E0517 00:15:15.494262 2670 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:15:15.494352 kubelet[2670]: E0517 00:15:15.494332 2670 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:15:15.500693 kubelet[2670]: E0517 00:15:15.500534 2670 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:ecfce7dcd79642e9a67dfb965e76b411,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nf456,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-c64877bf5-xgbzp_calico-system(ea4a179c-2064-482e-bd61-eeafaaf1f680): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:15:15.511326 containerd[1479]: time="2025-05-17T00:15:15.510934282Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 17 00:15:15.585976 kubelet[2670]: I0517 00:15:15.585657 2670 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 00:15:15.746176 containerd[1479]: time="2025-05-17T00:15:15.746017692Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:15:15.747513 containerd[1479]: time="2025-05-17T00:15:15.747419819Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:15:15.747677 containerd[1479]: time="2025-05-17T00:15:15.747592659Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 17 00:15:15.749978 kubelet[2670]: E0517 00:15:15.749383 2670 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed 
to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:15:15.749978 kubelet[2670]: E0517 00:15:15.749501 2670 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:15:15.750197 kubelet[2670]: E0517 00:15:15.749666 2670 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nf456,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-c64877bf5-xgbzp_calico-system(ea4a179c-2064-482e-bd61-eeafaaf1f680): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:15:15.750922 kubelet[2670]: E0517 00:15:15.750879 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-c64877bf5-xgbzp" podUID="ea4a179c-2064-482e-bd61-eeafaaf1f680" May 17 00:15:15.796623 systemd[1]: run-containerd-runc-k8s.io-362b9471725017e93b9ea708bff004c515a5b726631ebd5ecee3554e979b2a71-runc.4q00HH.mount: Deactivated successfully. May 17 00:15:15.984981 kubelet[2670]: E0517 00:15:15.984918 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-c64877bf5-xgbzp" podUID="ea4a179c-2064-482e-bd61-eeafaaf1f680" May 17 00:15:16.132489 kernel: bpftool[4138]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set May 17 00:15:16.205687 systemd-networkd[1369]: cali14d5cb852e1: Gained IPv6LL May 17 00:15:16.363855 systemd-networkd[1369]: vxlan.calico: Link UP May 17 00:15:16.363865 systemd-networkd[1369]: vxlan.calico: Gained carrier May 17 00:15:16.992878 kubelet[2670]: E0517 00:15:16.991609 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to 
resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-c64877bf5-xgbzp" podUID="ea4a179c-2064-482e-bd61-eeafaaf1f680" May 17 00:15:17.742991 systemd-networkd[1369]: vxlan.calico: Gained IPv6LL May 17 00:15:20.756288 containerd[1479]: time="2025-05-17T00:15:20.754747305Z" level=info msg="StopPodSandbox for \"7d60180c4aec9b8c4dceff3978d7701133e0f03c00224dbe2bfe5cc62e984adc\"" May 17 00:15:20.859194 containerd[1479]: 2025-05-17 00:15:20.818 [INFO][4266] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7d60180c4aec9b8c4dceff3978d7701133e0f03c00224dbe2bfe5cc62e984adc" May 17 00:15:20.859194 containerd[1479]: 2025-05-17 00:15:20.818 [INFO][4266] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7d60180c4aec9b8c4dceff3978d7701133e0f03c00224dbe2bfe5cc62e984adc" iface="eth0" netns="/var/run/netns/cni-db7e8fed-fd97-6bd4-f6da-172e7cd7f896" May 17 00:15:20.859194 containerd[1479]: 2025-05-17 00:15:20.818 [INFO][4266] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7d60180c4aec9b8c4dceff3978d7701133e0f03c00224dbe2bfe5cc62e984adc" iface="eth0" netns="/var/run/netns/cni-db7e8fed-fd97-6bd4-f6da-172e7cd7f896" May 17 00:15:20.859194 containerd[1479]: 2025-05-17 00:15:20.818 [INFO][4266] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="7d60180c4aec9b8c4dceff3978d7701133e0f03c00224dbe2bfe5cc62e984adc" iface="eth0" netns="/var/run/netns/cni-db7e8fed-fd97-6bd4-f6da-172e7cd7f896" May 17 00:15:20.859194 containerd[1479]: 2025-05-17 00:15:20.818 [INFO][4266] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7d60180c4aec9b8c4dceff3978d7701133e0f03c00224dbe2bfe5cc62e984adc" May 17 00:15:20.859194 containerd[1479]: 2025-05-17 00:15:20.818 [INFO][4266] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7d60180c4aec9b8c4dceff3978d7701133e0f03c00224dbe2bfe5cc62e984adc" May 17 00:15:20.859194 containerd[1479]: 2025-05-17 00:15:20.842 [INFO][4273] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7d60180c4aec9b8c4dceff3978d7701133e0f03c00224dbe2bfe5cc62e984adc" HandleID="k8s-pod-network.7d60180c4aec9b8c4dceff3978d7701133e0f03c00224dbe2bfe5cc62e984adc" Workload="ci--4081--3--3--n--16326e39d6-k8s-calico--apiserver--6cbf8c7948--6pm57-eth0" May 17 00:15:20.859194 containerd[1479]: 2025-05-17 00:15:20.842 [INFO][4273] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:15:20.859194 containerd[1479]: 2025-05-17 00:15:20.842 [INFO][4273] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:15:20.859194 containerd[1479]: 2025-05-17 00:15:20.852 [WARNING][4273] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7d60180c4aec9b8c4dceff3978d7701133e0f03c00224dbe2bfe5cc62e984adc" HandleID="k8s-pod-network.7d60180c4aec9b8c4dceff3978d7701133e0f03c00224dbe2bfe5cc62e984adc" Workload="ci--4081--3--3--n--16326e39d6-k8s-calico--apiserver--6cbf8c7948--6pm57-eth0" May 17 00:15:20.859194 containerd[1479]: 2025-05-17 00:15:20.852 [INFO][4273] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7d60180c4aec9b8c4dceff3978d7701133e0f03c00224dbe2bfe5cc62e984adc" HandleID="k8s-pod-network.7d60180c4aec9b8c4dceff3978d7701133e0f03c00224dbe2bfe5cc62e984adc" Workload="ci--4081--3--3--n--16326e39d6-k8s-calico--apiserver--6cbf8c7948--6pm57-eth0" May 17 00:15:20.859194 containerd[1479]: 2025-05-17 00:15:20.855 [INFO][4273] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:15:20.859194 containerd[1479]: 2025-05-17 00:15:20.857 [INFO][4266] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7d60180c4aec9b8c4dceff3978d7701133e0f03c00224dbe2bfe5cc62e984adc" May 17 00:15:20.860654 containerd[1479]: time="2025-05-17T00:15:20.860018791Z" level=info msg="TearDown network for sandbox \"7d60180c4aec9b8c4dceff3978d7701133e0f03c00224dbe2bfe5cc62e984adc\" successfully" May 17 00:15:20.860654 containerd[1479]: time="2025-05-17T00:15:20.860067351Z" level=info msg="StopPodSandbox for \"7d60180c4aec9b8c4dceff3978d7701133e0f03c00224dbe2bfe5cc62e984adc\" returns successfully" May 17 00:15:20.863416 containerd[1479]: time="2025-05-17T00:15:20.863375045Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cbf8c7948-6pm57,Uid:753684c6-cd41-4791-9ed7-725f4728c2a4,Namespace:calico-apiserver,Attempt:1,}" May 17 00:15:20.864629 systemd[1]: run-netns-cni\x2ddb7e8fed\x2dfd97\x2d6bd4\x2df6da\x2d172e7cd7f896.mount: Deactivated successfully. 
May 17 00:15:21.020720 systemd-networkd[1369]: cali97bc9f24b57: Link UP May 17 00:15:21.021732 systemd-networkd[1369]: cali97bc9f24b57: Gained carrier May 17 00:15:21.042991 containerd[1479]: 2025-05-17 00:15:20.930 [INFO][4281] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--n--16326e39d6-k8s-calico--apiserver--6cbf8c7948--6pm57-eth0 calico-apiserver-6cbf8c7948- calico-apiserver 753684c6-cd41-4791-9ed7-725f4728c2a4 949 0 2025-05-17 00:14:52 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6cbf8c7948 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-3-n-16326e39d6 calico-apiserver-6cbf8c7948-6pm57 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali97bc9f24b57 [] [] }} ContainerID="fd491bd53711038e437cc1447d10887e0ae489d15c94fc55a6255dae964dd001" Namespace="calico-apiserver" Pod="calico-apiserver-6cbf8c7948-6pm57" WorkloadEndpoint="ci--4081--3--3--n--16326e39d6-k8s-calico--apiserver--6cbf8c7948--6pm57-" May 17 00:15:21.042991 containerd[1479]: 2025-05-17 00:15:20.930 [INFO][4281] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fd491bd53711038e437cc1447d10887e0ae489d15c94fc55a6255dae964dd001" Namespace="calico-apiserver" Pod="calico-apiserver-6cbf8c7948-6pm57" WorkloadEndpoint="ci--4081--3--3--n--16326e39d6-k8s-calico--apiserver--6cbf8c7948--6pm57-eth0" May 17 00:15:21.042991 containerd[1479]: 2025-05-17 00:15:20.961 [INFO][4294] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fd491bd53711038e437cc1447d10887e0ae489d15c94fc55a6255dae964dd001" HandleID="k8s-pod-network.fd491bd53711038e437cc1447d10887e0ae489d15c94fc55a6255dae964dd001" Workload="ci--4081--3--3--n--16326e39d6-k8s-calico--apiserver--6cbf8c7948--6pm57-eth0" May 17 00:15:21.042991 containerd[1479]: 2025-05-17 00:15:20.961 [INFO][4294] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="fd491bd53711038e437cc1447d10887e0ae489d15c94fc55a6255dae964dd001" HandleID="k8s-pod-network.fd491bd53711038e437cc1447d10887e0ae489d15c94fc55a6255dae964dd001" Workload="ci--4081--3--3--n--16326e39d6-k8s-calico--apiserver--6cbf8c7948--6pm57-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d7020), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-3-n-16326e39d6", "pod":"calico-apiserver-6cbf8c7948-6pm57", "timestamp":"2025-05-17 00:15:20.96136102 +0000 UTC"}, Hostname:"ci-4081-3-3-n-16326e39d6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:15:21.042991 containerd[1479]: 2025-05-17 00:15:20.961 [INFO][4294] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:15:21.042991 containerd[1479]: 2025-05-17 00:15:20.961 [INFO][4294] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:15:21.042991 containerd[1479]: 2025-05-17 00:15:20.961 [INFO][4294] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-n-16326e39d6' May 17 00:15:21.042991 containerd[1479]: 2025-05-17 00:15:20.973 [INFO][4294] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fd491bd53711038e437cc1447d10887e0ae489d15c94fc55a6255dae964dd001" host="ci-4081-3-3-n-16326e39d6" May 17 00:15:21.042991 containerd[1479]: 2025-05-17 00:15:20.979 [INFO][4294] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-3-n-16326e39d6" May 17 00:15:21.042991 containerd[1479]: 2025-05-17 00:15:20.985 [INFO][4294] ipam/ipam.go 511: Trying affinity for 192.168.81.128/26 host="ci-4081-3-3-n-16326e39d6" May 17 00:15:21.042991 containerd[1479]: 2025-05-17 00:15:20.988 [INFO][4294] ipam/ipam.go 158: Attempting to load block cidr=192.168.81.128/26 host="ci-4081-3-3-n-16326e39d6" May 17 00:15:21.042991 containerd[1479]: 2025-05-17 00:15:20.992 [INFO][4294] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.81.128/26 host="ci-4081-3-3-n-16326e39d6" May 17 00:15:21.042991 containerd[1479]: 2025-05-17 00:15:20.992 [INFO][4294] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.81.128/26 handle="k8s-pod-network.fd491bd53711038e437cc1447d10887e0ae489d15c94fc55a6255dae964dd001" host="ci-4081-3-3-n-16326e39d6" May 17 00:15:21.042991 containerd[1479]: 2025-05-17 00:15:20.994 [INFO][4294] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.fd491bd53711038e437cc1447d10887e0ae489d15c94fc55a6255dae964dd001 May 17 00:15:21.042991 containerd[1479]: 2025-05-17 00:15:21.000 [INFO][4294] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.81.128/26 handle="k8s-pod-network.fd491bd53711038e437cc1447d10887e0ae489d15c94fc55a6255dae964dd001" host="ci-4081-3-3-n-16326e39d6" May 17 00:15:21.042991 containerd[1479]: 2025-05-17 00:15:21.011 [INFO][4294] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.81.130/26] block=192.168.81.128/26 handle="k8s-pod-network.fd491bd53711038e437cc1447d10887e0ae489d15c94fc55a6255dae964dd001" host="ci-4081-3-3-n-16326e39d6" May 17 00:15:21.042991 containerd[1479]: 2025-05-17 00:15:21.011 [INFO][4294] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.81.130/26] handle="k8s-pod-network.fd491bd53711038e437cc1447d10887e0ae489d15c94fc55a6255dae964dd001" host="ci-4081-3-3-n-16326e39d6" May 17 00:15:21.042991 containerd[1479]: 2025-05-17 00:15:21.011 [INFO][4294] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 17 00:15:21.042991 containerd[1479]: 2025-05-17 00:15:21.011 [INFO][4294] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.81.130/26] IPv6=[] ContainerID="fd491bd53711038e437cc1447d10887e0ae489d15c94fc55a6255dae964dd001" HandleID="k8s-pod-network.fd491bd53711038e437cc1447d10887e0ae489d15c94fc55a6255dae964dd001" Workload="ci--4081--3--3--n--16326e39d6-k8s-calico--apiserver--6cbf8c7948--6pm57-eth0" May 17 00:15:21.043943 containerd[1479]: 2025-05-17 00:15:21.014 [INFO][4281] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fd491bd53711038e437cc1447d10887e0ae489d15c94fc55a6255dae964dd001" Namespace="calico-apiserver" Pod="calico-apiserver-6cbf8c7948-6pm57" WorkloadEndpoint="ci--4081--3--3--n--16326e39d6-k8s-calico--apiserver--6cbf8c7948--6pm57-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--16326e39d6-k8s-calico--apiserver--6cbf8c7948--6pm57-eth0", GenerateName:"calico-apiserver-6cbf8c7948-", Namespace:"calico-apiserver", SelfLink:"", UID:"753684c6-cd41-4791-9ed7-725f4728c2a4", ResourceVersion:"949", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 14, 52, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6cbf8c7948", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-16326e39d6", ContainerID:"", Pod:"calico-apiserver-6cbf8c7948-6pm57", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.81.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali97bc9f24b57", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:15:21.043943 containerd[1479]: 2025-05-17 00:15:21.015 [INFO][4281] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.81.130/32] ContainerID="fd491bd53711038e437cc1447d10887e0ae489d15c94fc55a6255dae964dd001" Namespace="calico-apiserver" Pod="calico-apiserver-6cbf8c7948-6pm57" WorkloadEndpoint="ci--4081--3--3--n--16326e39d6-k8s-calico--apiserver--6cbf8c7948--6pm57-eth0" May 17 00:15:21.043943 containerd[1479]: 2025-05-17 00:15:21.015 [INFO][4281] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali97bc9f24b57 ContainerID="fd491bd53711038e437cc1447d10887e0ae489d15c94fc55a6255dae964dd001" Namespace="calico-apiserver" Pod="calico-apiserver-6cbf8c7948-6pm57" WorkloadEndpoint="ci--4081--3--3--n--16326e39d6-k8s-calico--apiserver--6cbf8c7948--6pm57-eth0" May 17 00:15:21.043943 containerd[1479]: 2025-05-17 00:15:21.023 [INFO][4281] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fd491bd53711038e437cc1447d10887e0ae489d15c94fc55a6255dae964dd001" Namespace="calico-apiserver" Pod="calico-apiserver-6cbf8c7948-6pm57" WorkloadEndpoint="ci--4081--3--3--n--16326e39d6-k8s-calico--apiserver--6cbf8c7948--6pm57-eth0" May 17 00:15:21.043943 containerd[1479]: 2025-05-17 00:15:21.023
[INFO][4281] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fd491bd53711038e437cc1447d10887e0ae489d15c94fc55a6255dae964dd001" Namespace="calico-apiserver" Pod="calico-apiserver-6cbf8c7948-6pm57" WorkloadEndpoint="ci--4081--3--3--n--16326e39d6-k8s-calico--apiserver--6cbf8c7948--6pm57-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--16326e39d6-k8s-calico--apiserver--6cbf8c7948--6pm57-eth0", GenerateName:"calico-apiserver-6cbf8c7948-", Namespace:"calico-apiserver", SelfLink:"", UID:"753684c6-cd41-4791-9ed7-725f4728c2a4", ResourceVersion:"949", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 14, 52, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6cbf8c7948", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-16326e39d6", ContainerID:"fd491bd53711038e437cc1447d10887e0ae489d15c94fc55a6255dae964dd001", Pod:"calico-apiserver-6cbf8c7948-6pm57", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.81.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali97bc9f24b57", MAC:"c6:45:3f:71:94:aa", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:15:21.043943 containerd[1479]: 2025-05-17 00:15:21.033 [INFO][4281] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fd491bd53711038e437cc1447d10887e0ae489d15c94fc55a6255dae964dd001" Namespace="calico-apiserver" Pod="calico-apiserver-6cbf8c7948-6pm57" WorkloadEndpoint="ci--4081--3--3--n--16326e39d6-k8s-calico--apiserver--6cbf8c7948--6pm57-eth0" May 17 00:15:21.064614 containerd[1479]: time="2025-05-17T00:15:21.064496052Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:15:21.064614 containerd[1479]: time="2025-05-17T00:15:21.064567092Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:15:21.064614 containerd[1479]: time="2025-05-17T00:15:21.064592972Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:15:21.064979 containerd[1479]: time="2025-05-17T00:15:21.064689173Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:15:21.094695 systemd[1]: Started cri-containerd-fd491bd53711038e437cc1447d10887e0ae489d15c94fc55a6255dae964dd001.scope - libcontainer container fd491bd53711038e437cc1447d10887e0ae489d15c94fc55a6255dae964dd001.
May 17 00:15:21.138179 containerd[1479]: time="2025-05-17T00:15:21.138110678Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cbf8c7948-6pm57,Uid:753684c6-cd41-4791-9ed7-725f4728c2a4,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"fd491bd53711038e437cc1447d10887e0ae489d15c94fc55a6255dae964dd001\"" May 17 00:15:21.141799 containerd[1479]: time="2025-05-17T00:15:21.141753854Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\"" May 17 00:15:21.757615 containerd[1479]: time="2025-05-17T00:15:21.757133135Z" level=info msg="StopPodSandbox for \"37af3809ae3014497a3320c103f9e25a98510eb43ca13d2b6a2a510153c9264a\"" May 17 00:15:21.759037 containerd[1479]: time="2025-05-17T00:15:21.757486336Z" level=info msg="StopPodSandbox for \"d251f229c257c70095e861e756d911bcce40108ac5970cf7024f69fcafa22ab9\"" May 17 00:15:21.918545 containerd[1479]: 2025-05-17 00:15:21.840 [INFO][4364] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="37af3809ae3014497a3320c103f9e25a98510eb43ca13d2b6a2a510153c9264a" May 17 00:15:21.918545 containerd[1479]: 2025-05-17 00:15:21.843 [INFO][4364] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="37af3809ae3014497a3320c103f9e25a98510eb43ca13d2b6a2a510153c9264a" iface="eth0" netns="/var/run/netns/cni-420ca508-98e1-ff5f-5438-55ced91ab606" May 17 00:15:21.918545 containerd[1479]: 2025-05-17 00:15:21.844 [INFO][4364] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="37af3809ae3014497a3320c103f9e25a98510eb43ca13d2b6a2a510153c9264a" iface="eth0" netns="/var/run/netns/cni-420ca508-98e1-ff5f-5438-55ced91ab606" May 17 00:15:21.918545 containerd[1479]: 2025-05-17 00:15:21.844 [INFO][4364] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="37af3809ae3014497a3320c103f9e25a98510eb43ca13d2b6a2a510153c9264a" iface="eth0" netns="/var/run/netns/cni-420ca508-98e1-ff5f-5438-55ced91ab606" May 17 00:15:21.918545 containerd[1479]: 2025-05-17 00:15:21.844 [INFO][4364] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="37af3809ae3014497a3320c103f9e25a98510eb43ca13d2b6a2a510153c9264a" May 17 00:15:21.918545 containerd[1479]: 2025-05-17 00:15:21.844 [INFO][4364] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="37af3809ae3014497a3320c103f9e25a98510eb43ca13d2b6a2a510153c9264a" May 17 00:15:21.918545 containerd[1479]: 2025-05-17 00:15:21.894 [INFO][4387] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="37af3809ae3014497a3320c103f9e25a98510eb43ca13d2b6a2a510153c9264a" HandleID="k8s-pod-network.37af3809ae3014497a3320c103f9e25a98510eb43ca13d2b6a2a510153c9264a" Workload="ci--4081--3--3--n--16326e39d6-k8s-coredns--668d6bf9bc--n4blv-eth0" May 17 00:15:21.918545 containerd[1479]: 2025-05-17 00:15:21.895 [INFO][4387] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:15:21.918545 containerd[1479]: 2025-05-17 00:15:21.895 [INFO][4387] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:15:21.918545 containerd[1479]: 2025-05-17 00:15:21.910 [WARNING][4387] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="37af3809ae3014497a3320c103f9e25a98510eb43ca13d2b6a2a510153c9264a" HandleID="k8s-pod-network.37af3809ae3014497a3320c103f9e25a98510eb43ca13d2b6a2a510153c9264a" Workload="ci--4081--3--3--n--16326e39d6-k8s-coredns--668d6bf9bc--n4blv-eth0" May 17 00:15:21.918545 containerd[1479]: 2025-05-17 00:15:21.910 [INFO][4387] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="37af3809ae3014497a3320c103f9e25a98510eb43ca13d2b6a2a510153c9264a" HandleID="k8s-pod-network.37af3809ae3014497a3320c103f9e25a98510eb43ca13d2b6a2a510153c9264a" Workload="ci--4081--3--3--n--16326e39d6-k8s-coredns--668d6bf9bc--n4blv-eth0" May 17 00:15:21.918545 containerd[1479]: 2025-05-17 00:15:21.913 [INFO][4387] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:15:21.918545 containerd[1479]: 2025-05-17 00:15:21.916 [INFO][4364] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="37af3809ae3014497a3320c103f9e25a98510eb43ca13d2b6a2a510153c9264a" May 17 00:15:21.922183 containerd[1479]: time="2025-05-17T00:15:21.920513575Z" level=info msg="TearDown network for sandbox \"37af3809ae3014497a3320c103f9e25a98510eb43ca13d2b6a2a510153c9264a\" successfully" May 17 00:15:21.922183 containerd[1479]: time="2025-05-17T00:15:21.920629175Z" level=info msg="StopPodSandbox for \"37af3809ae3014497a3320c103f9e25a98510eb43ca13d2b6a2a510153c9264a\" returns successfully" May 17 00:15:21.924802 containerd[1479]: time="2025-05-17T00:15:21.923075185Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-n4blv,Uid:b30848da-8b46-4cdc-baaa-f3567b6377c3,Namespace:kube-system,Attempt:1,}" May 17 00:15:21.924329 systemd[1]: run-netns-cni\x2d420ca508\x2d98e1\x2dff5f\x2d5438\x2d55ced91ab606.mount: Deactivated successfully. May 17 00:15:21.940184 containerd[1479]: 2025-05-17 00:15:21.853 [INFO][4373] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d251f229c257c70095e861e756d911bcce40108ac5970cf7024f69fcafa22ab9" May 17 00:15:21.940184 containerd[1479]: 2025-05-17 00:15:21.854 [INFO][4373] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d251f229c257c70095e861e756d911bcce40108ac5970cf7024f69fcafa22ab9" iface="eth0" netns="/var/run/netns/cni-0ad835d3-80c8-698a-a7f7-445f73136c78" May 17 00:15:21.940184 containerd[1479]: 2025-05-17 00:15:21.855 [INFO][4373] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d251f229c257c70095e861e756d911bcce40108ac5970cf7024f69fcafa22ab9" iface="eth0" netns="/var/run/netns/cni-0ad835d3-80c8-698a-a7f7-445f73136c78" May 17 00:15:21.940184 containerd[1479]: 2025-05-17 00:15:21.856 [INFO][4373] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="d251f229c257c70095e861e756d911bcce40108ac5970cf7024f69fcafa22ab9" iface="eth0" netns="/var/run/netns/cni-0ad835d3-80c8-698a-a7f7-445f73136c78" May 17 00:15:21.940184 containerd[1479]: 2025-05-17 00:15:21.856 [INFO][4373] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d251f229c257c70095e861e756d911bcce40108ac5970cf7024f69fcafa22ab9" May 17 00:15:21.940184 containerd[1479]: 2025-05-17 00:15:21.856 [INFO][4373] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d251f229c257c70095e861e756d911bcce40108ac5970cf7024f69fcafa22ab9" May 17 00:15:21.940184 containerd[1479]: 2025-05-17 00:15:21.897 [INFO][4392] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d251f229c257c70095e861e756d911bcce40108ac5970cf7024f69fcafa22ab9" HandleID="k8s-pod-network.d251f229c257c70095e861e756d911bcce40108ac5970cf7024f69fcafa22ab9" Workload="ci--4081--3--3--n--16326e39d6-k8s-csi--node--driver--fhb48-eth0" May 17 00:15:21.940184 containerd[1479]: 2025-05-17 00:15:21.898 [INFO][4392] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:15:21.940184 containerd[1479]: 2025-05-17 00:15:21.913 [INFO][4392] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:15:21.940184 containerd[1479]: 2025-05-17 00:15:21.932 [WARNING][4392] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d251f229c257c70095e861e756d911bcce40108ac5970cf7024f69fcafa22ab9" HandleID="k8s-pod-network.d251f229c257c70095e861e756d911bcce40108ac5970cf7024f69fcafa22ab9" Workload="ci--4081--3--3--n--16326e39d6-k8s-csi--node--driver--fhb48-eth0" May 17 00:15:21.940184 containerd[1479]: 2025-05-17 00:15:21.933 [INFO][4392] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d251f229c257c70095e861e756d911bcce40108ac5970cf7024f69fcafa22ab9" HandleID="k8s-pod-network.d251f229c257c70095e861e756d911bcce40108ac5970cf7024f69fcafa22ab9" Workload="ci--4081--3--3--n--16326e39d6-k8s-csi--node--driver--fhb48-eth0" May 17 00:15:21.940184 containerd[1479]: 2025-05-17 00:15:21.936 [INFO][4392] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:15:21.940184 containerd[1479]: 2025-05-17 00:15:21.937 [INFO][4373] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d251f229c257c70095e861e756d911bcce40108ac5970cf7024f69fcafa22ab9" May 17 00:15:21.945521 containerd[1479]: time="2025-05-17T00:15:21.942609427Z" level=info msg="TearDown network for sandbox \"d251f229c257c70095e861e756d911bcce40108ac5970cf7024f69fcafa22ab9\" successfully" May 17 00:15:21.945521 containerd[1479]: time="2025-05-17T00:15:21.942665547Z" level=info msg="StopPodSandbox for \"d251f229c257c70095e861e756d911bcce40108ac5970cf7024f69fcafa22ab9\" returns successfully" May 17 00:15:21.944378 systemd[1]: run-netns-cni\x2d0ad835d3\x2d80c8\x2d698a\x2da7f7\x2d445f73136c78.mount: Deactivated successfully. 
May 17 00:15:21.947875 containerd[1479]: time="2025-05-17T00:15:21.947746928Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fhb48,Uid:bf41328e-d1ed-475d-9a4a-c70bc9451b6f,Namespace:calico-system,Attempt:1,}" May 17 00:15:22.122491 systemd-networkd[1369]: cali53a1da8fa0f: Link UP May 17 00:15:22.124524 systemd-networkd[1369]: cali53a1da8fa0f: Gained carrier May 17 00:15:22.144128 containerd[1479]: 2025-05-17 00:15:22.011 [INFO][4400] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--n--16326e39d6-k8s-coredns--668d6bf9bc--n4blv-eth0 coredns-668d6bf9bc- kube-system b30848da-8b46-4cdc-baaa-f3567b6377c3 957 0 2025-05-17 00:14:41 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-3-n-16326e39d6 coredns-668d6bf9bc-n4blv eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali53a1da8fa0f [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="f14f985d83d1befe7413123e16ac001ac95bbe341afb9adba5418bf9c4344204" Namespace="kube-system" Pod="coredns-668d6bf9bc-n4blv" WorkloadEndpoint="ci--4081--3--3--n--16326e39d6-k8s-coredns--668d6bf9bc--n4blv-" May 17 00:15:22.144128 containerd[1479]: 2025-05-17 00:15:22.011 [INFO][4400] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f14f985d83d1befe7413123e16ac001ac95bbe341afb9adba5418bf9c4344204" Namespace="kube-system" Pod="coredns-668d6bf9bc-n4blv" WorkloadEndpoint="ci--4081--3--3--n--16326e39d6-k8s-coredns--668d6bf9bc--n4blv-eth0" May 17 00:15:22.144128 containerd[1479]: 2025-05-17 00:15:22.053 [INFO][4424] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f14f985d83d1befe7413123e16ac001ac95bbe341afb9adba5418bf9c4344204" HandleID="k8s-pod-network.f14f985d83d1befe7413123e16ac001ac95bbe341afb9adba5418bf9c4344204" Workload="ci--4081--3--3--n--16326e39d6-k8s-coredns--668d6bf9bc--n4blv-eth0" May 17 00:15:22.144128 containerd[1479]: 2025-05-17 00:15:22.054 [INFO][4424] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f14f985d83d1befe7413123e16ac001ac95bbe341afb9adba5418bf9c4344204" HandleID="k8s-pod-network.f14f985d83d1befe7413123e16ac001ac95bbe341afb9adba5418bf9c4344204" Workload="ci--4081--3--3--n--16326e39d6-k8s-coredns--668d6bf9bc--n4blv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d7020), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-3-n-16326e39d6", "pod":"coredns-668d6bf9bc-n4blv", "timestamp":"2025-05-17 00:15:22.053750766 +0000 UTC"}, Hostname:"ci-4081-3-3-n-16326e39d6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:15:22.144128 containerd[1479]: 2025-05-17 00:15:22.054 [INFO][4424] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:15:22.144128 containerd[1479]: 2025-05-17 00:15:22.054 [INFO][4424] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:15:22.144128 containerd[1479]: 2025-05-17 00:15:22.054 [INFO][4424] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-n-16326e39d6' May 17 00:15:22.144128 containerd[1479]: 2025-05-17 00:15:22.068 [INFO][4424] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f14f985d83d1befe7413123e16ac001ac95bbe341afb9adba5418bf9c4344204" host="ci-4081-3-3-n-16326e39d6" May 17 00:15:22.144128 containerd[1479]: 2025-05-17 00:15:22.075 [INFO][4424] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-3-n-16326e39d6" May 17 00:15:22.144128 containerd[1479]: 2025-05-17 00:15:22.082 [INFO][4424] ipam/ipam.go 511: Trying affinity for 192.168.81.128/26 host="ci-4081-3-3-n-16326e39d6" May 17 00:15:22.144128 containerd[1479]: 2025-05-17 00:15:22.085 [INFO][4424] ipam/ipam.go 158: Attempting to load block cidr=192.168.81.128/26 host="ci-4081-3-3-n-16326e39d6" May 17 00:15:22.144128 containerd[1479]: 2025-05-17 00:15:22.089 [INFO][4424] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.81.128/26 host="ci-4081-3-3-n-16326e39d6" May 17 00:15:22.144128 containerd[1479]: 2025-05-17 00:15:22.090 [INFO][4424] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.81.128/26 handle="k8s-pod-network.f14f985d83d1befe7413123e16ac001ac95bbe341afb9adba5418bf9c4344204" host="ci-4081-3-3-n-16326e39d6" May 17 00:15:22.144128 containerd[1479]: 2025-05-17 00:15:22.093 [INFO][4424] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f14f985d83d1befe7413123e16ac001ac95bbe341afb9adba5418bf9c4344204 May 17 00:15:22.144128 containerd[1479]: 2025-05-17 00:15:22.101 [INFO][4424] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.81.128/26 handle="k8s-pod-network.f14f985d83d1befe7413123e16ac001ac95bbe341afb9adba5418bf9c4344204" host="ci-4081-3-3-n-16326e39d6" May 17 00:15:22.144128 containerd[1479]: 2025-05-17 00:15:22.110 [INFO][4424] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.81.131/26] block=192.168.81.128/26 handle="k8s-pod-network.f14f985d83d1befe7413123e16ac001ac95bbe341afb9adba5418bf9c4344204" host="ci-4081-3-3-n-16326e39d6" May 17 00:15:22.144128 containerd[1479]: 2025-05-17 00:15:22.110 [INFO][4424] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.81.131/26] handle="k8s-pod-network.f14f985d83d1befe7413123e16ac001ac95bbe341afb9adba5418bf9c4344204" host="ci-4081-3-3-n-16326e39d6" May 17 00:15:22.144128 containerd[1479]: 2025-05-17 00:15:22.110 [INFO][4424] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 17 00:15:22.144128 containerd[1479]: 2025-05-17 00:15:22.110 [INFO][4424] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.81.131/26] IPv6=[] ContainerID="f14f985d83d1befe7413123e16ac001ac95bbe341afb9adba5418bf9c4344204" HandleID="k8s-pod-network.f14f985d83d1befe7413123e16ac001ac95bbe341afb9adba5418bf9c4344204" Workload="ci--4081--3--3--n--16326e39d6-k8s-coredns--668d6bf9bc--n4blv-eth0" May 17 00:15:22.144777 containerd[1479]: 2025-05-17 00:15:22.113 [INFO][4400] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f14f985d83d1befe7413123e16ac001ac95bbe341afb9adba5418bf9c4344204" Namespace="kube-system" Pod="coredns-668d6bf9bc-n4blv" WorkloadEndpoint="ci--4081--3--3--n--16326e39d6-k8s-coredns--668d6bf9bc--n4blv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--16326e39d6-k8s-coredns--668d6bf9bc--n4blv-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"b30848da-8b46-4cdc-baaa-f3567b6377c3", ResourceVersion:"957", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 14, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-16326e39d6", ContainerID:"", Pod:"coredns-668d6bf9bc-n4blv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.81.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali53a1da8fa0f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:15:22.144777 containerd[1479]: 2025-05-17 00:15:22.114 [INFO][4400] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.81.131/32] ContainerID="f14f985d83d1befe7413123e16ac001ac95bbe341afb9adba5418bf9c4344204" Namespace="kube-system" Pod="coredns-668d6bf9bc-n4blv" WorkloadEndpoint="ci--4081--3--3--n--16326e39d6-k8s-coredns--668d6bf9bc--n4blv-eth0" May 17 00:15:22.144777 containerd[1479]: 2025-05-17 00:15:22.114 [INFO][4400] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali53a1da8fa0f ContainerID="f14f985d83d1befe7413123e16ac001ac95bbe341afb9adba5418bf9c4344204" Namespace="kube-system" Pod="coredns-668d6bf9bc-n4blv" WorkloadEndpoint="ci--4081--3--3--n--16326e39d6-k8s-coredns--668d6bf9bc--n4blv-eth0" May 17 00:15:22.144777 containerd[1479]: 2025-05-17 00:15:22.118 [INFO][4400] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f14f985d83d1befe7413123e16ac001ac95bbe341afb9adba5418bf9c4344204" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-n4blv" WorkloadEndpoint="ci--4081--3--3--n--16326e39d6-k8s-coredns--668d6bf9bc--n4blv-eth0" May 17 00:15:22.144777 containerd[1479]: 2025-05-17 00:15:22.118 [INFO][4400] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f14f985d83d1befe7413123e16ac001ac95bbe341afb9adba5418bf9c4344204" Namespace="kube-system" Pod="coredns-668d6bf9bc-n4blv" WorkloadEndpoint="ci--4081--3--3--n--16326e39d6-k8s-coredns--668d6bf9bc--n4blv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--16326e39d6-k8s-coredns--668d6bf9bc--n4blv-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"b30848da-8b46-4cdc-baaa-f3567b6377c3", ResourceVersion:"957", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 14, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-16326e39d6", ContainerID:"f14f985d83d1befe7413123e16ac001ac95bbe341afb9adba5418bf9c4344204", Pod:"coredns-668d6bf9bc-n4blv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.81.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali53a1da8fa0f", MAC:"1a:21:a7:9a:b7:7d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:15:22.144777 containerd[1479]: 2025-05-17 00:15:22.140 [INFO][4400] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f14f985d83d1befe7413123e16ac001ac95bbe341afb9adba5418bf9c4344204" Namespace="kube-system" Pod="coredns-668d6bf9bc-n4blv" WorkloadEndpoint="ci--4081--3--3--n--16326e39d6-k8s-coredns--668d6bf9bc--n4blv-eth0" May 17 00:15:22.175300 containerd[1479]: time="2025-05-17T00:15:22.175013342Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:15:22.175626 containerd[1479]: time="2025-05-17T00:15:22.175254423Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:15:22.175626 containerd[1479]: time="2025-05-17T00:15:22.175267143Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:15:22.176384 containerd[1479]: time="2025-05-17T00:15:22.176312027Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:15:22.203789 systemd[1]: Started cri-containerd-f14f985d83d1befe7413123e16ac001ac95bbe341afb9adba5418bf9c4344204.scope - libcontainer container f14f985d83d1befe7413123e16ac001ac95bbe341afb9adba5418bf9c4344204. May 17 00:15:22.267231 systemd-networkd[1369]: caliee3a0fc1d4d: Link UP May 17 00:15:22.268052 systemd-networkd[1369]: caliee3a0fc1d4d: Gained carrier May 17 00:15:22.282059 containerd[1479]: time="2025-05-17T00:15:22.282006819Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-n4blv,Uid:b30848da-8b46-4cdc-baaa-f3567b6377c3,Namespace:kube-system,Attempt:1,} returns sandbox id \"f14f985d83d1befe7413123e16ac001ac95bbe341afb9adba5418bf9c4344204\"" May 17 00:15:22.300618 containerd[1479]: 2025-05-17 00:15:22.034 [INFO][4410] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--n--16326e39d6-k8s-csi--node--driver--fhb48-eth0 csi-node-driver- calico-system bf41328e-d1ed-475d-9a4a-c70bc9451b6f 958 0 2025-05-17 00:14:58 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:78f6f74485 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-3-3-n-16326e39d6 csi-node-driver-fhb48 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] caliee3a0fc1d4d [] [] }} ContainerID="0f59d6a32b1795c619c86341b39083c431c0139c7b1909c984da53702643edd9" Namespace="calico-system" Pod="csi-node-driver-fhb48" WorkloadEndpoint="ci--4081--3--3--n--16326e39d6-k8s-csi--node--driver--fhb48-" May 17 00:15:22.300618 containerd[1479]: 2025-05-17 00:15:22.035 [INFO][4410] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0f59d6a32b1795c619c86341b39083c431c0139c7b1909c984da53702643edd9" Namespace="calico-system" Pod="csi-node-driver-fhb48" WorkloadEndpoint="ci--4081--3--3--n--16326e39d6-k8s-csi--node--driver--fhb48-eth0" May 17 00:15:22.300618 containerd[1479]: 2025-05-17 00:15:22.070 [INFO][4429] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0f59d6a32b1795c619c86341b39083c431c0139c7b1909c984da53702643edd9" HandleID="k8s-pod-network.0f59d6a32b1795c619c86341b39083c431c0139c7b1909c984da53702643edd9" Workload="ci--4081--3--3--n--16326e39d6-k8s-csi--node--driver--fhb48-eth0" May 17 00:15:22.300618 containerd[1479]: 2025-05-17 00:15:22.070 [INFO][4429] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0f59d6a32b1795c619c86341b39083c431c0139c7b1909c984da53702643edd9" HandleID="k8s-pod-network.0f59d6a32b1795c619c86341b39083c431c0139c7b1909c984da53702643edd9" Workload="ci--4081--3--3--n--16326e39d6-k8s-csi--node--driver--fhb48-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002a94d0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-3-n-16326e39d6", "pod":"csi-node-driver-fhb48", "timestamp":"2025-05-17 00:15:22.070008992 +0000 UTC"}, Hostname:"ci-4081-3-3-n-16326e39d6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:15:22.300618 containerd[1479]: 2025-05-17 00:15:22.070 [INFO][4429] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
May 17 00:15:22.300618 containerd[1479]: 2025-05-17 00:15:22.110 [INFO][4429] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:15:22.300618 containerd[1479]: 2025-05-17 00:15:22.111 [INFO][4429] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-n-16326e39d6' May 17 00:15:22.300618 containerd[1479]: 2025-05-17 00:15:22.169 [INFO][4429] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0f59d6a32b1795c619c86341b39083c431c0139c7b1909c984da53702643edd9" host="ci-4081-3-3-n-16326e39d6" May 17 00:15:22.300618 containerd[1479]: 2025-05-17 00:15:22.178 [INFO][4429] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-3-n-16326e39d6" May 17 00:15:22.300618 containerd[1479]: 2025-05-17 00:15:22.193 [INFO][4429] ipam/ipam.go 511: Trying affinity for 192.168.81.128/26 host="ci-4081-3-3-n-16326e39d6" May 17 00:15:22.300618 containerd[1479]: 2025-05-17 00:15:22.200 [INFO][4429] ipam/ipam.go 158: Attempting to load block cidr=192.168.81.128/26 host="ci-4081-3-3-n-16326e39d6" May 17 00:15:22.300618 containerd[1479]: 2025-05-17 00:15:22.205 [INFO][4429] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.81.128/26 host="ci-4081-3-3-n-16326e39d6" May 17 00:15:22.300618 containerd[1479]: 2025-05-17 00:15:22.205 [INFO][4429] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.81.128/26 handle="k8s-pod-network.0f59d6a32b1795c619c86341b39083c431c0139c7b1909c984da53702643edd9" host="ci-4081-3-3-n-16326e39d6" May 17 00:15:22.300618 containerd[1479]: 2025-05-17 00:15:22.211 [INFO][4429] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.0f59d6a32b1795c619c86341b39083c431c0139c7b1909c984da53702643edd9 May 17 00:15:22.300618 containerd[1479]: 2025-05-17 00:15:22.223 [INFO][4429] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.81.128/26 handle="k8s-pod-network.0f59d6a32b1795c619c86341b39083c431c0139c7b1909c984da53702643edd9" host="ci-4081-3-3-n-16326e39d6" May 17 00:15:22.300618 containerd[1479]: 2025-05-17 00:15:22.242 [INFO][4429] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.81.132/26] block=192.168.81.128/26 handle="k8s-pod-network.0f59d6a32b1795c619c86341b39083c431c0139c7b1909c984da53702643edd9" host="ci-4081-3-3-n-16326e39d6" May 17 00:15:22.300618 containerd[1479]: 2025-05-17 00:15:22.242 [INFO][4429] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.81.132/26] handle="k8s-pod-network.0f59d6a32b1795c619c86341b39083c431c0139c7b1909c984da53702643edd9" host="ci-4081-3-3-n-16326e39d6" May 17 00:15:22.300618 containerd[1479]: 2025-05-17 00:15:22.242 [INFO][4429] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 17 00:15:22.300618 containerd[1479]: 2025-05-17 00:15:22.242 [INFO][4429] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.81.132/26] IPv6=[] ContainerID="0f59d6a32b1795c619c86341b39083c431c0139c7b1909c984da53702643edd9" HandleID="k8s-pod-network.0f59d6a32b1795c619c86341b39083c431c0139c7b1909c984da53702643edd9" Workload="ci--4081--3--3--n--16326e39d6-k8s-csi--node--driver--fhb48-eth0" May 17 00:15:22.301191 containerd[1479]: 2025-05-17 00:15:22.256 [INFO][4410] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0f59d6a32b1795c619c86341b39083c431c0139c7b1909c984da53702643edd9" Namespace="calico-system" Pod="csi-node-driver-fhb48" WorkloadEndpoint="ci--4081--3--3--n--16326e39d6-k8s-csi--node--driver--fhb48-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--16326e39d6-k8s-csi--node--driver--fhb48-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"bf41328e-d1ed-475d-9a4a-c70bc9451b6f", ResourceVersion:"958", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 14, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78f6f74485", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-16326e39d6", ContainerID:"", Pod:"csi-node-driver-fhb48", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.81.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliee3a0fc1d4d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:15:22.301191 containerd[1479]: 2025-05-17 00:15:22.256 [INFO][4410] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.81.132/32] ContainerID="0f59d6a32b1795c619c86341b39083c431c0139c7b1909c984da53702643edd9" Namespace="calico-system" Pod="csi-node-driver-fhb48" WorkloadEndpoint="ci--4081--3--3--n--16326e39d6-k8s-csi--node--driver--fhb48-eth0" May 17 00:15:22.301191 containerd[1479]: 2025-05-17 00:15:22.257 [INFO][4410] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliee3a0fc1d4d ContainerID="0f59d6a32b1795c619c86341b39083c431c0139c7b1909c984da53702643edd9" Namespace="calico-system" Pod="csi-node-driver-fhb48" WorkloadEndpoint="ci--4081--3--3--n--16326e39d6-k8s-csi--node--driver--fhb48-eth0" May 17 00:15:22.301191 containerd[1479]: 2025-05-17 00:15:22.270 [INFO][4410] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0f59d6a32b1795c619c86341b39083c431c0139c7b1909c984da53702643edd9" Namespace="calico-system" Pod="csi-node-driver-fhb48" WorkloadEndpoint="ci--4081--3--3--n--16326e39d6-k8s-csi--node--driver--fhb48-eth0" May 17 00:15:22.301191 containerd[1479]: 2025-05-17 00:15:22.272 [INFO][4410] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="0f59d6a32b1795c619c86341b39083c431c0139c7b1909c984da53702643edd9" Namespace="calico-system" Pod="csi-node-driver-fhb48" WorkloadEndpoint="ci--4081--3--3--n--16326e39d6-k8s-csi--node--driver--fhb48-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--16326e39d6-k8s-csi--node--driver--fhb48-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"bf41328e-d1ed-475d-9a4a-c70bc9451b6f", ResourceVersion:"958", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 14, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78f6f74485", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-16326e39d6", ContainerID:"0f59d6a32b1795c619c86341b39083c431c0139c7b1909c984da53702643edd9", Pod:"csi-node-driver-fhb48", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.81.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliee3a0fc1d4d", MAC:"62:75:9f:c6:66:67", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:15:22.301191 containerd[1479]: 2025-05-17 00:15:22.293 [INFO][4410] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0f59d6a32b1795c619c86341b39083c431c0139c7b1909c984da53702643edd9" Namespace="calico-system" Pod="csi-node-driver-fhb48" WorkloadEndpoint="ci--4081--3--3--n--16326e39d6-k8s-csi--node--driver--fhb48-eth0" May 17 00:15:22.314336 containerd[1479]: time="2025-05-17T00:15:22.313727349Z" level=info msg="CreateContainer within sandbox \"f14f985d83d1befe7413123e16ac001ac95bbe341afb9adba5418bf9c4344204\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 17 00:15:22.334332 containerd[1479]: time="2025-05-17T00:15:22.334276113Z" level=info msg="CreateContainer within sandbox \"f14f985d83d1befe7413123e16ac001ac95bbe341afb9adba5418bf9c4344204\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3c7c09f773dbd7e32bc7467cd335b872e6bd81a6e54c573a638fcf0662cc5414\"" May 17 00:15:22.336780 containerd[1479]: time="2025-05-17T00:15:22.336098401Z" level=info msg="StartContainer for \"3c7c09f773dbd7e32bc7467cd335b872e6bd81a6e54c573a638fcf0662cc5414\"" May 17 00:15:22.336780 containerd[1479]: time="2025-05-17T00:15:22.335364118Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:15:22.336780 containerd[1479]: time="2025-05-17T00:15:22.335736119Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:15:22.336780 containerd[1479]: time="2025-05-17T00:15:22.335759279Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:15:22.337347 containerd[1479]: time="2025-05-17T00:15:22.336759643Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:15:22.350286 systemd-networkd[1369]: cali97bc9f24b57: Gained IPv6LL May 17 00:15:22.357695 systemd[1]: Started cri-containerd-0f59d6a32b1795c619c86341b39083c431c0139c7b1909c984da53702643edd9.scope - libcontainer container 0f59d6a32b1795c619c86341b39083c431c0139c7b1909c984da53702643edd9. May 17 00:15:22.382779 systemd[1]: Started cri-containerd-3c7c09f773dbd7e32bc7467cd335b872e6bd81a6e54c573a638fcf0662cc5414.scope - libcontainer container 3c7c09f773dbd7e32bc7467cd335b872e6bd81a6e54c573a638fcf0662cc5414. May 17 00:15:22.401781 containerd[1479]: time="2025-05-17T00:15:22.401011146Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fhb48,Uid:bf41328e-d1ed-475d-9a4a-c70bc9451b6f,Namespace:calico-system,Attempt:1,} returns sandbox id \"0f59d6a32b1795c619c86341b39083c431c0139c7b1909c984da53702643edd9\"" May 17 00:15:22.426602 containerd[1479]: time="2025-05-17T00:15:22.426481251Z" level=info msg="StartContainer for \"3c7c09f773dbd7e32bc7467cd335b872e6bd81a6e54c573a638fcf0662cc5414\" returns successfully" May 17 00:15:22.777838 containerd[1479]: time="2025-05-17T00:15:22.777794728Z" level=info msg="StopPodSandbox for \"86e052553ff0ac7699112bb45ed6da24f420b7a1edf650326c230579538e6418\"" May 17 00:15:22.780800 containerd[1479]: time="2025-05-17T00:15:22.778783572Z" level=info msg="StopPodSandbox for \"648017c36b137b985271eb435012508ebdfe0d83d22b71b5d13412539d19aac9\"" May 17 00:15:22.935605 containerd[1479]: 2025-05-17 00:15:22.860 [INFO][4592] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="86e052553ff0ac7699112bb45ed6da24f420b7a1edf650326c230579538e6418" May 17 00:15:22.935605 containerd[1479]: 2025-05-17 00:15:22.861 [INFO][4592] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="86e052553ff0ac7699112bb45ed6da24f420b7a1edf650326c230579538e6418" iface="eth0" netns="/var/run/netns/cni-493268c8-8827-69e2-e266-411658656a84" May 17 00:15:22.935605 containerd[1479]: 2025-05-17 00:15:22.861 [INFO][4592] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="86e052553ff0ac7699112bb45ed6da24f420b7a1edf650326c230579538e6418" iface="eth0" netns="/var/run/netns/cni-493268c8-8827-69e2-e266-411658656a84" May 17 00:15:22.935605 containerd[1479]: 2025-05-17 00:15:22.862 [INFO][4592] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="86e052553ff0ac7699112bb45ed6da24f420b7a1edf650326c230579538e6418" iface="eth0" netns="/var/run/netns/cni-493268c8-8827-69e2-e266-411658656a84" May 17 00:15:22.935605 containerd[1479]: 2025-05-17 00:15:22.862 [INFO][4592] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="86e052553ff0ac7699112bb45ed6da24f420b7a1edf650326c230579538e6418" May 17 00:15:22.935605 containerd[1479]: 2025-05-17 00:15:22.862 [INFO][4592] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="86e052553ff0ac7699112bb45ed6da24f420b7a1edf650326c230579538e6418" May 17 00:15:22.935605 containerd[1479]: 2025-05-17 00:15:22.909 [INFO][4606] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="86e052553ff0ac7699112bb45ed6da24f420b7a1edf650326c230579538e6418" HandleID="k8s-pod-network.86e052553ff0ac7699112bb45ed6da24f420b7a1edf650326c230579538e6418" Workload="ci--4081--3--3--n--16326e39d6-k8s-calico--apiserver--6cbf8c7948--hk4z6-eth0" May 17 00:15:22.935605 containerd[1479]: 2025-05-17 00:15:22.909 [INFO][4606] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:15:22.935605 containerd[1479]: 2025-05-17 00:15:22.909 [INFO][4606] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:15:22.935605 containerd[1479]: 2025-05-17 00:15:22.924 [WARNING][4606] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="86e052553ff0ac7699112bb45ed6da24f420b7a1edf650326c230579538e6418" HandleID="k8s-pod-network.86e052553ff0ac7699112bb45ed6da24f420b7a1edf650326c230579538e6418" Workload="ci--4081--3--3--n--16326e39d6-k8s-calico--apiserver--6cbf8c7948--hk4z6-eth0" May 17 00:15:22.935605 containerd[1479]: 2025-05-17 00:15:22.924 [INFO][4606] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="86e052553ff0ac7699112bb45ed6da24f420b7a1edf650326c230579538e6418" HandleID="k8s-pod-network.86e052553ff0ac7699112bb45ed6da24f420b7a1edf650326c230579538e6418" Workload="ci--4081--3--3--n--16326e39d6-k8s-calico--apiserver--6cbf8c7948--hk4z6-eth0" May 17 00:15:22.935605 containerd[1479]: 2025-05-17 00:15:22.928 [INFO][4606] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:15:22.935605 containerd[1479]: 2025-05-17 00:15:22.931 [INFO][4592] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="86e052553ff0ac7699112bb45ed6da24f420b7a1edf650326c230579538e6418" May 17 00:15:22.936068 containerd[1479]: time="2025-05-17T00:15:22.935973615Z" level=info msg="TearDown network for sandbox \"86e052553ff0ac7699112bb45ed6da24f420b7a1edf650326c230579538e6418\" successfully" May 17 00:15:22.936068 containerd[1479]: time="2025-05-17T00:15:22.936037055Z" level=info msg="StopPodSandbox for \"86e052553ff0ac7699112bb45ed6da24f420b7a1edf650326c230579538e6418\" returns successfully" May 17 00:15:22.942032 systemd[1]: run-netns-cni\x2d493268c8\x2d8827\x2d69e2\x2de266\x2d411658656a84.mount: Deactivated successfully. May 17 00:15:22.949488 containerd[1479]: time="2025-05-17T00:15:22.948716547Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cbf8c7948-hk4z6,Uid:ceb2f628-4f33-47aa-8305-d46713261d40,Namespace:calico-apiserver,Attempt:1,}" May 17 00:15:22.957318 containerd[1479]: 2025-05-17 00:15:22.879 [INFO][4593] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="648017c36b137b985271eb435012508ebdfe0d83d22b71b5d13412539d19aac9" May 17 00:15:22.957318 containerd[1479]: 2025-05-17 00:15:22.880 [INFO][4593] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="648017c36b137b985271eb435012508ebdfe0d83d22b71b5d13412539d19aac9" iface="eth0" netns="/var/run/netns/cni-668f931f-a25a-dff4-18c7-78f19b2e8ad1" May 17 00:15:22.957318 containerd[1479]: 2025-05-17 00:15:22.882 [INFO][4593] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="648017c36b137b985271eb435012508ebdfe0d83d22b71b5d13412539d19aac9" iface="eth0" netns="/var/run/netns/cni-668f931f-a25a-dff4-18c7-78f19b2e8ad1" May 17 00:15:22.957318 containerd[1479]: 2025-05-17 00:15:22.883 [INFO][4593] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="648017c36b137b985271eb435012508ebdfe0d83d22b71b5d13412539d19aac9" iface="eth0" netns="/var/run/netns/cni-668f931f-a25a-dff4-18c7-78f19b2e8ad1" May 17 00:15:22.957318 containerd[1479]: 2025-05-17 00:15:22.883 [INFO][4593] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="648017c36b137b985271eb435012508ebdfe0d83d22b71b5d13412539d19aac9" May 17 00:15:22.957318 containerd[1479]: 2025-05-17 00:15:22.883 [INFO][4593] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="648017c36b137b985271eb435012508ebdfe0d83d22b71b5d13412539d19aac9" May 17 00:15:22.957318 containerd[1479]: 2025-05-17 00:15:22.915 [INFO][4612] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="648017c36b137b985271eb435012508ebdfe0d83d22b71b5d13412539d19aac9" HandleID="k8s-pod-network.648017c36b137b985271eb435012508ebdfe0d83d22b71b5d13412539d19aac9" Workload="ci--4081--3--3--n--16326e39d6-k8s-calico--kube--controllers--7d6599b8b4--52gm9-eth0" May 17 00:15:22.957318 containerd[1479]: 2025-05-17 00:15:22.916 [INFO][4612] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:15:22.957318 containerd[1479]: 2025-05-17 00:15:22.929 [INFO][4612] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:15:22.957318 containerd[1479]: 2025-05-17 00:15:22.948 [WARNING][4612] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="648017c36b137b985271eb435012508ebdfe0d83d22b71b5d13412539d19aac9" HandleID="k8s-pod-network.648017c36b137b985271eb435012508ebdfe0d83d22b71b5d13412539d19aac9" Workload="ci--4081--3--3--n--16326e39d6-k8s-calico--kube--controllers--7d6599b8b4--52gm9-eth0" May 17 00:15:22.957318 containerd[1479]: 2025-05-17 00:15:22.948 [INFO][4612] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="648017c36b137b985271eb435012508ebdfe0d83d22b71b5d13412539d19aac9" HandleID="k8s-pod-network.648017c36b137b985271eb435012508ebdfe0d83d22b71b5d13412539d19aac9" Workload="ci--4081--3--3--n--16326e39d6-k8s-calico--kube--controllers--7d6599b8b4--52gm9-eth0" May 17 00:15:22.957318 containerd[1479]: 2025-05-17 00:15:22.951 [INFO][4612] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:15:22.957318 containerd[1479]: 2025-05-17 00:15:22.955 [INFO][4593] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="648017c36b137b985271eb435012508ebdfe0d83d22b71b5d13412539d19aac9" May 17 00:15:22.957902 containerd[1479]: time="2025-05-17T00:15:22.957715344Z" level=info msg="TearDown network for sandbox \"648017c36b137b985271eb435012508ebdfe0d83d22b71b5d13412539d19aac9\" successfully" May 17 00:15:22.957902 containerd[1479]: time="2025-05-17T00:15:22.957744824Z" level=info msg="StopPodSandbox for \"648017c36b137b985271eb435012508ebdfe0d83d22b71b5d13412539d19aac9\" returns successfully" May 17 00:15:22.959349 containerd[1479]: time="2025-05-17T00:15:22.959278270Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7d6599b8b4-52gm9,Uid:8165c08e-7c9f-40c3-8125-d662038241a2,Namespace:calico-system,Attempt:1,}" May 17 00:15:22.961642 systemd[1]: run-netns-cni\x2d668f931f\x2da25a\x2ddff4\x2d18c7\x2d78f19b2e8ad1.mount: Deactivated successfully. May 17 00:15:23.053259 kubelet[2670]: I0517 00:15:23.052536 2670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-n4blv" podStartSLOduration=42.052511728 podStartE2EDuration="42.052511728s" podCreationTimestamp="2025-05-17 00:14:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:15:23.0504068 +0000 UTC m=+47.426633850" watchObservedRunningTime="2025-05-17 00:15:23.052511728 +0000 UTC m=+47.428738818" May 17 00:15:23.191115 systemd-networkd[1369]: cali1017b69ae60: Link UP May 17 00:15:23.191691 systemd-networkd[1369]: cali1017b69ae60: Gained carrier May 17 00:15:23.227150 containerd[1479]: 2025-05-17 00:15:23.033 [INFO][4619] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--n--16326e39d6-k8s-calico--apiserver--6cbf8c7948--hk4z6-eth0 calico-apiserver-6cbf8c7948- calico-apiserver ceb2f628-4f33-47aa-8305-d46713261d40 973 0 2025-05-17 00:14:52 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6cbf8c7948 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-3-n-16326e39d6 calico-apiserver-6cbf8c7948-hk4z6 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali1017b69ae60 [] [] }} ContainerID="7cc6e6922ca134e7502e3d1a4f281ce1d42835300cbb5aa236449d4e4645c4ae" Namespace="calico-apiserver" Pod="calico-apiserver-6cbf8c7948-hk4z6" WorkloadEndpoint="ci--4081--3--3--n--16326e39d6-k8s-calico--apiserver--6cbf8c7948--hk4z6-" May 17 00:15:23.227150 containerd[1479]: 2025-05-17 00:15:23.034 [INFO][4619] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7cc6e6922ca134e7502e3d1a4f281ce1d42835300cbb5aa236449d4e4645c4ae" Namespace="calico-apiserver" Pod="calico-apiserver-6cbf8c7948-hk4z6" WorkloadEndpoint="ci--4081--3--3--n--16326e39d6-k8s-calico--apiserver--6cbf8c7948--hk4z6-eth0" May 17 00:15:23.227150 containerd[1479]: 2025-05-17 00:15:23.115 [INFO][4644] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7cc6e6922ca134e7502e3d1a4f281ce1d42835300cbb5aa236449d4e4645c4ae" HandleID="k8s-pod-network.7cc6e6922ca134e7502e3d1a4f281ce1d42835300cbb5aa236449d4e4645c4ae" Workload="ci--4081--3--3--n--16326e39d6-k8s-calico--apiserver--6cbf8c7948--hk4z6-eth0" May 17 00:15:23.227150 containerd[1479]: 2025-05-17 00:15:23.116 [INFO][4644] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="7cc6e6922ca134e7502e3d1a4f281ce1d42835300cbb5aa236449d4e4645c4ae" HandleID="k8s-pod-network.7cc6e6922ca134e7502e3d1a4f281ce1d42835300cbb5aa236449d4e4645c4ae" Workload="ci--4081--3--3--n--16326e39d6-k8s-calico--apiserver--6cbf8c7948--hk4z6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d7720), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-3-n-16326e39d6", "pod":"calico-apiserver-6cbf8c7948-hk4z6", "timestamp":"2025-05-17 00:15:23.115395821 +0000 UTC"}, Hostname:"ci-4081-3-3-n-16326e39d6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:15:23.227150 containerd[1479]: 2025-05-17 00:15:23.117 [INFO][4644] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:15:23.227150 containerd[1479]: 2025-05-17 00:15:23.117 [INFO][4644] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:15:23.227150 containerd[1479]: 2025-05-17 00:15:23.117 [INFO][4644] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-n-16326e39d6' May 17 00:15:23.227150 containerd[1479]: 2025-05-17 00:15:23.137 [INFO][4644] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7cc6e6922ca134e7502e3d1a4f281ce1d42835300cbb5aa236449d4e4645c4ae" host="ci-4081-3-3-n-16326e39d6" May 17 00:15:23.227150 containerd[1479]: 2025-05-17 00:15:23.145 [INFO][4644] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-3-n-16326e39d6" May 17 00:15:23.227150 containerd[1479]: 2025-05-17 00:15:23.152 [INFO][4644] ipam/ipam.go 511: Trying affinity for 192.168.81.128/26 host="ci-4081-3-3-n-16326e39d6" May 17 00:15:23.227150 containerd[1479]: 2025-05-17 00:15:23.156 [INFO][4644] ipam/ipam.go 158: Attempting to load block cidr=192.168.81.128/26 host="ci-4081-3-3-n-16326e39d6" May 17 00:15:23.227150 containerd[1479]: 2025-05-17 00:15:23.159 [INFO][4644] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.81.128/26 host="ci-4081-3-3-n-16326e39d6" May 17 00:15:23.227150 containerd[1479]: 2025-05-17 00:15:23.159 [INFO][4644] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.81.128/26 handle="k8s-pod-network.7cc6e6922ca134e7502e3d1a4f281ce1d42835300cbb5aa236449d4e4645c4ae" host="ci-4081-3-3-n-16326e39d6" May 17 00:15:23.227150 containerd[1479]: 2025-05-17 00:15:23.162 [INFO][4644] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7cc6e6922ca134e7502e3d1a4f281ce1d42835300cbb5aa236449d4e4645c4ae May 17 00:15:23.227150 containerd[1479]: 2025-05-17 00:15:23.168 [INFO][4644] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.81.128/26 handle="k8s-pod-network.7cc6e6922ca134e7502e3d1a4f281ce1d42835300cbb5aa236449d4e4645c4ae" host="ci-4081-3-3-n-16326e39d6" May 17 00:15:23.227150 containerd[1479]: 2025-05-17 00:15:23.181 [INFO][4644] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.81.133/26] block=192.168.81.128/26 handle="k8s-pod-network.7cc6e6922ca134e7502e3d1a4f281ce1d42835300cbb5aa236449d4e4645c4ae" host="ci-4081-3-3-n-16326e39d6" May 17 00:15:23.227150 containerd[1479]: 2025-05-17 00:15:23.181 [INFO][4644] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.81.133/26] handle="k8s-pod-network.7cc6e6922ca134e7502e3d1a4f281ce1d42835300cbb5aa236449d4e4645c4ae" host="ci-4081-3-3-n-16326e39d6" May 17 00:15:23.227150 containerd[1479]: 2025-05-17 
00:15:23.181 [INFO][4644] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:15:23.227150 containerd[1479]: 2025-05-17 00:15:23.181 [INFO][4644] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.81.133/26] IPv6=[] ContainerID="7cc6e6922ca134e7502e3d1a4f281ce1d42835300cbb5aa236449d4e4645c4ae" HandleID="k8s-pod-network.7cc6e6922ca134e7502e3d1a4f281ce1d42835300cbb5aa236449d4e4645c4ae" Workload="ci--4081--3--3--n--16326e39d6-k8s-calico--apiserver--6cbf8c7948--hk4z6-eth0" May 17 00:15:23.228188 containerd[1479]: 2025-05-17 00:15:23.184 [INFO][4619] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7cc6e6922ca134e7502e3d1a4f281ce1d42835300cbb5aa236449d4e4645c4ae" Namespace="calico-apiserver" Pod="calico-apiserver-6cbf8c7948-hk4z6" WorkloadEndpoint="ci--4081--3--3--n--16326e39d6-k8s-calico--apiserver--6cbf8c7948--hk4z6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--16326e39d6-k8s-calico--apiserver--6cbf8c7948--hk4z6-eth0", GenerateName:"calico-apiserver-6cbf8c7948-", Namespace:"calico-apiserver", SelfLink:"", UID:"ceb2f628-4f33-47aa-8305-d46713261d40", ResourceVersion:"973", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 14, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6cbf8c7948", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-16326e39d6", ContainerID:"", Pod:"calico-apiserver-6cbf8c7948-hk4z6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.81.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1017b69ae60", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:15:23.228188 containerd[1479]: 2025-05-17 00:15:23.184 [INFO][4619] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.81.133/32] ContainerID="7cc6e6922ca134e7502e3d1a4f281ce1d42835300cbb5aa236449d4e4645c4ae" Namespace="calico-apiserver" Pod="calico-apiserver-6cbf8c7948-hk4z6" WorkloadEndpoint="ci--4081--3--3--n--16326e39d6-k8s-calico--apiserver--6cbf8c7948--hk4z6-eth0" May 17 00:15:23.228188 containerd[1479]: 2025-05-17 00:15:23.184 [INFO][4619] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1017b69ae60 ContainerID="7cc6e6922ca134e7502e3d1a4f281ce1d42835300cbb5aa236449d4e4645c4ae" Namespace="calico-apiserver" Pod="calico-apiserver-6cbf8c7948-hk4z6" WorkloadEndpoint="ci--4081--3--3--n--16326e39d6-k8s-calico--apiserver--6cbf8c7948--hk4z6-eth0" May 17 00:15:23.228188 containerd[1479]: 2025-05-17 00:15:23.193 [INFO][4619] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7cc6e6922ca134e7502e3d1a4f281ce1d42835300cbb5aa236449d4e4645c4ae" Namespace="calico-apiserver" Pod="calico-apiserver-6cbf8c7948-hk4z6" 
WorkloadEndpoint="ci--4081--3--3--n--16326e39d6-k8s-calico--apiserver--6cbf8c7948--hk4z6-eth0" May 17 00:15:23.228188 containerd[1479]: 2025-05-17 00:15:23.194 [INFO][4619] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7cc6e6922ca134e7502e3d1a4f281ce1d42835300cbb5aa236449d4e4645c4ae" Namespace="calico-apiserver" Pod="calico-apiserver-6cbf8c7948-hk4z6" WorkloadEndpoint="ci--4081--3--3--n--16326e39d6-k8s-calico--apiserver--6cbf8c7948--hk4z6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--16326e39d6-k8s-calico--apiserver--6cbf8c7948--hk4z6-eth0", GenerateName:"calico-apiserver-6cbf8c7948-", Namespace:"calico-apiserver", SelfLink:"", UID:"ceb2f628-4f33-47aa-8305-d46713261d40", ResourceVersion:"973", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 14, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6cbf8c7948", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-16326e39d6", ContainerID:"7cc6e6922ca134e7502e3d1a4f281ce1d42835300cbb5aa236449d4e4645c4ae", Pod:"calico-apiserver-6cbf8c7948-hk4z6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.81.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1017b69ae60", MAC:"06:aa:16:c2:f9:76", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:15:23.228188 containerd[1479]: 2025-05-17 00:15:23.221 [INFO][4619] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7cc6e6922ca134e7502e3d1a4f281ce1d42835300cbb5aa236449d4e4645c4ae" Namespace="calico-apiserver" Pod="calico-apiserver-6cbf8c7948-hk4z6" WorkloadEndpoint="ci--4081--3--3--n--16326e39d6-k8s-calico--apiserver--6cbf8c7948--hk4z6-eth0" May 17 00:15:23.273121 containerd[1479]: time="2025-05-17T00:15:23.272790655Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:15:23.273121 containerd[1479]: time="2025-05-17T00:15:23.272855055Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:15:23.273121 containerd[1479]: time="2025-05-17T00:15:23.272867255Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:15:23.273121 containerd[1479]: time="2025-05-17T00:15:23.272955455Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:15:23.300684 systemd[1]: Started cri-containerd-7cc6e6922ca134e7502e3d1a4f281ce1d42835300cbb5aa236449d4e4645c4ae.scope - libcontainer container 7cc6e6922ca134e7502e3d1a4f281ce1d42835300cbb5aa236449d4e4645c4ae. 
May 17 00:15:23.316017 systemd-networkd[1369]: calif6a17f2ee49: Link UP May 17 00:15:23.317996 systemd-networkd[1369]: calif6a17f2ee49: Gained carrier May 17 00:15:23.342707 containerd[1479]: 2025-05-17 00:15:23.082 [INFO][4629] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--n--16326e39d6-k8s-calico--kube--controllers--7d6599b8b4--52gm9-eth0 calico-kube-controllers-7d6599b8b4- calico-system 8165c08e-7c9f-40c3-8125-d662038241a2 974 0 2025-05-17 00:14:58 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7d6599b8b4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-3-3-n-16326e39d6 calico-kube-controllers-7d6599b8b4-52gm9 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calif6a17f2ee49 [] [] }} ContainerID="61bd9fa28d913b161bc4aa6136879719c72cd7b55de528afbf04ca387a9e7f49" Namespace="calico-system" Pod="calico-kube-controllers-7d6599b8b4-52gm9" WorkloadEndpoint="ci--4081--3--3--n--16326e39d6-k8s-calico--kube--controllers--7d6599b8b4--52gm9-" May 17 00:15:23.342707 containerd[1479]: 2025-05-17 00:15:23.082 [INFO][4629] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="61bd9fa28d913b161bc4aa6136879719c72cd7b55de528afbf04ca387a9e7f49" Namespace="calico-system" Pod="calico-kube-controllers-7d6599b8b4-52gm9" WorkloadEndpoint="ci--4081--3--3--n--16326e39d6-k8s-calico--kube--controllers--7d6599b8b4--52gm9-eth0" May 17 00:15:23.342707 containerd[1479]: 2025-05-17 00:15:23.149 [INFO][4652] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="61bd9fa28d913b161bc4aa6136879719c72cd7b55de528afbf04ca387a9e7f49" HandleID="k8s-pod-network.61bd9fa28d913b161bc4aa6136879719c72cd7b55de528afbf04ca387a9e7f49" Workload="ci--4081--3--3--n--16326e39d6-k8s-calico--kube--controllers--7d6599b8b4--52gm9-eth0" May 17 00:15:23.342707 containerd[1479]: 2025-05-17 00:15:23.149 [INFO][4652] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="61bd9fa28d913b161bc4aa6136879719c72cd7b55de528afbf04ca387a9e7f49" HandleID="k8s-pod-network.61bd9fa28d913b161bc4aa6136879719c72cd7b55de528afbf04ca387a9e7f49" Workload="ci--4081--3--3--n--16326e39d6-k8s-calico--kube--controllers--7d6599b8b4--52gm9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400022f630), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-3-n-16326e39d6", "pod":"calico-kube-controllers-7d6599b8b4-52gm9", "timestamp":"2025-05-17 00:15:23.149099397 +0000 UTC"}, Hostname:"ci-4081-3-3-n-16326e39d6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:15:23.342707 containerd[1479]: 2025-05-17 00:15:23.149 [INFO][4652] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:15:23.342707 containerd[1479]: 2025-05-17 00:15:23.181 [INFO][4652] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:15:23.342707 containerd[1479]: 2025-05-17 00:15:23.181 [INFO][4652] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-n-16326e39d6' May 17 00:15:23.342707 containerd[1479]: 2025-05-17 00:15:23.238 [INFO][4652] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.61bd9fa28d913b161bc4aa6136879719c72cd7b55de528afbf04ca387a9e7f49" host="ci-4081-3-3-n-16326e39d6" May 17 00:15:23.342707 containerd[1479]: 2025-05-17 00:15:23.245 [INFO][4652] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-3-n-16326e39d6" May 17 00:15:23.342707 containerd[1479]: 2025-05-17 00:15:23.261 [INFO][4652] ipam/ipam.go 511: Trying affinity for 192.168.81.128/26 host="ci-4081-3-3-n-16326e39d6" May 17 00:15:23.342707 containerd[1479]: 2025-05-17 00:15:23.265 [INFO][4652] ipam/ipam.go 158: Attempting to load block cidr=192.168.81.128/26 host="ci-4081-3-3-n-16326e39d6" May 17 00:15:23.342707 containerd[1479]: 2025-05-17 00:15:23.274 [INFO][4652] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.81.128/26 host="ci-4081-3-3-n-16326e39d6" May 17 00:15:23.342707 containerd[1479]: 2025-05-17 00:15:23.274 [INFO][4652] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.81.128/26 handle="k8s-pod-network.61bd9fa28d913b161bc4aa6136879719c72cd7b55de528afbf04ca387a9e7f49" host="ci-4081-3-3-n-16326e39d6" May 17 00:15:23.342707 containerd[1479]: 2025-05-17 00:15:23.279 [INFO][4652] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.61bd9fa28d913b161bc4aa6136879719c72cd7b55de528afbf04ca387a9e7f49 May 17 00:15:23.342707 containerd[1479]: 2025-05-17 00:15:23.288 [INFO][4652] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.81.128/26 handle="k8s-pod-network.61bd9fa28d913b161bc4aa6136879719c72cd7b55de528afbf04ca387a9e7f49" host="ci-4081-3-3-n-16326e39d6" May 17 00:15:23.342707 containerd[1479]: 2025-05-17 00:15:23.306 [INFO][4652] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.81.134/26] block=192.168.81.128/26 handle="k8s-pod-network.61bd9fa28d913b161bc4aa6136879719c72cd7b55de528afbf04ca387a9e7f49" host="ci-4081-3-3-n-16326e39d6" May 17 00:15:23.342707 containerd[1479]: 2025-05-17 00:15:23.306 [INFO][4652] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.81.134/26] handle="k8s-pod-network.61bd9fa28d913b161bc4aa6136879719c72cd7b55de528afbf04ca387a9e7f49" host="ci-4081-3-3-n-16326e39d6" May 17 00:15:23.342707 containerd[1479]: 2025-05-17 00:15:23.306 [INFO][4652] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
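The walk just logged (acquire the host-wide lock, confirm this host's affinity to 192.168.81.128/26, load the block, claim the next free ordinal, release the lock) is why pods on this node come up with consecutive addresses. A simplified sketch of the ordinal arithmetic, using a plain bitmap as a stand-in for Calico's real allocation-block structure:

package main

import (
	"fmt"
	"net/netip"
)

// block is a toy model of a /26 affine to one host: 64 ordinals, ordinal 0
// being the block's first address. Calico's actual block bookkeeping
// (handles, attributes, unallocated list) is richer than this.
type block struct {
	cidr netip.Prefix
	used [64]bool
}

// assign claims the lowest free ordinal and returns its address.
func (b *block) assign() (netip.Addr, bool) {
	addr := b.cidr.Addr()
	for ord := range b.used {
		if !b.used[ord] {
			b.used[ord] = true
			return addr, true
		}
		addr = addr.Next()
	}
	return netip.Addr{}, false
}

func main() {
	b := &block{cidr: netip.MustParsePrefix("192.168.81.128/26")}
	for i := 0; i < 6; i++ { // ordinals 0-5 were already claimed earlier in this boot
		b.assign()
	}
	ip, _ := b.assign()
	fmt.Println(ip) // 192.168.81.134, the address claimed for kube-controllers below
}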
May 17 00:15:23.342707 containerd[1479]: 2025-05-17 00:15:23.306 [INFO][4652] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.81.134/26] IPv6=[] ContainerID="61bd9fa28d913b161bc4aa6136879719c72cd7b55de528afbf04ca387a9e7f49" HandleID="k8s-pod-network.61bd9fa28d913b161bc4aa6136879719c72cd7b55de528afbf04ca387a9e7f49" Workload="ci--4081--3--3--n--16326e39d6-k8s-calico--kube--controllers--7d6599b8b4--52gm9-eth0" May 17 00:15:23.343285 containerd[1479]: 2025-05-17 00:15:23.310 [INFO][4629] cni-plugin/k8s.go 418: Populated endpoint ContainerID="61bd9fa28d913b161bc4aa6136879719c72cd7b55de528afbf04ca387a9e7f49" Namespace="calico-system" Pod="calico-kube-controllers-7d6599b8b4-52gm9" WorkloadEndpoint="ci--4081--3--3--n--16326e39d6-k8s-calico--kube--controllers--7d6599b8b4--52gm9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--16326e39d6-k8s-calico--kube--controllers--7d6599b8b4--52gm9-eth0", GenerateName:"calico-kube-controllers-7d6599b8b4-", Namespace:"calico-system", SelfLink:"", UID:"8165c08e-7c9f-40c3-8125-d662038241a2", ResourceVersion:"974", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 14, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7d6599b8b4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-16326e39d6", ContainerID:"", Pod:"calico-kube-controllers-7d6599b8b4-52gm9", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.81.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif6a17f2ee49", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:15:23.343285 containerd[1479]: 2025-05-17 00:15:23.310 [INFO][4629] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.81.134/32] ContainerID="61bd9fa28d913b161bc4aa6136879719c72cd7b55de528afbf04ca387a9e7f49" Namespace="calico-system" Pod="calico-kube-controllers-7d6599b8b4-52gm9" WorkloadEndpoint="ci--4081--3--3--n--16326e39d6-k8s-calico--kube--controllers--7d6599b8b4--52gm9-eth0" May 17 00:15:23.343285 containerd[1479]: 2025-05-17 00:15:23.310 [INFO][4629] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif6a17f2ee49 ContainerID="61bd9fa28d913b161bc4aa6136879719c72cd7b55de528afbf04ca387a9e7f49" Namespace="calico-system" Pod="calico-kube-controllers-7d6599b8b4-52gm9" WorkloadEndpoint="ci--4081--3--3--n--16326e39d6-k8s-calico--kube--controllers--7d6599b8b4--52gm9-eth0" May 17 00:15:23.343285 containerd[1479]: 2025-05-17 00:15:23.321 [INFO][4629] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="61bd9fa28d913b161bc4aa6136879719c72cd7b55de528afbf04ca387a9e7f49" Namespace="calico-system" Pod="calico-kube-controllers-7d6599b8b4-52gm9" 
WorkloadEndpoint="ci--4081--3--3--n--16326e39d6-k8s-calico--kube--controllers--7d6599b8b4--52gm9-eth0" May 17 00:15:23.343285 containerd[1479]: 2025-05-17 00:15:23.322 [INFO][4629] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="61bd9fa28d913b161bc4aa6136879719c72cd7b55de528afbf04ca387a9e7f49" Namespace="calico-system" Pod="calico-kube-controllers-7d6599b8b4-52gm9" WorkloadEndpoint="ci--4081--3--3--n--16326e39d6-k8s-calico--kube--controllers--7d6599b8b4--52gm9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--16326e39d6-k8s-calico--kube--controllers--7d6599b8b4--52gm9-eth0", GenerateName:"calico-kube-controllers-7d6599b8b4-", Namespace:"calico-system", SelfLink:"", UID:"8165c08e-7c9f-40c3-8125-d662038241a2", ResourceVersion:"974", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 14, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7d6599b8b4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-16326e39d6", ContainerID:"61bd9fa28d913b161bc4aa6136879719c72cd7b55de528afbf04ca387a9e7f49", Pod:"calico-kube-controllers-7d6599b8b4-52gm9", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.81.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif6a17f2ee49", MAC:"f6:7a:ce:c8:b5:84", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:15:23.343285 containerd[1479]: 2025-05-17 00:15:23.338 [INFO][4629] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="61bd9fa28d913b161bc4aa6136879719c72cd7b55de528afbf04ca387a9e7f49" Namespace="calico-system" Pod="calico-kube-controllers-7d6599b8b4-52gm9" WorkloadEndpoint="ci--4081--3--3--n--16326e39d6-k8s-calico--kube--controllers--7d6599b8b4--52gm9-eth0" May 17 00:15:23.373114 containerd[1479]: time="2025-05-17T00:15:23.373008538Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:15:23.373114 containerd[1479]: time="2025-05-17T00:15:23.373071618Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:15:23.373114 containerd[1479]: time="2025-05-17T00:15:23.373090298Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:15:23.374545 containerd[1479]: time="2025-05-17T00:15:23.373903301Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:15:23.405242 containerd[1479]: time="2025-05-17T00:15:23.405191827Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cbf8c7948-hk4z6,Uid:ceb2f628-4f33-47aa-8305-d46713261d40,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"7cc6e6922ca134e7502e3d1a4f281ce1d42835300cbb5aa236449d4e4645c4ae\"" May 17 00:15:23.411634 systemd[1]: Started cri-containerd-61bd9fa28d913b161bc4aa6136879719c72cd7b55de528afbf04ca387a9e7f49.scope - libcontainer container 61bd9fa28d913b161bc4aa6136879719c72cd7b55de528afbf04ca387a9e7f49. May 17 00:15:23.450750 containerd[1479]: time="2025-05-17T00:15:23.450602770Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7d6599b8b4-52gm9,Uid:8165c08e-7c9f-40c3-8125-d662038241a2,Namespace:calico-system,Attempt:1,} returns sandbox id \"61bd9fa28d913b161bc4aa6136879719c72cd7b55de528afbf04ca387a9e7f49\"" May 17 00:15:24.013837 systemd-networkd[1369]: caliee3a0fc1d4d: Gained IPv6LL May 17 00:15:24.142185 systemd-networkd[1369]: cali53a1da8fa0f: Gained IPv6LL May 17 00:15:24.333904 systemd-networkd[1369]: calif6a17f2ee49: Gained IPv6LL May 17 00:15:24.403330 containerd[1479]: time="2025-05-17T00:15:24.403122935Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:15:24.405189 containerd[1479]: time="2025-05-17T00:15:24.404212819Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.0: active requests=0, bytes read=44453213" May 17 00:15:24.406280 containerd[1479]: time="2025-05-17T00:15:24.406229067Z" level=info msg="ImageCreate event name:\"sha256:0d503660232383641bf9af3b7e4ef066c0e96a8ec586f123e5b56b6a196c983d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:15:24.409000 containerd[1479]: time="2025-05-17T00:15:24.408940238Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:15:24.409942 containerd[1479]: time="2025-05-17T00:15:24.409891002Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" with image id \"sha256:0d503660232383641bf9af3b7e4ef066c0e96a8ec586f123e5b56b6a196c983d\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\", size \"45822470\" in 3.266831263s" May 17 00:15:24.410078 containerd[1479]: time="2025-05-17T00:15:24.410060882Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" returns image reference \"sha256:0d503660232383641bf9af3b7e4ef066c0e96a8ec586f123e5b56b6a196c983d\"" May 17 00:15:24.414001 containerd[1479]: time="2025-05-17T00:15:24.413405216Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.0\"" May 17 00:15:24.414507 containerd[1479]: time="2025-05-17T00:15:24.414467900Z" level=info msg="CreateContainer within sandbox \"fd491bd53711038e437cc1447d10887e0ae489d15c94fc55a6255dae964dd001\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 17 00:15:24.443547 containerd[1479]: time="2025-05-17T00:15:24.443504615Z" level=info msg="CreateContainer within sandbox \"fd491bd53711038e437cc1447d10887e0ae489d15c94fc55a6255dae964dd001\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id 
\"ed4b4181221ce2a70c0debcac38f25fd1ec36c41d6b4e0bd02192c868886b406\"" May 17 00:15:24.446690 containerd[1479]: time="2025-05-17T00:15:24.445662503Z" level=info msg="StartContainer for \"ed4b4181221ce2a70c0debcac38f25fd1ec36c41d6b4e0bd02192c868886b406\"" May 17 00:15:24.495957 systemd[1]: Started cri-containerd-ed4b4181221ce2a70c0debcac38f25fd1ec36c41d6b4e0bd02192c868886b406.scope - libcontainer container ed4b4181221ce2a70c0debcac38f25fd1ec36c41d6b4e0bd02192c868886b406. May 17 00:15:24.553558 containerd[1479]: time="2025-05-17T00:15:24.552953168Z" level=info msg="StartContainer for \"ed4b4181221ce2a70c0debcac38f25fd1ec36c41d6b4e0bd02192c868886b406\" returns successfully" May 17 00:15:24.756240 containerd[1479]: time="2025-05-17T00:15:24.756124852Z" level=info msg="StopPodSandbox for \"d6ee3cf3c0bb435fb182b801f62a700390d3c108bd9f34a31fb8c6281effdb1e\"" May 17 00:15:24.756915 containerd[1479]: time="2025-05-17T00:15:24.756387013Z" level=info msg="StopPodSandbox for \"e92ed2d7510927885d11a6b83411292c7ac81114fe8b349ff6d28a3f59a3a94c\"" May 17 00:15:24.781727 systemd-networkd[1369]: cali1017b69ae60: Gained IPv6LL May 17 00:15:24.854374 kubelet[2670]: I0517 00:15:24.854336 2670 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 00:15:25.024979 containerd[1479]: 2025-05-17 00:15:24.883 [INFO][4826] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e92ed2d7510927885d11a6b83411292c7ac81114fe8b349ff6d28a3f59a3a94c" May 17 00:15:25.024979 containerd[1479]: 2025-05-17 00:15:24.883 [INFO][4826] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e92ed2d7510927885d11a6b83411292c7ac81114fe8b349ff6d28a3f59a3a94c" iface="eth0" netns="/var/run/netns/cni-e5052c66-1314-6e2e-5334-59750f946d14" May 17 00:15:25.024979 containerd[1479]: 2025-05-17 00:15:24.884 [INFO][4826] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e92ed2d7510927885d11a6b83411292c7ac81114fe8b349ff6d28a3f59a3a94c" iface="eth0" netns="/var/run/netns/cni-e5052c66-1314-6e2e-5334-59750f946d14" May 17 00:15:25.024979 containerd[1479]: 2025-05-17 00:15:24.890 [INFO][4826] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e92ed2d7510927885d11a6b83411292c7ac81114fe8b349ff6d28a3f59a3a94c" iface="eth0" netns="/var/run/netns/cni-e5052c66-1314-6e2e-5334-59750f946d14" May 17 00:15:25.024979 containerd[1479]: 2025-05-17 00:15:24.890 [INFO][4826] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e92ed2d7510927885d11a6b83411292c7ac81114fe8b349ff6d28a3f59a3a94c" May 17 00:15:25.024979 containerd[1479]: 2025-05-17 00:15:24.890 [INFO][4826] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e92ed2d7510927885d11a6b83411292c7ac81114fe8b349ff6d28a3f59a3a94c" May 17 00:15:25.024979 containerd[1479]: 2025-05-17 00:15:24.999 [INFO][4844] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e92ed2d7510927885d11a6b83411292c7ac81114fe8b349ff6d28a3f59a3a94c" HandleID="k8s-pod-network.e92ed2d7510927885d11a6b83411292c7ac81114fe8b349ff6d28a3f59a3a94c" Workload="ci--4081--3--3--n--16326e39d6-k8s-goldmane--78d55f7ddc--fqppr-eth0" May 17 00:15:25.024979 containerd[1479]: 2025-05-17 00:15:24.999 [INFO][4844] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:15:25.024979 containerd[1479]: 2025-05-17 00:15:24.999 [INFO][4844] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:15:25.024979 containerd[1479]: 2025-05-17 00:15:25.012 [WARNING][4844] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e92ed2d7510927885d11a6b83411292c7ac81114fe8b349ff6d28a3f59a3a94c" HandleID="k8s-pod-network.e92ed2d7510927885d11a6b83411292c7ac81114fe8b349ff6d28a3f59a3a94c" Workload="ci--4081--3--3--n--16326e39d6-k8s-goldmane--78d55f7ddc--fqppr-eth0" May 17 00:15:25.024979 containerd[1479]: 2025-05-17 00:15:25.012 [INFO][4844] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e92ed2d7510927885d11a6b83411292c7ac81114fe8b349ff6d28a3f59a3a94c" HandleID="k8s-pod-network.e92ed2d7510927885d11a6b83411292c7ac81114fe8b349ff6d28a3f59a3a94c" Workload="ci--4081--3--3--n--16326e39d6-k8s-goldmane--78d55f7ddc--fqppr-eth0" May 17 00:15:25.024979 containerd[1479]: 2025-05-17 00:15:25.018 [INFO][4844] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:15:25.024979 containerd[1479]: 2025-05-17 00:15:25.022 [INFO][4826] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e92ed2d7510927885d11a6b83411292c7ac81114fe8b349ff6d28a3f59a3a94c" May 17 00:15:25.025697 containerd[1479]: time="2025-05-17T00:15:25.025540556Z" level=info msg="TearDown network for sandbox \"e92ed2d7510927885d11a6b83411292c7ac81114fe8b349ff6d28a3f59a3a94c\" successfully" May 17 00:15:25.025697 containerd[1479]: time="2025-05-17T00:15:25.025585996Z" level=info msg="StopPodSandbox for \"e92ed2d7510927885d11a6b83411292c7ac81114fe8b349ff6d28a3f59a3a94c\" returns successfully" May 17 00:15:25.027673 containerd[1479]: time="2025-05-17T00:15:25.027630644Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-fqppr,Uid:a9b260fc-ff83-4de9-ac43-723c22c032c2,Namespace:calico-system,Attempt:1,}" May 17 00:15:25.031349 systemd[1]: run-netns-cni\x2de5052c66\x2d1314\x2d6e2e\x2d5334\x2d59750f946d14.mount: Deactivated successfully. May 17 00:15:25.053249 containerd[1479]: 2025-05-17 00:15:24.914 [INFO][4831] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d6ee3cf3c0bb435fb182b801f62a700390d3c108bd9f34a31fb8c6281effdb1e" May 17 00:15:25.053249 containerd[1479]: 2025-05-17 00:15:24.915 [INFO][4831] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d6ee3cf3c0bb435fb182b801f62a700390d3c108bd9f34a31fb8c6281effdb1e" iface="eth0" netns="/var/run/netns/cni-947a9a50-a3f7-9979-6d3c-a7437741dd15" May 17 00:15:25.053249 containerd[1479]: 2025-05-17 00:15:24.915 [INFO][4831] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d6ee3cf3c0bb435fb182b801f62a700390d3c108bd9f34a31fb8c6281effdb1e" iface="eth0" netns="/var/run/netns/cni-947a9a50-a3f7-9979-6d3c-a7437741dd15" May 17 00:15:25.053249 containerd[1479]: 2025-05-17 00:15:24.916 [INFO][4831] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="d6ee3cf3c0bb435fb182b801f62a700390d3c108bd9f34a31fb8c6281effdb1e" iface="eth0" netns="/var/run/netns/cni-947a9a50-a3f7-9979-6d3c-a7437741dd15" May 17 00:15:25.053249 containerd[1479]: 2025-05-17 00:15:24.916 [INFO][4831] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d6ee3cf3c0bb435fb182b801f62a700390d3c108bd9f34a31fb8c6281effdb1e" May 17 00:15:25.053249 containerd[1479]: 2025-05-17 00:15:24.916 [INFO][4831] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d6ee3cf3c0bb435fb182b801f62a700390d3c108bd9f34a31fb8c6281effdb1e" May 17 00:15:25.053249 containerd[1479]: 2025-05-17 00:15:25.002 [INFO][4860] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d6ee3cf3c0bb435fb182b801f62a700390d3c108bd9f34a31fb8c6281effdb1e" HandleID="k8s-pod-network.d6ee3cf3c0bb435fb182b801f62a700390d3c108bd9f34a31fb8c6281effdb1e" Workload="ci--4081--3--3--n--16326e39d6-k8s-coredns--668d6bf9bc--wpn2m-eth0" May 17 00:15:25.053249 containerd[1479]: 2025-05-17 00:15:25.002 [INFO][4860] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:15:25.053249 containerd[1479]: 2025-05-17 00:15:25.019 [INFO][4860] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:15:25.053249 containerd[1479]: 2025-05-17 00:15:25.039 [WARNING][4860] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d6ee3cf3c0bb435fb182b801f62a700390d3c108bd9f34a31fb8c6281effdb1e" HandleID="k8s-pod-network.d6ee3cf3c0bb435fb182b801f62a700390d3c108bd9f34a31fb8c6281effdb1e" Workload="ci--4081--3--3--n--16326e39d6-k8s-coredns--668d6bf9bc--wpn2m-eth0" May 17 00:15:25.053249 containerd[1479]: 2025-05-17 00:15:25.039 [INFO][4860] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d6ee3cf3c0bb435fb182b801f62a700390d3c108bd9f34a31fb8c6281effdb1e" HandleID="k8s-pod-network.d6ee3cf3c0bb435fb182b801f62a700390d3c108bd9f34a31fb8c6281effdb1e" Workload="ci--4081--3--3--n--16326e39d6-k8s-coredns--668d6bf9bc--wpn2m-eth0" May 17 00:15:25.053249 containerd[1479]: 2025-05-17 00:15:25.042 [INFO][4860] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:15:25.053249 containerd[1479]: 2025-05-17 00:15:25.045 [INFO][4831] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d6ee3cf3c0bb435fb182b801f62a700390d3c108bd9f34a31fb8c6281effdb1e" May 17 00:15:25.056463 containerd[1479]: time="2025-05-17T00:15:25.054776910Z" level=info msg="TearDown network for sandbox \"d6ee3cf3c0bb435fb182b801f62a700390d3c108bd9f34a31fb8c6281effdb1e\" successfully" May 17 00:15:25.056463 containerd[1479]: time="2025-05-17T00:15:25.054815870Z" level=info msg="StopPodSandbox for \"d6ee3cf3c0bb435fb182b801f62a700390d3c108bd9f34a31fb8c6281effdb1e\" returns successfully" May 17 00:15:25.059797 containerd[1479]: time="2025-05-17T00:15:25.059758489Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wpn2m,Uid:a7794f9e-6b8e-4656-8525-16c2f94584b5,Namespace:kube-system,Attempt:1,}" May 17 00:15:25.061964 systemd[1]: run-netns-cni\x2d947a9a50\x2da3f7\x2d9979\x2d6d3c\x2da7437741dd15.mount: Deactivated successfully. 
May 17 00:15:25.228764 kubelet[2670]: I0517 00:15:25.228159 2670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6cbf8c7948-6pm57" podStartSLOduration=29.957590347 podStartE2EDuration="33.228141145s" podCreationTimestamp="2025-05-17 00:14:52 +0000 UTC" firstStartedPulling="2025-05-17 00:15:21.140498688 +0000 UTC m=+45.516725738" lastFinishedPulling="2025-05-17 00:15:24.411049486 +0000 UTC m=+48.787276536" observedRunningTime="2025-05-17 00:15:25.099925406 +0000 UTC m=+49.476152456" watchObservedRunningTime="2025-05-17 00:15:25.228141145 +0000 UTC m=+49.604368195" May 17 00:15:25.317700 systemd-networkd[1369]: cali4b72b095a19: Link UP May 17 00:15:25.320777 systemd-networkd[1369]: cali4b72b095a19: Gained carrier May 17 00:15:25.347226 containerd[1479]: 2025-05-17 00:15:25.161 [INFO][4881] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--n--16326e39d6-k8s-goldmane--78d55f7ddc--fqppr-eth0 goldmane-78d55f7ddc- calico-system a9b260fc-ff83-4de9-ac43-723c22c032c2 1001 0 2025-05-17 00:14:58 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:78d55f7ddc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081-3-3-n-16326e39d6 goldmane-78d55f7ddc-fqppr eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali4b72b095a19 [] [] }} ContainerID="b051fe0d064e85464022dd0cddb643a4fb5e4989b89ee311a26f2f2a5e8c57ac" Namespace="calico-system" Pod="goldmane-78d55f7ddc-fqppr" WorkloadEndpoint="ci--4081--3--3--n--16326e39d6-k8s-goldmane--78d55f7ddc--fqppr-" May 17 00:15:25.347226 containerd[1479]: 2025-05-17 00:15:25.161 [INFO][4881] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b051fe0d064e85464022dd0cddb643a4fb5e4989b89ee311a26f2f2a5e8c57ac" Namespace="calico-system" Pod="goldmane-78d55f7ddc-fqppr" WorkloadEndpoint="ci--4081--3--3--n--16326e39d6-k8s-goldmane--78d55f7ddc--fqppr-eth0" May 17 00:15:25.347226 containerd[1479]: 2025-05-17 00:15:25.241 [INFO][4907] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b051fe0d064e85464022dd0cddb643a4fb5e4989b89ee311a26f2f2a5e8c57ac" HandleID="k8s-pod-network.b051fe0d064e85464022dd0cddb643a4fb5e4989b89ee311a26f2f2a5e8c57ac" Workload="ci--4081--3--3--n--16326e39d6-k8s-goldmane--78d55f7ddc--fqppr-eth0" May 17 00:15:25.347226 containerd[1479]: 2025-05-17 00:15:25.242 [INFO][4907] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b051fe0d064e85464022dd0cddb643a4fb5e4989b89ee311a26f2f2a5e8c57ac" HandleID="k8s-pod-network.b051fe0d064e85464022dd0cddb643a4fb5e4989b89ee311a26f2f2a5e8c57ac" Workload="ci--4081--3--3--n--16326e39d6-k8s-goldmane--78d55f7ddc--fqppr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000330150), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-3-n-16326e39d6", "pod":"goldmane-78d55f7ddc-fqppr", "timestamp":"2025-05-17 00:15:25.241409116 +0000 UTC"}, Hostname:"ci-4081-3-3-n-16326e39d6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:15:25.347226 containerd[1479]: 2025-05-17 00:15:25.242 [INFO][4907] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
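The podStartSLOduration arithmetic in the tracker entry above checks out exactly: kubelet reports the end-to-end duration minus the image-pull window, since pulling is excluded from the pod-startup SLI. Redoing the sum with the four timestamps from that entry:

package main

import (
	"fmt"
	"time"
)

func mustParse(s string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05.999999999", s)
	if err != nil {
		panic(err)
	}
	return t
}

// Timestamps copied from the pod_startup_latency_tracker entry for
// calico-apiserver-6cbf8c7948-6pm57 above.
func main() {
	created := mustParse("2025-05-17 00:14:52")
	firstPull := mustParse("2025-05-17 00:15:21.140498688")
	lastPull := mustParse("2025-05-17 00:15:24.411049486")
	running := mustParse("2025-05-17 00:15:25.228141145")

	e2e := running.Sub(created)          // 33.228141145s, the logged podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull) // pull window of 3.270550798s excluded
	fmt.Println(e2e, slo)                // 33.228141145s 29.957590347s
}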
May 17 00:15:25.347226 containerd[1479]: 2025-05-17 00:15:25.242 [INFO][4907] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:15:25.347226 containerd[1479]: 2025-05-17 00:15:25.242 [INFO][4907] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-n-16326e39d6' May 17 00:15:25.347226 containerd[1479]: 2025-05-17 00:15:25.259 [INFO][4907] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b051fe0d064e85464022dd0cddb643a4fb5e4989b89ee311a26f2f2a5e8c57ac" host="ci-4081-3-3-n-16326e39d6" May 17 00:15:25.347226 containerd[1479]: 2025-05-17 00:15:25.269 [INFO][4907] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-3-n-16326e39d6" May 17 00:15:25.347226 containerd[1479]: 2025-05-17 00:15:25.277 [INFO][4907] ipam/ipam.go 511: Trying affinity for 192.168.81.128/26 host="ci-4081-3-3-n-16326e39d6" May 17 00:15:25.347226 containerd[1479]: 2025-05-17 00:15:25.280 [INFO][4907] ipam/ipam.go 158: Attempting to load block cidr=192.168.81.128/26 host="ci-4081-3-3-n-16326e39d6" May 17 00:15:25.347226 containerd[1479]: 2025-05-17 00:15:25.283 [INFO][4907] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.81.128/26 host="ci-4081-3-3-n-16326e39d6" May 17 00:15:25.347226 containerd[1479]: 2025-05-17 00:15:25.283 [INFO][4907] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.81.128/26 handle="k8s-pod-network.b051fe0d064e85464022dd0cddb643a4fb5e4989b89ee311a26f2f2a5e8c57ac" host="ci-4081-3-3-n-16326e39d6" May 17 00:15:25.347226 containerd[1479]: 2025-05-17 00:15:25.286 [INFO][4907] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b051fe0d064e85464022dd0cddb643a4fb5e4989b89ee311a26f2f2a5e8c57ac May 17 00:15:25.347226 containerd[1479]: 2025-05-17 00:15:25.292 [INFO][4907] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.81.128/26 handle="k8s-pod-network.b051fe0d064e85464022dd0cddb643a4fb5e4989b89ee311a26f2f2a5e8c57ac" host="ci-4081-3-3-n-16326e39d6" May 17 00:15:25.347226 containerd[1479]: 2025-05-17 00:15:25.303 [INFO][4907] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.81.135/26] block=192.168.81.128/26 handle="k8s-pod-network.b051fe0d064e85464022dd0cddb643a4fb5e4989b89ee311a26f2f2a5e8c57ac" host="ci-4081-3-3-n-16326e39d6" May 17 00:15:25.347226 containerd[1479]: 2025-05-17 00:15:25.303 [INFO][4907] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.81.135/26] handle="k8s-pod-network.b051fe0d064e85464022dd0cddb643a4fb5e4989b89ee311a26f2f2a5e8c57ac" host="ci-4081-3-3-n-16326e39d6" May 17 00:15:25.347226 containerd[1479]: 2025-05-17 00:15:25.303 [INFO][4907] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 17 00:15:25.347226 containerd[1479]: 2025-05-17 00:15:25.303 [INFO][4907] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.81.135/26] IPv6=[] ContainerID="b051fe0d064e85464022dd0cddb643a4fb5e4989b89ee311a26f2f2a5e8c57ac" HandleID="k8s-pod-network.b051fe0d064e85464022dd0cddb643a4fb5e4989b89ee311a26f2f2a5e8c57ac" Workload="ci--4081--3--3--n--16326e39d6-k8s-goldmane--78d55f7ddc--fqppr-eth0" May 17 00:15:25.347813 containerd[1479]: 2025-05-17 00:15:25.308 [INFO][4881] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b051fe0d064e85464022dd0cddb643a4fb5e4989b89ee311a26f2f2a5e8c57ac" Namespace="calico-system" Pod="goldmane-78d55f7ddc-fqppr" WorkloadEndpoint="ci--4081--3--3--n--16326e39d6-k8s-goldmane--78d55f7ddc--fqppr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--16326e39d6-k8s-goldmane--78d55f7ddc--fqppr-eth0", GenerateName:"goldmane-78d55f7ddc-", Namespace:"calico-system", SelfLink:"", UID:"a9b260fc-ff83-4de9-ac43-723c22c032c2", ResourceVersion:"1001", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 14, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"78d55f7ddc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-16326e39d6", ContainerID:"", Pod:"goldmane-78d55f7ddc-fqppr", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.81.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali4b72b095a19", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:15:25.347813 containerd[1479]: 2025-05-17 00:15:25.309 [INFO][4881] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.81.135/32] ContainerID="b051fe0d064e85464022dd0cddb643a4fb5e4989b89ee311a26f2f2a5e8c57ac" Namespace="calico-system" Pod="goldmane-78d55f7ddc-fqppr" WorkloadEndpoint="ci--4081--3--3--n--16326e39d6-k8s-goldmane--78d55f7ddc--fqppr-eth0" May 17 00:15:25.347813 containerd[1479]: 2025-05-17 00:15:25.309 [INFO][4881] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4b72b095a19 ContainerID="b051fe0d064e85464022dd0cddb643a4fb5e4989b89ee311a26f2f2a5e8c57ac" Namespace="calico-system" Pod="goldmane-78d55f7ddc-fqppr" WorkloadEndpoint="ci--4081--3--3--n--16326e39d6-k8s-goldmane--78d55f7ddc--fqppr-eth0" May 17 00:15:25.347813 containerd[1479]: 2025-05-17 00:15:25.318 [INFO][4881] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b051fe0d064e85464022dd0cddb643a4fb5e4989b89ee311a26f2f2a5e8c57ac" Namespace="calico-system" Pod="goldmane-78d55f7ddc-fqppr" WorkloadEndpoint="ci--4081--3--3--n--16326e39d6-k8s-goldmane--78d55f7ddc--fqppr-eth0" May 17 00:15:25.347813 containerd[1479]: 2025-05-17 00:15:25.321 [INFO][4881] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b051fe0d064e85464022dd0cddb643a4fb5e4989b89ee311a26f2f2a5e8c57ac" 
Namespace="calico-system" Pod="goldmane-78d55f7ddc-fqppr" WorkloadEndpoint="ci--4081--3--3--n--16326e39d6-k8s-goldmane--78d55f7ddc--fqppr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--16326e39d6-k8s-goldmane--78d55f7ddc--fqppr-eth0", GenerateName:"goldmane-78d55f7ddc-", Namespace:"calico-system", SelfLink:"", UID:"a9b260fc-ff83-4de9-ac43-723c22c032c2", ResourceVersion:"1001", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 14, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"78d55f7ddc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-16326e39d6", ContainerID:"b051fe0d064e85464022dd0cddb643a4fb5e4989b89ee311a26f2f2a5e8c57ac", Pod:"goldmane-78d55f7ddc-fqppr", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.81.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali4b72b095a19", MAC:"42:5a:30:7d:f3:9b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:15:25.347813 containerd[1479]: 2025-05-17 00:15:25.342 [INFO][4881] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b051fe0d064e85464022dd0cddb643a4fb5e4989b89ee311a26f2f2a5e8c57ac" Namespace="calico-system" Pod="goldmane-78d55f7ddc-fqppr" WorkloadEndpoint="ci--4081--3--3--n--16326e39d6-k8s-goldmane--78d55f7ddc--fqppr-eth0" May 17 00:15:25.379406 containerd[1479]: time="2025-05-17T00:15:25.378553650Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:15:25.380361 containerd[1479]: time="2025-05-17T00:15:25.379517134Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:15:25.380361 containerd[1479]: time="2025-05-17T00:15:25.379757695Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:15:25.380361 containerd[1479]: time="2025-05-17T00:15:25.379900575Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:15:25.420015 systemd[1]: Started cri-containerd-b051fe0d064e85464022dd0cddb643a4fb5e4989b89ee311a26f2f2a5e8c57ac.scope - libcontainer container b051fe0d064e85464022dd0cddb643a4fb5e4989b89ee311a26f2f2a5e8c57ac. 
May 17 00:15:25.447860 systemd-networkd[1369]: calib63bbd4e82a: Link UP May 17 00:15:25.451715 systemd-networkd[1369]: calib63bbd4e82a: Gained carrier May 17 00:15:25.476826 containerd[1479]: 2025-05-17 00:15:25.171 [INFO][4892] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--n--16326e39d6-k8s-coredns--668d6bf9bc--wpn2m-eth0 coredns-668d6bf9bc- kube-system a7794f9e-6b8e-4656-8525-16c2f94584b5 1002 0 2025-05-17 00:14:41 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-3-n-16326e39d6 coredns-668d6bf9bc-wpn2m eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calib63bbd4e82a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="1b9ebce1d4f64c5bb542d7b4ebe5c1e46006752cbfc0131f80223653ddb20a5a" Namespace="kube-system" Pod="coredns-668d6bf9bc-wpn2m" WorkloadEndpoint="ci--4081--3--3--n--16326e39d6-k8s-coredns--668d6bf9bc--wpn2m-" May 17 00:15:25.476826 containerd[1479]: 2025-05-17 00:15:25.171 [INFO][4892] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1b9ebce1d4f64c5bb542d7b4ebe5c1e46006752cbfc0131f80223653ddb20a5a" Namespace="kube-system" Pod="coredns-668d6bf9bc-wpn2m" WorkloadEndpoint="ci--4081--3--3--n--16326e39d6-k8s-coredns--668d6bf9bc--wpn2m-eth0" May 17 00:15:25.476826 containerd[1479]: 2025-05-17 00:15:25.242 [INFO][4914] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1b9ebce1d4f64c5bb542d7b4ebe5c1e46006752cbfc0131f80223653ddb20a5a" HandleID="k8s-pod-network.1b9ebce1d4f64c5bb542d7b4ebe5c1e46006752cbfc0131f80223653ddb20a5a" Workload="ci--4081--3--3--n--16326e39d6-k8s-coredns--668d6bf9bc--wpn2m-eth0" May 17 00:15:25.476826 containerd[1479]: 2025-05-17 00:15:25.245 [INFO][4914] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1b9ebce1d4f64c5bb542d7b4ebe5c1e46006752cbfc0131f80223653ddb20a5a" HandleID="k8s-pod-network.1b9ebce1d4f64c5bb542d7b4ebe5c1e46006752cbfc0131f80223653ddb20a5a" Workload="ci--4081--3--3--n--16326e39d6-k8s-coredns--668d6bf9bc--wpn2m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d560), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-3-n-16326e39d6", "pod":"coredns-668d6bf9bc-wpn2m", "timestamp":"2025-05-17 00:15:25.241809278 +0000 UTC"}, Hostname:"ci-4081-3-3-n-16326e39d6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:15:25.476826 containerd[1479]: 2025-05-17 00:15:25.245 [INFO][4914] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:15:25.476826 containerd[1479]: 2025-05-17 00:15:25.303 [INFO][4914] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:15:25.476826 containerd[1479]: 2025-05-17 00:15:25.304 [INFO][4914] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-n-16326e39d6' May 17 00:15:25.476826 containerd[1479]: 2025-05-17 00:15:25.360 [INFO][4914] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1b9ebce1d4f64c5bb542d7b4ebe5c1e46006752cbfc0131f80223653ddb20a5a" host="ci-4081-3-3-n-16326e39d6" May 17 00:15:25.476826 containerd[1479]: 2025-05-17 00:15:25.371 [INFO][4914] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-3-n-16326e39d6" May 17 00:15:25.476826 containerd[1479]: 2025-05-17 00:15:25.382 [INFO][4914] ipam/ipam.go 511: Trying affinity for 192.168.81.128/26 host="ci-4081-3-3-n-16326e39d6" May 17 00:15:25.476826 containerd[1479]: 2025-05-17 00:15:25.386 [INFO][4914] ipam/ipam.go 158: Attempting to load block cidr=192.168.81.128/26 host="ci-4081-3-3-n-16326e39d6" May 17 00:15:25.476826 containerd[1479]: 2025-05-17 00:15:25.393 [INFO][4914] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.81.128/26 host="ci-4081-3-3-n-16326e39d6" May 17 00:15:25.476826 containerd[1479]: 2025-05-17 00:15:25.393 [INFO][4914] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.81.128/26 handle="k8s-pod-network.1b9ebce1d4f64c5bb542d7b4ebe5c1e46006752cbfc0131f80223653ddb20a5a" host="ci-4081-3-3-n-16326e39d6" May 17 00:15:25.476826 containerd[1479]: 2025-05-17 00:15:25.402 [INFO][4914] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.1b9ebce1d4f64c5bb542d7b4ebe5c1e46006752cbfc0131f80223653ddb20a5a May 17 00:15:25.476826 containerd[1479]: 2025-05-17 00:15:25.423 [INFO][4914] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.81.128/26 handle="k8s-pod-network.1b9ebce1d4f64c5bb542d7b4ebe5c1e46006752cbfc0131f80223653ddb20a5a" host="ci-4081-3-3-n-16326e39d6" May 17 00:15:25.476826 containerd[1479]: 2025-05-17 00:15:25.435 [INFO][4914] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.81.136/26] block=192.168.81.128/26 handle="k8s-pod-network.1b9ebce1d4f64c5bb542d7b4ebe5c1e46006752cbfc0131f80223653ddb20a5a" host="ci-4081-3-3-n-16326e39d6" May 17 00:15:25.476826 containerd[1479]: 2025-05-17 00:15:25.435 [INFO][4914] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.81.136/26] handle="k8s-pod-network.1b9ebce1d4f64c5bb542d7b4ebe5c1e46006752cbfc0131f80223653ddb20a5a" host="ci-4081-3-3-n-16326e39d6" May 17 00:15:25.476826 containerd[1479]: 2025-05-17 00:15:25.435 [INFO][4914] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 17 00:15:25.476826 containerd[1479]: 2025-05-17 00:15:25.435 [INFO][4914] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.81.136/26] IPv6=[] ContainerID="1b9ebce1d4f64c5bb542d7b4ebe5c1e46006752cbfc0131f80223653ddb20a5a" HandleID="k8s-pod-network.1b9ebce1d4f64c5bb542d7b4ebe5c1e46006752cbfc0131f80223653ddb20a5a" Workload="ci--4081--3--3--n--16326e39d6-k8s-coredns--668d6bf9bc--wpn2m-eth0" May 17 00:15:25.478234 containerd[1479]: 2025-05-17 00:15:25.442 [INFO][4892] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1b9ebce1d4f64c5bb542d7b4ebe5c1e46006752cbfc0131f80223653ddb20a5a" Namespace="kube-system" Pod="coredns-668d6bf9bc-wpn2m" WorkloadEndpoint="ci--4081--3--3--n--16326e39d6-k8s-coredns--668d6bf9bc--wpn2m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--16326e39d6-k8s-coredns--668d6bf9bc--wpn2m-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"a7794f9e-6b8e-4656-8525-16c2f94584b5", ResourceVersion:"1002", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 14, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-16326e39d6", ContainerID:"", Pod:"coredns-668d6bf9bc-wpn2m", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.81.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib63bbd4e82a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:15:25.478234 containerd[1479]: 2025-05-17 00:15:25.443 [INFO][4892] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.81.136/32] ContainerID="1b9ebce1d4f64c5bb542d7b4ebe5c1e46006752cbfc0131f80223653ddb20a5a" Namespace="kube-system" Pod="coredns-668d6bf9bc-wpn2m" WorkloadEndpoint="ci--4081--3--3--n--16326e39d6-k8s-coredns--668d6bf9bc--wpn2m-eth0" May 17 00:15:25.478234 containerd[1479]: 2025-05-17 00:15:25.443 [INFO][4892] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib63bbd4e82a ContainerID="1b9ebce1d4f64c5bb542d7b4ebe5c1e46006752cbfc0131f80223653ddb20a5a" Namespace="kube-system" Pod="coredns-668d6bf9bc-wpn2m" WorkloadEndpoint="ci--4081--3--3--n--16326e39d6-k8s-coredns--668d6bf9bc--wpn2m-eth0" May 17 00:15:25.478234 containerd[1479]: 2025-05-17 00:15:25.453 [INFO][4892] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1b9ebce1d4f64c5bb542d7b4ebe5c1e46006752cbfc0131f80223653ddb20a5a" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-wpn2m" WorkloadEndpoint="ci--4081--3--3--n--16326e39d6-k8s-coredns--668d6bf9bc--wpn2m-eth0" May 17 00:15:25.478234 containerd[1479]: 2025-05-17 00:15:25.456 [INFO][4892] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1b9ebce1d4f64c5bb542d7b4ebe5c1e46006752cbfc0131f80223653ddb20a5a" Namespace="kube-system" Pod="coredns-668d6bf9bc-wpn2m" WorkloadEndpoint="ci--4081--3--3--n--16326e39d6-k8s-coredns--668d6bf9bc--wpn2m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--16326e39d6-k8s-coredns--668d6bf9bc--wpn2m-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"a7794f9e-6b8e-4656-8525-16c2f94584b5", ResourceVersion:"1002", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 14, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-16326e39d6", ContainerID:"1b9ebce1d4f64c5bb542d7b4ebe5c1e46006752cbfc0131f80223653ddb20a5a", Pod:"coredns-668d6bf9bc-wpn2m", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.81.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib63bbd4e82a", MAC:"9a:ee:58:dc:23:5b", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:15:25.478234 containerd[1479]: 2025-05-17 00:15:25.472 [INFO][4892] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1b9ebce1d4f64c5bb542d7b4ebe5c1e46006752cbfc0131f80223653ddb20a5a" Namespace="kube-system" Pod="coredns-668d6bf9bc-wpn2m" WorkloadEndpoint="ci--4081--3--3--n--16326e39d6-k8s-coredns--668d6bf9bc--wpn2m-eth0" May 17 00:15:25.508524 containerd[1479]: time="2025-05-17T00:15:25.508341315Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:15:25.508524 containerd[1479]: time="2025-05-17T00:15:25.508408116Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:15:25.508524 containerd[1479]: time="2025-05-17T00:15:25.508462516Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:15:25.509092 containerd[1479]: time="2025-05-17T00:15:25.508563636Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:15:25.538971 containerd[1479]: time="2025-05-17T00:15:25.538843834Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-fqppr,Uid:a9b260fc-ff83-4de9-ac43-723c22c032c2,Namespace:calico-system,Attempt:1,} returns sandbox id \"b051fe0d064e85464022dd0cddb643a4fb5e4989b89ee311a26f2f2a5e8c57ac\"" May 17 00:15:25.551062 systemd[1]: Started cri-containerd-1b9ebce1d4f64c5bb542d7b4ebe5c1e46006752cbfc0131f80223653ddb20a5a.scope - libcontainer container 1b9ebce1d4f64c5bb542d7b4ebe5c1e46006752cbfc0131f80223653ddb20a5a. May 17 00:15:25.602508 containerd[1479]: time="2025-05-17T00:15:25.601796839Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wpn2m,Uid:a7794f9e-6b8e-4656-8525-16c2f94584b5,Namespace:kube-system,Attempt:1,} returns sandbox id \"1b9ebce1d4f64c5bb542d7b4ebe5c1e46006752cbfc0131f80223653ddb20a5a\"" May 17 00:15:25.607636 containerd[1479]: time="2025-05-17T00:15:25.607535421Z" level=info msg="CreateContainer within sandbox \"1b9ebce1d4f64c5bb542d7b4ebe5c1e46006752cbfc0131f80223653ddb20a5a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 17 00:15:25.628678 containerd[1479]: time="2025-05-17T00:15:25.627940461Z" level=info msg="CreateContainer within sandbox \"1b9ebce1d4f64c5bb542d7b4ebe5c1e46006752cbfc0131f80223653ddb20a5a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"54ad59a369d16e7a4a5ac8d55cda7d4270c3da0513e3dc7bc423fa8be0bfc030\"" May 17 00:15:25.629037 containerd[1479]: time="2025-05-17T00:15:25.628984385Z" level=info msg="StartContainer for \"54ad59a369d16e7a4a5ac8d55cda7d4270c3da0513e3dc7bc423fa8be0bfc030\"" May 17 00:15:25.662647 systemd[1]: Started cri-containerd-54ad59a369d16e7a4a5ac8d55cda7d4270c3da0513e3dc7bc423fa8be0bfc030.scope - libcontainer container 54ad59a369d16e7a4a5ac8d55cda7d4270c3da0513e3dc7bc423fa8be0bfc030. 
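The coredns endpoint is the only one in this boot that carries a Ports list, and the struct dump prints the values in Go hex notation: 0x35 is port 53 (dns and dns-tcp) and 0x23c1 is port 9153 (metrics), the stock coredns service ports. Confirming the conversion:

package main

import "fmt"

func main() {
	// Port values as printed in the WorkloadEndpointPort dumps above.
	fmt.Println(0x35, 0x23c1) // 53 9153
}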
May 17 00:15:25.693679 containerd[1479]: time="2025-05-17T00:15:25.693608996Z" level=info msg="StartContainer for \"54ad59a369d16e7a4a5ac8d55cda7d4270c3da0513e3dc7bc423fa8be0bfc030\" returns successfully" May 17 00:15:26.088801 kubelet[2670]: I0517 00:15:26.088695 2670 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 00:15:26.142959 kubelet[2670]: I0517 00:15:26.142890 2670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-wpn2m" podStartSLOduration=45.142869416 podStartE2EDuration="45.142869416s" podCreationTimestamp="2025-05-17 00:14:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:15:26.109513328 +0000 UTC m=+50.485740378" watchObservedRunningTime="2025-05-17 00:15:26.142869416 +0000 UTC m=+50.519096466" May 17 00:15:26.180627 containerd[1479]: time="2025-05-17T00:15:26.179706757Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:15:26.182495 containerd[1479]: time="2025-05-17T00:15:26.182369727Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.0: active requests=0, bytes read=8226240" May 17 00:15:26.187448 containerd[1479]: time="2025-05-17T00:15:26.185062218Z" level=info msg="ImageCreate event name:\"sha256:ebe7e098653491dec9f15f87d7f5d33f47b09d1d6f3ef83deeaaa6237024c045\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:15:26.190522 containerd[1479]: time="2025-05-17T00:15:26.190449518Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:27883a4104876fe239311dd93ce6efd0c4a87de7163d57a4c8d96bd65a287ffd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:15:26.194393 containerd[1479]: time="2025-05-17T00:15:26.194338973Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.0\" with image id \"sha256:ebe7e098653491dec9f15f87d7f5d33f47b09d1d6f3ef83deeaaa6237024c045\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:27883a4104876fe239311dd93ce6efd0c4a87de7163d57a4c8d96bd65a287ffd\", size \"9595481\" in 1.780820357s" May 17 00:15:26.194632 containerd[1479]: time="2025-05-17T00:15:26.194608174Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.0\" returns image reference \"sha256:ebe7e098653491dec9f15f87d7f5d33f47b09d1d6f3ef83deeaaa6237024c045\"" May 17 00:15:26.196728 containerd[1479]: time="2025-05-17T00:15:26.196691782Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\"" May 17 00:15:26.200814 containerd[1479]: time="2025-05-17T00:15:26.200746838Z" level=info msg="CreateContainer within sandbox \"0f59d6a32b1795c619c86341b39083c431c0139c7b1909c984da53702643edd9\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 17 00:15:26.229703 containerd[1479]: time="2025-05-17T00:15:26.229566988Z" level=info msg="CreateContainer within sandbox \"0f59d6a32b1795c619c86341b39083c431c0139c7b1909c984da53702643edd9\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"01cc00b406a6654c4cfce5c8feb4e369ec38cda2b3121a4342cc0cc43b408ffd\"" May 17 00:15:26.233218 containerd[1479]: time="2025-05-17T00:15:26.231599076Z" level=info msg="StartContainer for \"01cc00b406a6654c4cfce5c8feb4e369ec38cda2b3121a4342cc0cc43b408ffd\"" May 17 00:15:26.281697 systemd[1]: Started 
cri-containerd-01cc00b406a6654c4cfce5c8feb4e369ec38cda2b3121a4342cc0cc43b408ffd.scope - libcontainer container 01cc00b406a6654c4cfce5c8feb4e369ec38cda2b3121a4342cc0cc43b408ffd. May 17 00:15:26.333163 containerd[1479]: time="2025-05-17T00:15:26.333105225Z" level=info msg="StartContainer for \"01cc00b406a6654c4cfce5c8feb4e369ec38cda2b3121a4342cc0cc43b408ffd\" returns successfully" May 17 00:15:26.597495 containerd[1479]: time="2025-05-17T00:15:26.597396277Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:15:26.600805 containerd[1479]: time="2025-05-17T00:15:26.600736610Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.0: active requests=0, bytes read=77" May 17 00:15:26.603906 containerd[1479]: time="2025-05-17T00:15:26.603683101Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" with image id \"sha256:0d503660232383641bf9af3b7e4ef066c0e96a8ec586f123e5b56b6a196c983d\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\", size \"45822470\" in 406.523357ms" May 17 00:15:26.603906 containerd[1479]: time="2025-05-17T00:15:26.603907262Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" returns image reference \"sha256:0d503660232383641bf9af3b7e4ef066c0e96a8ec586f123e5b56b6a196c983d\"" May 17 00:15:26.605951 containerd[1479]: time="2025-05-17T00:15:26.605900470Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\"" May 17 00:15:26.607120 containerd[1479]: time="2025-05-17T00:15:26.607073194Z" level=info msg="CreateContainer within sandbox \"7cc6e6922ca134e7502e3d1a4f281ce1d42835300cbb5aa236449d4e4645c4ae\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 17 00:15:26.629916 containerd[1479]: time="2025-05-17T00:15:26.629847401Z" level=info msg="CreateContainer within sandbox \"7cc6e6922ca134e7502e3d1a4f281ce1d42835300cbb5aa236449d4e4645c4ae\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"d3887f25fbc203ff61a8f26ba56ce4a1311e3b2f9cb7b31cbcfd560ef63bbbc0\"" May 17 00:15:26.631861 containerd[1479]: time="2025-05-17T00:15:26.631817009Z" level=info msg="StartContainer for \"d3887f25fbc203ff61a8f26ba56ce4a1311e3b2f9cb7b31cbcfd560ef63bbbc0\"" May 17 00:15:26.679870 systemd[1]: Started cri-containerd-d3887f25fbc203ff61a8f26ba56ce4a1311e3b2f9cb7b31cbcfd560ef63bbbc0.scope - libcontainer container d3887f25fbc203ff61a8f26ba56ce4a1311e3b2f9cb7b31cbcfd560ef63bbbc0. 
May 17 00:15:26.737112 containerd[1479]: time="2025-05-17T00:15:26.737057172Z" level=info msg="StartContainer for \"d3887f25fbc203ff61a8f26ba56ce4a1311e3b2f9cb7b31cbcfd560ef63bbbc0\" returns successfully" May 17 00:15:27.341606 systemd-networkd[1369]: cali4b72b095a19: Gained IPv6LL May 17 00:15:27.343573 systemd-networkd[1369]: calib63bbd4e82a: Gained IPv6LL May 17 00:15:28.098761 kubelet[2670]: I0517 00:15:28.098707 2670 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 00:15:29.651511 containerd[1479]: time="2025-05-17T00:15:29.651371121Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:15:29.653027 containerd[1479]: time="2025-05-17T00:15:29.652980687Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.0: active requests=0, bytes read=48045219" May 17 00:15:29.654193 containerd[1479]: time="2025-05-17T00:15:29.654115691Z" level=info msg="ImageCreate event name:\"sha256:4188fe2931435deda58a0dc1767a2f6ad2bb27e47662ccec626bd07006f56373\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:15:29.658256 containerd[1479]: time="2025-05-17T00:15:29.658168466Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:eb5bc5c9e7a71f1d8ea69bbcc8e54b84fb7ec1e32d919c8b148f80b770f20182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:15:29.659600 containerd[1479]: time="2025-05-17T00:15:29.659238229Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" with image id \"sha256:4188fe2931435deda58a0dc1767a2f6ad2bb27e47662ccec626bd07006f56373\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:eb5bc5c9e7a71f1d8ea69bbcc8e54b84fb7ec1e32d919c8b148f80b770f20182\", size \"49414428\" in 3.053281039s" May 17 00:15:29.659600 containerd[1479]: time="2025-05-17T00:15:29.659290710Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" returns image reference \"sha256:4188fe2931435deda58a0dc1767a2f6ad2bb27e47662ccec626bd07006f56373\"" May 17 00:15:29.665599 containerd[1479]: time="2025-05-17T00:15:29.665529492Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 17 00:15:29.683680 containerd[1479]: time="2025-05-17T00:15:29.683628559Z" level=info msg="CreateContainer within sandbox \"61bd9fa28d913b161bc4aa6136879719c72cd7b55de528afbf04ca387a9e7f49\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 17 00:15:29.728503 containerd[1479]: time="2025-05-17T00:15:29.728377242Z" level=info msg="CreateContainer within sandbox \"61bd9fa28d913b161bc4aa6136879719c72cd7b55de528afbf04ca387a9e7f49\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"9eaf7e3337785830906619008defa9caefea9d3286055afa22f8124980cde8d9\"" May 17 00:15:29.731184 containerd[1479]: time="2025-05-17T00:15:29.730250449Z" level=info msg="StartContainer for \"9eaf7e3337785830906619008defa9caefea9d3286055afa22f8124980cde8d9\"" May 17 00:15:29.778705 systemd[1]: Started cri-containerd-9eaf7e3337785830906619008defa9caefea9d3286055afa22f8124980cde8d9.scope - libcontainer container 9eaf7e3337785830906619008defa9caefea9d3286055afa22f8124980cde8d9. 
May 17 00:15:29.826627 containerd[1479]: time="2025-05-17T00:15:29.826583681Z" level=info msg="StartContainer for \"9eaf7e3337785830906619008defa9caefea9d3286055afa22f8124980cde8d9\" returns successfully" May 17 00:15:29.924701 containerd[1479]: time="2025-05-17T00:15:29.924368598Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:15:29.927669 containerd[1479]: time="2025-05-17T00:15:29.927598010Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:15:29.928027 containerd[1479]: time="2025-05-17T00:15:29.927827051Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 17 00:15:29.928563 kubelet[2670]: E0517 00:15:29.928220 2670 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:15:29.928563 kubelet[2670]: E0517 00:15:29.928265 2670 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:15:29.929792 containerd[1479]: time="2025-05-17T00:15:29.929064176Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\"" May 17 00:15:29.948799 kubelet[2670]: E0517 00:15:29.939737 2670 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vcg88,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-78d55f7ddc-fqppr_calico-system(a9b260fc-ff83-4de9-ac43-723c22c032c2): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:15:29.951296 kubelet[2670]: E0517 00:15:29.949918 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-fqppr" podUID="a9b260fc-ff83-4de9-ac43-723c22c032c2" May 17 00:15:30.111745 kubelet[2670]: E0517 00:15:30.111230 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-fqppr" podUID="a9b260fc-ff83-4de9-ac43-723c22c032c2" May 17 00:15:30.148480 kubelet[2670]: I0517 00:15:30.148340 2670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7d6599b8b4-52gm9" podStartSLOduration=25.937889784 podStartE2EDuration="32.148321249s" podCreationTimestamp="2025-05-17 00:14:58 +0000 UTC" firstStartedPulling="2025-05-17 00:15:23.453379741 +0000 UTC m=+47.829606791" lastFinishedPulling="2025-05-17 00:15:29.663811166 +0000 UTC m=+54.040038256" observedRunningTime="2025-05-17 00:15:30.145360598 +0000 UTC m=+54.521587608" watchObservedRunningTime="2025-05-17 00:15:30.148321249 +0000 UTC m=+54.524548299" May 17 00:15:30.149564 kubelet[2670]: I0517 00:15:30.149502 2670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6cbf8c7948-hk4z6" podStartSLOduration=34.952649826 podStartE2EDuration="38.149485133s" podCreationTimestamp="2025-05-17 00:14:52 +0000 UTC" firstStartedPulling="2025-05-17 00:15:23.408179839 +0000 UTC m=+47.784406889" lastFinishedPulling="2025-05-17 00:15:26.605015146 +0000 UTC m=+50.981242196" observedRunningTime="2025-05-17 00:15:27.142847998 +0000 UTC m=+51.519075128" watchObservedRunningTime="2025-05-17 00:15:30.149485133 +0000 UTC m=+54.525712183" May 17 00:15:31.474737 containerd[1479]: time="2025-05-17T00:15:31.474539438Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:15:31.476839 containerd[1479]: time="2025-05-17T00:15:31.476792206Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0: active requests=0, bytes read=13749925" May 17 00:15:31.478522 containerd[1479]: time="2025-05-17T00:15:31.477989570Z" level=info msg="ImageCreate event name:\"sha256:a5d5f2a68204ed0dbc50f8778616ee92a63c0e342d178a4620e6271484e5c8b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:15:31.483064 containerd[1479]: time="2025-05-17T00:15:31.482524586Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:dca5c16181edde2e860463615523ce457cd9dcfca85b7cfdcd6f3ea7de6f2ac8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:15:31.486304 containerd[1479]: time="2025-05-17T00:15:31.486259959Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" with image id \"sha256:a5d5f2a68204ed0dbc50f8778616ee92a63c0e342d178a4620e6271484e5c8b2\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\", repo digest 
\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:dca5c16181edde2e860463615523ce457cd9dcfca85b7cfdcd6f3ea7de6f2ac8\", size \"15119118\" in 1.557159143s" May 17 00:15:31.486541 containerd[1479]: time="2025-05-17T00:15:31.486516280Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" returns image reference \"sha256:a5d5f2a68204ed0dbc50f8778616ee92a63c0e342d178a4620e6271484e5c8b2\"" May 17 00:15:31.490320 containerd[1479]: time="2025-05-17T00:15:31.490283214Z" level=info msg="CreateContainer within sandbox \"0f59d6a32b1795c619c86341b39083c431c0139c7b1909c984da53702643edd9\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" May 17 00:15:31.509657 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2506560764.mount: Deactivated successfully. May 17 00:15:31.512594 containerd[1479]: time="2025-05-17T00:15:31.512230931Z" level=info msg="CreateContainer within sandbox \"0f59d6a32b1795c619c86341b39083c431c0139c7b1909c984da53702643edd9\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"661b9879caf707a734049bb4724ef2999ab6e72d129135450df6d62b93ec21b4\"" May 17 00:15:31.515569 containerd[1479]: time="2025-05-17T00:15:31.514228459Z" level=info msg="StartContainer for \"661b9879caf707a734049bb4724ef2999ab6e72d129135450df6d62b93ec21b4\"" May 17 00:15:31.576663 systemd[1]: Started cri-containerd-661b9879caf707a734049bb4724ef2999ab6e72d129135450df6d62b93ec21b4.scope - libcontainer container 661b9879caf707a734049bb4724ef2999ab6e72d129135450df6d62b93ec21b4. May 17 00:15:31.620858 containerd[1479]: time="2025-05-17T00:15:31.620815357Z" level=info msg="StartContainer for \"661b9879caf707a734049bb4724ef2999ab6e72d129135450df6d62b93ec21b4\" returns successfully" May 17 00:15:31.755280 containerd[1479]: time="2025-05-17T00:15:31.754911752Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 17 00:15:31.890169 kubelet[2670]: I0517 00:15:31.890102 2670 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 May 17 00:15:31.890169 kubelet[2670]: I0517 00:15:31.890174 2670 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock May 17 00:15:32.013013 containerd[1479]: time="2025-05-17T00:15:32.012594466Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:15:32.015153 containerd[1479]: time="2025-05-17T00:15:32.015067914Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:15:32.015496 containerd[1479]: time="2025-05-17T00:15:32.015123994Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 17 00:15:32.015693 kubelet[2670]: E0517 00:15:32.015640 2670 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown 
desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:15:32.015857 kubelet[2670]: E0517 00:15:32.015701 2670 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:15:32.015857 kubelet[2670]: E0517 00:15:32.015805 2670 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:ecfce7dcd79642e9a67dfb965e76b411,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nf456,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-c64877bf5-xgbzp_calico-system(ea4a179c-2064-482e-bd61-eeafaaf1f680): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:15:32.019585 containerd[1479]: time="2025-05-17T00:15:32.019515010Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 17 00:15:32.253080 containerd[1479]: time="2025-05-17T00:15:32.253015826Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:15:32.254759 containerd[1479]: time="2025-05-17T00:15:32.254649152Z" 
level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:15:32.254903 containerd[1479]: time="2025-05-17T00:15:32.254710032Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 17 00:15:32.255064 kubelet[2670]: E0517 00:15:32.254996 2670 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:15:32.256279 kubelet[2670]: E0517 00:15:32.255061 2670 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:15:32.256279 kubelet[2670]: E0517 00:15:32.255183 2670 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nf456,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-c64877bf5-xgbzp_calico-system(ea4a179c-2064-482e-bd61-eeafaaf1f680): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:15:32.256584 kubelet[2670]: E0517 00:15:32.256380 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-c64877bf5-xgbzp" podUID="ea4a179c-2064-482e-bd61-eeafaaf1f680" May 17 00:15:35.773995 containerd[1479]: time="2025-05-17T00:15:35.773580951Z" level=info msg="StopPodSandbox for 
\"7d60180c4aec9b8c4dceff3978d7701133e0f03c00224dbe2bfe5cc62e984adc\"" May 17 00:15:35.914771 containerd[1479]: 2025-05-17 00:15:35.840 [WARNING][5312] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="7d60180c4aec9b8c4dceff3978d7701133e0f03c00224dbe2bfe5cc62e984adc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--16326e39d6-k8s-calico--apiserver--6cbf8c7948--6pm57-eth0", GenerateName:"calico-apiserver-6cbf8c7948-", Namespace:"calico-apiserver", SelfLink:"", UID:"753684c6-cd41-4791-9ed7-725f4728c2a4", ResourceVersion:"1003", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 14, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6cbf8c7948", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-16326e39d6", ContainerID:"fd491bd53711038e437cc1447d10887e0ae489d15c94fc55a6255dae964dd001", Pod:"calico-apiserver-6cbf8c7948-6pm57", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.81.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali97bc9f24b57", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:15:35.914771 containerd[1479]: 2025-05-17 00:15:35.841 [INFO][5312] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7d60180c4aec9b8c4dceff3978d7701133e0f03c00224dbe2bfe5cc62e984adc" May 17 00:15:35.914771 containerd[1479]: 2025-05-17 00:15:35.841 [INFO][5312] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7d60180c4aec9b8c4dceff3978d7701133e0f03c00224dbe2bfe5cc62e984adc" iface="eth0" netns="" May 17 00:15:35.914771 containerd[1479]: 2025-05-17 00:15:35.841 [INFO][5312] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7d60180c4aec9b8c4dceff3978d7701133e0f03c00224dbe2bfe5cc62e984adc" May 17 00:15:35.914771 containerd[1479]: 2025-05-17 00:15:35.841 [INFO][5312] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7d60180c4aec9b8c4dceff3978d7701133e0f03c00224dbe2bfe5cc62e984adc" May 17 00:15:35.914771 containerd[1479]: 2025-05-17 00:15:35.886 [INFO][5319] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7d60180c4aec9b8c4dceff3978d7701133e0f03c00224dbe2bfe5cc62e984adc" HandleID="k8s-pod-network.7d60180c4aec9b8c4dceff3978d7701133e0f03c00224dbe2bfe5cc62e984adc" Workload="ci--4081--3--3--n--16326e39d6-k8s-calico--apiserver--6cbf8c7948--6pm57-eth0" May 17 00:15:35.914771 containerd[1479]: 2025-05-17 00:15:35.887 [INFO][5319] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:15:35.914771 containerd[1479]: 2025-05-17 00:15:35.887 [INFO][5319] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:15:35.914771 containerd[1479]: 2025-05-17 00:15:35.905 [WARNING][5319] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7d60180c4aec9b8c4dceff3978d7701133e0f03c00224dbe2bfe5cc62e984adc" HandleID="k8s-pod-network.7d60180c4aec9b8c4dceff3978d7701133e0f03c00224dbe2bfe5cc62e984adc" Workload="ci--4081--3--3--n--16326e39d6-k8s-calico--apiserver--6cbf8c7948--6pm57-eth0" May 17 00:15:35.914771 containerd[1479]: 2025-05-17 00:15:35.905 [INFO][5319] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7d60180c4aec9b8c4dceff3978d7701133e0f03c00224dbe2bfe5cc62e984adc" HandleID="k8s-pod-network.7d60180c4aec9b8c4dceff3978d7701133e0f03c00224dbe2bfe5cc62e984adc" Workload="ci--4081--3--3--n--16326e39d6-k8s-calico--apiserver--6cbf8c7948--6pm57-eth0" May 17 00:15:35.914771 containerd[1479]: 2025-05-17 00:15:35.908 [INFO][5319] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:15:35.914771 containerd[1479]: 2025-05-17 00:15:35.913 [INFO][5312] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7d60180c4aec9b8c4dceff3978d7701133e0f03c00224dbe2bfe5cc62e984adc" May 17 00:15:35.917104 containerd[1479]: time="2025-05-17T00:15:35.916536350Z" level=info msg="TearDown network for sandbox \"7d60180c4aec9b8c4dceff3978d7701133e0f03c00224dbe2bfe5cc62e984adc\" successfully" May 17 00:15:35.917104 containerd[1479]: time="2025-05-17T00:15:35.916575310Z" level=info msg="StopPodSandbox for \"7d60180c4aec9b8c4dceff3978d7701133e0f03c00224dbe2bfe5cc62e984adc\" returns successfully" May 17 00:15:35.917940 containerd[1479]: time="2025-05-17T00:15:35.917584834Z" level=info msg="RemovePodSandbox for \"7d60180c4aec9b8c4dceff3978d7701133e0f03c00224dbe2bfe5cc62e984adc\"" May 17 00:15:35.917940 containerd[1479]: time="2025-05-17T00:15:35.917633754Z" level=info msg="Forcibly stopping sandbox \"7d60180c4aec9b8c4dceff3978d7701133e0f03c00224dbe2bfe5cc62e984adc\"" May 17 00:15:36.016263 containerd[1479]: 2025-05-17 00:15:35.969 [WARNING][5334] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7d60180c4aec9b8c4dceff3978d7701133e0f03c00224dbe2bfe5cc62e984adc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--16326e39d6-k8s-calico--apiserver--6cbf8c7948--6pm57-eth0", GenerateName:"calico-apiserver-6cbf8c7948-", Namespace:"calico-apiserver", SelfLink:"", UID:"753684c6-cd41-4791-9ed7-725f4728c2a4", ResourceVersion:"1003", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 14, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6cbf8c7948", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-16326e39d6", ContainerID:"fd491bd53711038e437cc1447d10887e0ae489d15c94fc55a6255dae964dd001", Pod:"calico-apiserver-6cbf8c7948-6pm57", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.81.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali97bc9f24b57", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:15:36.016263 containerd[1479]: 2025-05-17 00:15:35.970 [INFO][5334] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7d60180c4aec9b8c4dceff3978d7701133e0f03c00224dbe2bfe5cc62e984adc" May 17 00:15:36.016263 containerd[1479]: 2025-05-17 00:15:35.970 [INFO][5334] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7d60180c4aec9b8c4dceff3978d7701133e0f03c00224dbe2bfe5cc62e984adc" iface="eth0" netns="" May 17 00:15:36.016263 containerd[1479]: 2025-05-17 00:15:35.970 [INFO][5334] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7d60180c4aec9b8c4dceff3978d7701133e0f03c00224dbe2bfe5cc62e984adc" May 17 00:15:36.016263 containerd[1479]: 2025-05-17 00:15:35.970 [INFO][5334] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7d60180c4aec9b8c4dceff3978d7701133e0f03c00224dbe2bfe5cc62e984adc" May 17 00:15:36.016263 containerd[1479]: 2025-05-17 00:15:35.997 [INFO][5342] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7d60180c4aec9b8c4dceff3978d7701133e0f03c00224dbe2bfe5cc62e984adc" HandleID="k8s-pod-network.7d60180c4aec9b8c4dceff3978d7701133e0f03c00224dbe2bfe5cc62e984adc" Workload="ci--4081--3--3--n--16326e39d6-k8s-calico--apiserver--6cbf8c7948--6pm57-eth0" May 17 00:15:36.016263 containerd[1479]: 2025-05-17 00:15:35.997 [INFO][5342] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:15:36.016263 containerd[1479]: 2025-05-17 00:15:35.997 [INFO][5342] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:15:36.016263 containerd[1479]: 2025-05-17 00:15:36.008 [WARNING][5342] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7d60180c4aec9b8c4dceff3978d7701133e0f03c00224dbe2bfe5cc62e984adc" HandleID="k8s-pod-network.7d60180c4aec9b8c4dceff3978d7701133e0f03c00224dbe2bfe5cc62e984adc" Workload="ci--4081--3--3--n--16326e39d6-k8s-calico--apiserver--6cbf8c7948--6pm57-eth0" May 17 00:15:36.016263 containerd[1479]: 2025-05-17 00:15:36.009 [INFO][5342] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7d60180c4aec9b8c4dceff3978d7701133e0f03c00224dbe2bfe5cc62e984adc" HandleID="k8s-pod-network.7d60180c4aec9b8c4dceff3978d7701133e0f03c00224dbe2bfe5cc62e984adc" Workload="ci--4081--3--3--n--16326e39d6-k8s-calico--apiserver--6cbf8c7948--6pm57-eth0" May 17 00:15:36.016263 containerd[1479]: 2025-05-17 00:15:36.011 [INFO][5342] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:15:36.016263 containerd[1479]: 2025-05-17 00:15:36.014 [INFO][5334] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7d60180c4aec9b8c4dceff3978d7701133e0f03c00224dbe2bfe5cc62e984adc" May 17 00:15:36.017510 containerd[1479]: time="2025-05-17T00:15:36.016922526Z" level=info msg="TearDown network for sandbox \"7d60180c4aec9b8c4dceff3978d7701133e0f03c00224dbe2bfe5cc62e984adc\" successfully" May 17 00:15:36.021630 containerd[1479]: time="2025-05-17T00:15:36.021576581Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7d60180c4aec9b8c4dceff3978d7701133e0f03c00224dbe2bfe5cc62e984adc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:15:36.021983 containerd[1479]: time="2025-05-17T00:15:36.021826222Z" level=info msg="RemovePodSandbox \"7d60180c4aec9b8c4dceff3978d7701133e0f03c00224dbe2bfe5cc62e984adc\" returns successfully" May 17 00:15:36.022667 containerd[1479]: time="2025-05-17T00:15:36.022632664Z" level=info msg="StopPodSandbox for \"b1d44cb7b0f8415b35ce49657bf37c2e1caa0957ed30a7b0f07cb637123005e7\"" May 17 00:15:36.167130 containerd[1479]: 2025-05-17 00:15:36.111 [WARNING][5356] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="b1d44cb7b0f8415b35ce49657bf37c2e1caa0957ed30a7b0f07cb637123005e7" WorkloadEndpoint="ci--4081--3--3--n--16326e39d6-k8s-whisker--66cc8f5467--bmsjq-eth0" May 17 00:15:36.167130 containerd[1479]: 2025-05-17 00:15:36.111 [INFO][5356] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b1d44cb7b0f8415b35ce49657bf37c2e1caa0957ed30a7b0f07cb637123005e7" May 17 00:15:36.167130 containerd[1479]: 2025-05-17 00:15:36.111 [INFO][5356] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="b1d44cb7b0f8415b35ce49657bf37c2e1caa0957ed30a7b0f07cb637123005e7" iface="eth0" netns="" May 17 00:15:36.167130 containerd[1479]: 2025-05-17 00:15:36.111 [INFO][5356] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b1d44cb7b0f8415b35ce49657bf37c2e1caa0957ed30a7b0f07cb637123005e7" May 17 00:15:36.167130 containerd[1479]: 2025-05-17 00:15:36.111 [INFO][5356] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b1d44cb7b0f8415b35ce49657bf37c2e1caa0957ed30a7b0f07cb637123005e7" May 17 00:15:36.167130 containerd[1479]: 2025-05-17 00:15:36.146 [INFO][5365] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b1d44cb7b0f8415b35ce49657bf37c2e1caa0957ed30a7b0f07cb637123005e7" HandleID="k8s-pod-network.b1d44cb7b0f8415b35ce49657bf37c2e1caa0957ed30a7b0f07cb637123005e7" Workload="ci--4081--3--3--n--16326e39d6-k8s-whisker--66cc8f5467--bmsjq-eth0" May 17 00:15:36.167130 containerd[1479]: 2025-05-17 00:15:36.146 [INFO][5365] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:15:36.167130 containerd[1479]: 2025-05-17 00:15:36.146 [INFO][5365] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:15:36.167130 containerd[1479]: 2025-05-17 00:15:36.160 [WARNING][5365] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b1d44cb7b0f8415b35ce49657bf37c2e1caa0957ed30a7b0f07cb637123005e7" HandleID="k8s-pod-network.b1d44cb7b0f8415b35ce49657bf37c2e1caa0957ed30a7b0f07cb637123005e7" Workload="ci--4081--3--3--n--16326e39d6-k8s-whisker--66cc8f5467--bmsjq-eth0" May 17 00:15:36.167130 containerd[1479]: 2025-05-17 00:15:36.160 [INFO][5365] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b1d44cb7b0f8415b35ce49657bf37c2e1caa0957ed30a7b0f07cb637123005e7" HandleID="k8s-pod-network.b1d44cb7b0f8415b35ce49657bf37c2e1caa0957ed30a7b0f07cb637123005e7" Workload="ci--4081--3--3--n--16326e39d6-k8s-whisker--66cc8f5467--bmsjq-eth0" May 17 00:15:36.167130 containerd[1479]: 2025-05-17 00:15:36.162 [INFO][5365] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:15:36.167130 containerd[1479]: 2025-05-17 00:15:36.164 [INFO][5356] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="b1d44cb7b0f8415b35ce49657bf37c2e1caa0957ed30a7b0f07cb637123005e7" May 17 00:15:36.167130 containerd[1479]: time="2025-05-17T00:15:36.167087622Z" level=info msg="TearDown network for sandbox \"b1d44cb7b0f8415b35ce49657bf37c2e1caa0957ed30a7b0f07cb637123005e7\" successfully" May 17 00:15:36.167130 containerd[1479]: time="2025-05-17T00:15:36.167127142Z" level=info msg="StopPodSandbox for \"b1d44cb7b0f8415b35ce49657bf37c2e1caa0957ed30a7b0f07cb637123005e7\" returns successfully" May 17 00:15:36.171837 containerd[1479]: time="2025-05-17T00:15:36.171775077Z" level=info msg="RemovePodSandbox for \"b1d44cb7b0f8415b35ce49657bf37c2e1caa0957ed30a7b0f07cb637123005e7\"" May 17 00:15:36.172317 containerd[1479]: time="2025-05-17T00:15:36.171983838Z" level=info msg="Forcibly stopping sandbox \"b1d44cb7b0f8415b35ce49657bf37c2e1caa0957ed30a7b0f07cb637123005e7\"" May 17 00:15:36.321602 containerd[1479]: 2025-05-17 00:15:36.230 [WARNING][5379] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="b1d44cb7b0f8415b35ce49657bf37c2e1caa0957ed30a7b0f07cb637123005e7" WorkloadEndpoint="ci--4081--3--3--n--16326e39d6-k8s-whisker--66cc8f5467--bmsjq-eth0" May 17 00:15:36.321602 containerd[1479]: 2025-05-17 00:15:36.230 [INFO][5379] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b1d44cb7b0f8415b35ce49657bf37c2e1caa0957ed30a7b0f07cb637123005e7" May 17 00:15:36.321602 containerd[1479]: 2025-05-17 00:15:36.230 [INFO][5379] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b1d44cb7b0f8415b35ce49657bf37c2e1caa0957ed30a7b0f07cb637123005e7" iface="eth0" netns="" May 17 00:15:36.321602 containerd[1479]: 2025-05-17 00:15:36.230 [INFO][5379] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b1d44cb7b0f8415b35ce49657bf37c2e1caa0957ed30a7b0f07cb637123005e7" May 17 00:15:36.321602 containerd[1479]: 2025-05-17 00:15:36.230 [INFO][5379] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b1d44cb7b0f8415b35ce49657bf37c2e1caa0957ed30a7b0f07cb637123005e7" May 17 00:15:36.321602 containerd[1479]: 2025-05-17 00:15:36.279 [INFO][5386] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b1d44cb7b0f8415b35ce49657bf37c2e1caa0957ed30a7b0f07cb637123005e7" HandleID="k8s-pod-network.b1d44cb7b0f8415b35ce49657bf37c2e1caa0957ed30a7b0f07cb637123005e7" Workload="ci--4081--3--3--n--16326e39d6-k8s-whisker--66cc8f5467--bmsjq-eth0" May 17 00:15:36.321602 containerd[1479]: 2025-05-17 00:15:36.280 [INFO][5386] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:15:36.321602 containerd[1479]: 2025-05-17 00:15:36.280 [INFO][5386] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:15:36.321602 containerd[1479]: 2025-05-17 00:15:36.315 [WARNING][5386] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b1d44cb7b0f8415b35ce49657bf37c2e1caa0957ed30a7b0f07cb637123005e7" HandleID="k8s-pod-network.b1d44cb7b0f8415b35ce49657bf37c2e1caa0957ed30a7b0f07cb637123005e7" Workload="ci--4081--3--3--n--16326e39d6-k8s-whisker--66cc8f5467--bmsjq-eth0" May 17 00:15:36.321602 containerd[1479]: 2025-05-17 00:15:36.315 [INFO][5386] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b1d44cb7b0f8415b35ce49657bf37c2e1caa0957ed30a7b0f07cb637123005e7" HandleID="k8s-pod-network.b1d44cb7b0f8415b35ce49657bf37c2e1caa0957ed30a7b0f07cb637123005e7" Workload="ci--4081--3--3--n--16326e39d6-k8s-whisker--66cc8f5467--bmsjq-eth0" May 17 00:15:36.321602 containerd[1479]: 2025-05-17 00:15:36.318 [INFO][5386] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:15:36.321602 containerd[1479]: 2025-05-17 00:15:36.319 [INFO][5379] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b1d44cb7b0f8415b35ce49657bf37c2e1caa0957ed30a7b0f07cb637123005e7" May 17 00:15:36.321991 containerd[1479]: time="2025-05-17T00:15:36.321643853Z" level=info msg="TearDown network for sandbox \"b1d44cb7b0f8415b35ce49657bf37c2e1caa0957ed30a7b0f07cb637123005e7\" successfully" May 17 00:15:36.335577 containerd[1479]: time="2025-05-17T00:15:36.335234018Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b1d44cb7b0f8415b35ce49657bf37c2e1caa0957ed30a7b0f07cb637123005e7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:15:36.335577 containerd[1479]: time="2025-05-17T00:15:36.335313138Z" level=info msg="RemovePodSandbox \"b1d44cb7b0f8415b35ce49657bf37c2e1caa0957ed30a7b0f07cb637123005e7\" returns successfully" May 17 00:15:36.337775 containerd[1479]: time="2025-05-17T00:15:36.337733306Z" level=info msg="StopPodSandbox for \"d6ee3cf3c0bb435fb182b801f62a700390d3c108bd9f34a31fb8c6281effdb1e\"" May 17 00:15:36.461807 containerd[1479]: 2025-05-17 00:15:36.420 [WARNING][5400] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d6ee3cf3c0bb435fb182b801f62a700390d3c108bd9f34a31fb8c6281effdb1e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--16326e39d6-k8s-coredns--668d6bf9bc--wpn2m-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"a7794f9e-6b8e-4656-8525-16c2f94584b5", ResourceVersion:"1022", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 14, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-16326e39d6", ContainerID:"1b9ebce1d4f64c5bb542d7b4ebe5c1e46006752cbfc0131f80223653ddb20a5a", Pod:"coredns-668d6bf9bc-wpn2m", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.81.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib63bbd4e82a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:15:36.461807 containerd[1479]: 2025-05-17 00:15:36.420 [INFO][5400] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d6ee3cf3c0bb435fb182b801f62a700390d3c108bd9f34a31fb8c6281effdb1e" May 17 00:15:36.461807 containerd[1479]: 2025-05-17 00:15:36.421 [INFO][5400] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d6ee3cf3c0bb435fb182b801f62a700390d3c108bd9f34a31fb8c6281effdb1e" iface="eth0" netns="" May 17 00:15:36.461807 containerd[1479]: 2025-05-17 00:15:36.421 [INFO][5400] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d6ee3cf3c0bb435fb182b801f62a700390d3c108bd9f34a31fb8c6281effdb1e" May 17 00:15:36.461807 containerd[1479]: 2025-05-17 00:15:36.421 [INFO][5400] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d6ee3cf3c0bb435fb182b801f62a700390d3c108bd9f34a31fb8c6281effdb1e" May 17 00:15:36.461807 containerd[1479]: 2025-05-17 00:15:36.446 [INFO][5407] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d6ee3cf3c0bb435fb182b801f62a700390d3c108bd9f34a31fb8c6281effdb1e" HandleID="k8s-pod-network.d6ee3cf3c0bb435fb182b801f62a700390d3c108bd9f34a31fb8c6281effdb1e" Workload="ci--4081--3--3--n--16326e39d6-k8s-coredns--668d6bf9bc--wpn2m-eth0" May 17 00:15:36.461807 containerd[1479]: 2025-05-17 00:15:36.446 [INFO][5407] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:15:36.461807 containerd[1479]: 2025-05-17 00:15:36.446 [INFO][5407] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:15:36.461807 containerd[1479]: 2025-05-17 00:15:36.456 [WARNING][5407] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d6ee3cf3c0bb435fb182b801f62a700390d3c108bd9f34a31fb8c6281effdb1e" HandleID="k8s-pod-network.d6ee3cf3c0bb435fb182b801f62a700390d3c108bd9f34a31fb8c6281effdb1e" Workload="ci--4081--3--3--n--16326e39d6-k8s-coredns--668d6bf9bc--wpn2m-eth0" May 17 00:15:36.461807 containerd[1479]: 2025-05-17 00:15:36.456 [INFO][5407] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d6ee3cf3c0bb435fb182b801f62a700390d3c108bd9f34a31fb8c6281effdb1e" HandleID="k8s-pod-network.d6ee3cf3c0bb435fb182b801f62a700390d3c108bd9f34a31fb8c6281effdb1e" Workload="ci--4081--3--3--n--16326e39d6-k8s-coredns--668d6bf9bc--wpn2m-eth0" May 17 00:15:36.461807 containerd[1479]: 2025-05-17 00:15:36.458 [INFO][5407] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:15:36.461807 containerd[1479]: 2025-05-17 00:15:36.459 [INFO][5400] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d6ee3cf3c0bb435fb182b801f62a700390d3c108bd9f34a31fb8c6281effdb1e" May 17 00:15:36.461807 containerd[1479]: time="2025-05-17T00:15:36.461306714Z" level=info msg="TearDown network for sandbox \"d6ee3cf3c0bb435fb182b801f62a700390d3c108bd9f34a31fb8c6281effdb1e\" successfully" May 17 00:15:36.461807 containerd[1479]: time="2025-05-17T00:15:36.461333914Z" level=info msg="StopPodSandbox for \"d6ee3cf3c0bb435fb182b801f62a700390d3c108bd9f34a31fb8c6281effdb1e\" returns successfully" May 17 00:15:36.462253 containerd[1479]: time="2025-05-17T00:15:36.461862036Z" level=info msg="RemovePodSandbox for \"d6ee3cf3c0bb435fb182b801f62a700390d3c108bd9f34a31fb8c6281effdb1e\"" May 17 00:15:36.462253 containerd[1479]: time="2025-05-17T00:15:36.461894916Z" level=info msg="Forcibly stopping sandbox \"d6ee3cf3c0bb435fb182b801f62a700390d3c108bd9f34a31fb8c6281effdb1e\"" May 17 00:15:36.556347 containerd[1479]: 2025-05-17 00:15:36.508 [WARNING][5422] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d6ee3cf3c0bb435fb182b801f62a700390d3c108bd9f34a31fb8c6281effdb1e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--16326e39d6-k8s-coredns--668d6bf9bc--wpn2m-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"a7794f9e-6b8e-4656-8525-16c2f94584b5", ResourceVersion:"1022", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 14, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-16326e39d6", ContainerID:"1b9ebce1d4f64c5bb542d7b4ebe5c1e46006752cbfc0131f80223653ddb20a5a", Pod:"coredns-668d6bf9bc-wpn2m", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.81.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib63bbd4e82a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:15:36.556347 containerd[1479]: 2025-05-17 00:15:36.509 [INFO][5422] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d6ee3cf3c0bb435fb182b801f62a700390d3c108bd9f34a31fb8c6281effdb1e" May 17 00:15:36.556347 containerd[1479]: 2025-05-17 00:15:36.509 [INFO][5422] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d6ee3cf3c0bb435fb182b801f62a700390d3c108bd9f34a31fb8c6281effdb1e" iface="eth0" netns="" May 17 00:15:36.556347 containerd[1479]: 2025-05-17 00:15:36.509 [INFO][5422] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d6ee3cf3c0bb435fb182b801f62a700390d3c108bd9f34a31fb8c6281effdb1e" May 17 00:15:36.556347 containerd[1479]: 2025-05-17 00:15:36.509 [INFO][5422] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d6ee3cf3c0bb435fb182b801f62a700390d3c108bd9f34a31fb8c6281effdb1e" May 17 00:15:36.556347 containerd[1479]: 2025-05-17 00:15:36.535 [INFO][5429] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d6ee3cf3c0bb435fb182b801f62a700390d3c108bd9f34a31fb8c6281effdb1e" HandleID="k8s-pod-network.d6ee3cf3c0bb435fb182b801f62a700390d3c108bd9f34a31fb8c6281effdb1e" Workload="ci--4081--3--3--n--16326e39d6-k8s-coredns--668d6bf9bc--wpn2m-eth0" May 17 00:15:36.556347 containerd[1479]: 2025-05-17 00:15:36.536 [INFO][5429] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:15:36.556347 containerd[1479]: 2025-05-17 00:15:36.536 [INFO][5429] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:15:36.556347 containerd[1479]: 2025-05-17 00:15:36.549 [WARNING][5429] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d6ee3cf3c0bb435fb182b801f62a700390d3c108bd9f34a31fb8c6281effdb1e" HandleID="k8s-pod-network.d6ee3cf3c0bb435fb182b801f62a700390d3c108bd9f34a31fb8c6281effdb1e" Workload="ci--4081--3--3--n--16326e39d6-k8s-coredns--668d6bf9bc--wpn2m-eth0" May 17 00:15:36.556347 containerd[1479]: 2025-05-17 00:15:36.549 [INFO][5429] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d6ee3cf3c0bb435fb182b801f62a700390d3c108bd9f34a31fb8c6281effdb1e" HandleID="k8s-pod-network.d6ee3cf3c0bb435fb182b801f62a700390d3c108bd9f34a31fb8c6281effdb1e" Workload="ci--4081--3--3--n--16326e39d6-k8s-coredns--668d6bf9bc--wpn2m-eth0" May 17 00:15:36.556347 containerd[1479]: 2025-05-17 00:15:36.552 [INFO][5429] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:15:36.556347 containerd[1479]: 2025-05-17 00:15:36.553 [INFO][5422] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d6ee3cf3c0bb435fb182b801f62a700390d3c108bd9f34a31fb8c6281effdb1e" May 17 00:15:36.556347 containerd[1479]: time="2025-05-17T00:15:36.555862667Z" level=info msg="TearDown network for sandbox \"d6ee3cf3c0bb435fb182b801f62a700390d3c108bd9f34a31fb8c6281effdb1e\" successfully" May 17 00:15:36.561894 containerd[1479]: time="2025-05-17T00:15:36.561769526Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d6ee3cf3c0bb435fb182b801f62a700390d3c108bd9f34a31fb8c6281effdb1e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:15:36.562064 containerd[1479]: time="2025-05-17T00:15:36.561912087Z" level=info msg="RemovePodSandbox \"d6ee3cf3c0bb435fb182b801f62a700390d3c108bd9f34a31fb8c6281effdb1e\" returns successfully" May 17 00:15:36.563062 containerd[1479]: time="2025-05-17T00:15:36.563008330Z" level=info msg="StopPodSandbox for \"d251f229c257c70095e861e756d911bcce40108ac5970cf7024f69fcafa22ab9\"" May 17 00:15:36.670574 containerd[1479]: 2025-05-17 00:15:36.616 [WARNING][5443] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d251f229c257c70095e861e756d911bcce40108ac5970cf7024f69fcafa22ab9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--16326e39d6-k8s-csi--node--driver--fhb48-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"bf41328e-d1ed-475d-9a4a-c70bc9451b6f", ResourceVersion:"1078", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 14, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78f6f74485", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-16326e39d6", ContainerID:"0f59d6a32b1795c619c86341b39083c431c0139c7b1909c984da53702643edd9", Pod:"csi-node-driver-fhb48", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.81.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliee3a0fc1d4d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:15:36.670574 containerd[1479]: 2025-05-17 00:15:36.616 [INFO][5443] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d251f229c257c70095e861e756d911bcce40108ac5970cf7024f69fcafa22ab9" May 17 00:15:36.670574 containerd[1479]: 2025-05-17 00:15:36.616 [INFO][5443] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d251f229c257c70095e861e756d911bcce40108ac5970cf7024f69fcafa22ab9" iface="eth0" netns="" May 17 00:15:36.670574 containerd[1479]: 2025-05-17 00:15:36.616 [INFO][5443] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d251f229c257c70095e861e756d911bcce40108ac5970cf7024f69fcafa22ab9" May 17 00:15:36.670574 containerd[1479]: 2025-05-17 00:15:36.616 [INFO][5443] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d251f229c257c70095e861e756d911bcce40108ac5970cf7024f69fcafa22ab9" May 17 00:15:36.670574 containerd[1479]: 2025-05-17 00:15:36.648 [INFO][5450] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d251f229c257c70095e861e756d911bcce40108ac5970cf7024f69fcafa22ab9" HandleID="k8s-pod-network.d251f229c257c70095e861e756d911bcce40108ac5970cf7024f69fcafa22ab9" Workload="ci--4081--3--3--n--16326e39d6-k8s-csi--node--driver--fhb48-eth0" May 17 00:15:36.670574 containerd[1479]: 2025-05-17 00:15:36.648 [INFO][5450] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:15:36.670574 containerd[1479]: 2025-05-17 00:15:36.648 [INFO][5450] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:15:36.670574 containerd[1479]: 2025-05-17 00:15:36.663 [WARNING][5450] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d251f229c257c70095e861e756d911bcce40108ac5970cf7024f69fcafa22ab9" HandleID="k8s-pod-network.d251f229c257c70095e861e756d911bcce40108ac5970cf7024f69fcafa22ab9" Workload="ci--4081--3--3--n--16326e39d6-k8s-csi--node--driver--fhb48-eth0" May 17 00:15:36.670574 containerd[1479]: 2025-05-17 00:15:36.663 [INFO][5450] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d251f229c257c70095e861e756d911bcce40108ac5970cf7024f69fcafa22ab9" HandleID="k8s-pod-network.d251f229c257c70095e861e756d911bcce40108ac5970cf7024f69fcafa22ab9" Workload="ci--4081--3--3--n--16326e39d6-k8s-csi--node--driver--fhb48-eth0" May 17 00:15:36.670574 containerd[1479]: 2025-05-17 00:15:36.665 [INFO][5450] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:15:36.670574 containerd[1479]: 2025-05-17 00:15:36.667 [INFO][5443] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d251f229c257c70095e861e756d911bcce40108ac5970cf7024f69fcafa22ab9" May 17 00:15:36.671347 containerd[1479]: time="2025-05-17T00:15:36.670685966Z" level=info msg="TearDown network for sandbox \"d251f229c257c70095e861e756d911bcce40108ac5970cf7024f69fcafa22ab9\" successfully" May 17 00:15:36.671347 containerd[1479]: time="2025-05-17T00:15:36.670719126Z" level=info msg="StopPodSandbox for \"d251f229c257c70095e861e756d911bcce40108ac5970cf7024f69fcafa22ab9\" returns successfully" May 17 00:15:36.671347 containerd[1479]: time="2025-05-17T00:15:36.671231848Z" level=info msg="RemovePodSandbox for \"d251f229c257c70095e861e756d911bcce40108ac5970cf7024f69fcafa22ab9\"" May 17 00:15:36.671347 containerd[1479]: time="2025-05-17T00:15:36.671265248Z" level=info msg="Forcibly stopping sandbox \"d251f229c257c70095e861e756d911bcce40108ac5970cf7024f69fcafa22ab9\"" May 17 00:15:36.771138 containerd[1479]: 2025-05-17 00:15:36.719 [WARNING][5464] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d251f229c257c70095e861e756d911bcce40108ac5970cf7024f69fcafa22ab9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--16326e39d6-k8s-csi--node--driver--fhb48-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"bf41328e-d1ed-475d-9a4a-c70bc9451b6f", ResourceVersion:"1078", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 14, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78f6f74485", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-16326e39d6", ContainerID:"0f59d6a32b1795c619c86341b39083c431c0139c7b1909c984da53702643edd9", Pod:"csi-node-driver-fhb48", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.81.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliee3a0fc1d4d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:15:36.771138 containerd[1479]: 2025-05-17 00:15:36.720 [INFO][5464] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d251f229c257c70095e861e756d911bcce40108ac5970cf7024f69fcafa22ab9" May 17 00:15:36.771138 containerd[1479]: 2025-05-17 00:15:36.720 [INFO][5464] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d251f229c257c70095e861e756d911bcce40108ac5970cf7024f69fcafa22ab9" iface="eth0" netns="" May 17 00:15:36.771138 containerd[1479]: 2025-05-17 00:15:36.720 [INFO][5464] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d251f229c257c70095e861e756d911bcce40108ac5970cf7024f69fcafa22ab9" May 17 00:15:36.771138 containerd[1479]: 2025-05-17 00:15:36.720 [INFO][5464] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d251f229c257c70095e861e756d911bcce40108ac5970cf7024f69fcafa22ab9" May 17 00:15:36.771138 containerd[1479]: 2025-05-17 00:15:36.752 [INFO][5471] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d251f229c257c70095e861e756d911bcce40108ac5970cf7024f69fcafa22ab9" HandleID="k8s-pod-network.d251f229c257c70095e861e756d911bcce40108ac5970cf7024f69fcafa22ab9" Workload="ci--4081--3--3--n--16326e39d6-k8s-csi--node--driver--fhb48-eth0" May 17 00:15:36.771138 containerd[1479]: 2025-05-17 00:15:36.752 [INFO][5471] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:15:36.771138 containerd[1479]: 2025-05-17 00:15:36.753 [INFO][5471] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:15:36.771138 containerd[1479]: 2025-05-17 00:15:36.764 [WARNING][5471] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d251f229c257c70095e861e756d911bcce40108ac5970cf7024f69fcafa22ab9" HandleID="k8s-pod-network.d251f229c257c70095e861e756d911bcce40108ac5970cf7024f69fcafa22ab9" Workload="ci--4081--3--3--n--16326e39d6-k8s-csi--node--driver--fhb48-eth0" May 17 00:15:36.771138 containerd[1479]: 2025-05-17 00:15:36.765 [INFO][5471] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d251f229c257c70095e861e756d911bcce40108ac5970cf7024f69fcafa22ab9" HandleID="k8s-pod-network.d251f229c257c70095e861e756d911bcce40108ac5970cf7024f69fcafa22ab9" Workload="ci--4081--3--3--n--16326e39d6-k8s-csi--node--driver--fhb48-eth0" May 17 00:15:36.771138 containerd[1479]: 2025-05-17 00:15:36.767 [INFO][5471] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:15:36.771138 containerd[1479]: 2025-05-17 00:15:36.769 [INFO][5464] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d251f229c257c70095e861e756d911bcce40108ac5970cf7024f69fcafa22ab9" May 17 00:15:36.773810 containerd[1479]: time="2025-05-17T00:15:36.771190738Z" level=info msg="TearDown network for sandbox \"d251f229c257c70095e861e756d911bcce40108ac5970cf7024f69fcafa22ab9\" successfully" May 17 00:15:36.776107 containerd[1479]: time="2025-05-17T00:15:36.775911074Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d251f229c257c70095e861e756d911bcce40108ac5970cf7024f69fcafa22ab9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:15:36.776107 containerd[1479]: time="2025-05-17T00:15:36.775993274Z" level=info msg="RemovePodSandbox \"d251f229c257c70095e861e756d911bcce40108ac5970cf7024f69fcafa22ab9\" returns successfully" May 17 00:15:36.777414 containerd[1479]: time="2025-05-17T00:15:36.776962158Z" level=info msg="StopPodSandbox for \"86e052553ff0ac7699112bb45ed6da24f420b7a1edf650326c230579538e6418\"" May 17 00:15:36.884773 containerd[1479]: 2025-05-17 00:15:36.841 [WARNING][5485] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="86e052553ff0ac7699112bb45ed6da24f420b7a1edf650326c230579538e6418" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--16326e39d6-k8s-calico--apiserver--6cbf8c7948--hk4z6-eth0", GenerateName:"calico-apiserver-6cbf8c7948-", Namespace:"calico-apiserver", SelfLink:"", UID:"ceb2f628-4f33-47aa-8305-d46713261d40", ResourceVersion:"1037", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 14, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6cbf8c7948", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-16326e39d6", ContainerID:"7cc6e6922ca134e7502e3d1a4f281ce1d42835300cbb5aa236449d4e4645c4ae", Pod:"calico-apiserver-6cbf8c7948-hk4z6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.81.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1017b69ae60", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:15:36.884773 containerd[1479]: 2025-05-17 00:15:36.842 [INFO][5485] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="86e052553ff0ac7699112bb45ed6da24f420b7a1edf650326c230579538e6418" May 17 00:15:36.884773 containerd[1479]: 2025-05-17 00:15:36.842 [INFO][5485] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="86e052553ff0ac7699112bb45ed6da24f420b7a1edf650326c230579538e6418" iface="eth0" netns="" May 17 00:15:36.884773 containerd[1479]: 2025-05-17 00:15:36.842 [INFO][5485] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="86e052553ff0ac7699112bb45ed6da24f420b7a1edf650326c230579538e6418" May 17 00:15:36.884773 containerd[1479]: 2025-05-17 00:15:36.842 [INFO][5485] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="86e052553ff0ac7699112bb45ed6da24f420b7a1edf650326c230579538e6418" May 17 00:15:36.884773 containerd[1479]: 2025-05-17 00:15:36.866 [INFO][5498] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="86e052553ff0ac7699112bb45ed6da24f420b7a1edf650326c230579538e6418" HandleID="k8s-pod-network.86e052553ff0ac7699112bb45ed6da24f420b7a1edf650326c230579538e6418" Workload="ci--4081--3--3--n--16326e39d6-k8s-calico--apiserver--6cbf8c7948--hk4z6-eth0" May 17 00:15:36.884773 containerd[1479]: 2025-05-17 00:15:36.866 [INFO][5498] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:15:36.884773 containerd[1479]: 2025-05-17 00:15:36.866 [INFO][5498] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:15:36.884773 containerd[1479]: 2025-05-17 00:15:36.877 [WARNING][5498] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="86e052553ff0ac7699112bb45ed6da24f420b7a1edf650326c230579538e6418" HandleID="k8s-pod-network.86e052553ff0ac7699112bb45ed6da24f420b7a1edf650326c230579538e6418" Workload="ci--4081--3--3--n--16326e39d6-k8s-calico--apiserver--6cbf8c7948--hk4z6-eth0" May 17 00:15:36.884773 containerd[1479]: 2025-05-17 00:15:36.877 [INFO][5498] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="86e052553ff0ac7699112bb45ed6da24f420b7a1edf650326c230579538e6418" HandleID="k8s-pod-network.86e052553ff0ac7699112bb45ed6da24f420b7a1edf650326c230579538e6418" Workload="ci--4081--3--3--n--16326e39d6-k8s-calico--apiserver--6cbf8c7948--hk4z6-eth0" May 17 00:15:36.884773 containerd[1479]: 2025-05-17 00:15:36.879 [INFO][5498] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:15:36.884773 containerd[1479]: 2025-05-17 00:15:36.882 [INFO][5485] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="86e052553ff0ac7699112bb45ed6da24f420b7a1edf650326c230579538e6418" May 17 00:15:36.885823 containerd[1479]: time="2025-05-17T00:15:36.885548396Z" level=info msg="TearDown network for sandbox \"86e052553ff0ac7699112bb45ed6da24f420b7a1edf650326c230579538e6418\" successfully" May 17 00:15:36.885823 containerd[1479]: time="2025-05-17T00:15:36.885590557Z" level=info msg="StopPodSandbox for \"86e052553ff0ac7699112bb45ed6da24f420b7a1edf650326c230579538e6418\" returns successfully" May 17 00:15:36.886188 containerd[1479]: time="2025-05-17T00:15:36.886142918Z" level=info msg="RemovePodSandbox for \"86e052553ff0ac7699112bb45ed6da24f420b7a1edf650326c230579538e6418\"" May 17 00:15:36.886188 containerd[1479]: time="2025-05-17T00:15:36.886185598Z" level=info msg="Forcibly stopping sandbox \"86e052553ff0ac7699112bb45ed6da24f420b7a1edf650326c230579538e6418\"" May 17 00:15:37.003129 containerd[1479]: 2025-05-17 00:15:36.941 [WARNING][5512] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="86e052553ff0ac7699112bb45ed6da24f420b7a1edf650326c230579538e6418" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--16326e39d6-k8s-calico--apiserver--6cbf8c7948--hk4z6-eth0", GenerateName:"calico-apiserver-6cbf8c7948-", Namespace:"calico-apiserver", SelfLink:"", UID:"ceb2f628-4f33-47aa-8305-d46713261d40", ResourceVersion:"1037", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 14, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6cbf8c7948", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-16326e39d6", ContainerID:"7cc6e6922ca134e7502e3d1a4f281ce1d42835300cbb5aa236449d4e4645c4ae", Pod:"calico-apiserver-6cbf8c7948-hk4z6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.81.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1017b69ae60", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:15:37.003129 containerd[1479]: 2025-05-17 00:15:36.942 [INFO][5512] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="86e052553ff0ac7699112bb45ed6da24f420b7a1edf650326c230579538e6418" May 17 00:15:37.003129 containerd[1479]: 2025-05-17 00:15:36.943 [INFO][5512] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="86e052553ff0ac7699112bb45ed6da24f420b7a1edf650326c230579538e6418" iface="eth0" netns="" May 17 00:15:37.003129 containerd[1479]: 2025-05-17 00:15:36.943 [INFO][5512] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="86e052553ff0ac7699112bb45ed6da24f420b7a1edf650326c230579538e6418" May 17 00:15:37.003129 containerd[1479]: 2025-05-17 00:15:36.943 [INFO][5512] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="86e052553ff0ac7699112bb45ed6da24f420b7a1edf650326c230579538e6418" May 17 00:15:37.003129 containerd[1479]: 2025-05-17 00:15:36.984 [INFO][5519] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="86e052553ff0ac7699112bb45ed6da24f420b7a1edf650326c230579538e6418" HandleID="k8s-pod-network.86e052553ff0ac7699112bb45ed6da24f420b7a1edf650326c230579538e6418" Workload="ci--4081--3--3--n--16326e39d6-k8s-calico--apiserver--6cbf8c7948--hk4z6-eth0" May 17 00:15:37.003129 containerd[1479]: 2025-05-17 00:15:36.984 [INFO][5519] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:15:37.003129 containerd[1479]: 2025-05-17 00:15:36.985 [INFO][5519] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:15:37.003129 containerd[1479]: 2025-05-17 00:15:36.995 [WARNING][5519] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="86e052553ff0ac7699112bb45ed6da24f420b7a1edf650326c230579538e6418" HandleID="k8s-pod-network.86e052553ff0ac7699112bb45ed6da24f420b7a1edf650326c230579538e6418" Workload="ci--4081--3--3--n--16326e39d6-k8s-calico--apiserver--6cbf8c7948--hk4z6-eth0" May 17 00:15:37.003129 containerd[1479]: 2025-05-17 00:15:36.995 [INFO][5519] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="86e052553ff0ac7699112bb45ed6da24f420b7a1edf650326c230579538e6418" HandleID="k8s-pod-network.86e052553ff0ac7699112bb45ed6da24f420b7a1edf650326c230579538e6418" Workload="ci--4081--3--3--n--16326e39d6-k8s-calico--apiserver--6cbf8c7948--hk4z6-eth0" May 17 00:15:37.003129 containerd[1479]: 2025-05-17 00:15:36.998 [INFO][5519] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:15:37.003129 containerd[1479]: 2025-05-17 00:15:37.000 [INFO][5512] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="86e052553ff0ac7699112bb45ed6da24f420b7a1edf650326c230579538e6418" May 17 00:15:37.003129 containerd[1479]: time="2025-05-17T00:15:37.003003825Z" level=info msg="TearDown network for sandbox \"86e052553ff0ac7699112bb45ed6da24f420b7a1edf650326c230579538e6418\" successfully" May 17 00:15:37.012781 containerd[1479]: time="2025-05-17T00:15:37.011556732Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"86e052553ff0ac7699112bb45ed6da24f420b7a1edf650326c230579538e6418\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:15:37.012781 containerd[1479]: time="2025-05-17T00:15:37.011832493Z" level=info msg="RemovePodSandbox \"86e052553ff0ac7699112bb45ed6da24f420b7a1edf650326c230579538e6418\" returns successfully" May 17 00:15:37.013544 containerd[1479]: time="2025-05-17T00:15:37.013494539Z" level=info msg="StopPodSandbox for \"e92ed2d7510927885d11a6b83411292c7ac81114fe8b349ff6d28a3f59a3a94c\"" May 17 00:15:37.156951 containerd[1479]: 2025-05-17 00:15:37.077 [WARNING][5533] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e92ed2d7510927885d11a6b83411292c7ac81114fe8b349ff6d28a3f59a3a94c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--16326e39d6-k8s-goldmane--78d55f7ddc--fqppr-eth0", GenerateName:"goldmane-78d55f7ddc-", Namespace:"calico-system", SelfLink:"", UID:"a9b260fc-ff83-4de9-ac43-723c22c032c2", ResourceVersion:"1056", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 14, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"78d55f7ddc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-16326e39d6", ContainerID:"b051fe0d064e85464022dd0cddb643a4fb5e4989b89ee311a26f2f2a5e8c57ac", Pod:"goldmane-78d55f7ddc-fqppr", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.81.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali4b72b095a19", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:15:37.156951 containerd[1479]: 2025-05-17 00:15:37.078 [INFO][5533] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e92ed2d7510927885d11a6b83411292c7ac81114fe8b349ff6d28a3f59a3a94c" May 17 00:15:37.156951 containerd[1479]: 2025-05-17 00:15:37.078 [INFO][5533] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e92ed2d7510927885d11a6b83411292c7ac81114fe8b349ff6d28a3f59a3a94c" iface="eth0" netns="" May 17 00:15:37.156951 containerd[1479]: 2025-05-17 00:15:37.078 [INFO][5533] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e92ed2d7510927885d11a6b83411292c7ac81114fe8b349ff6d28a3f59a3a94c" May 17 00:15:37.156951 containerd[1479]: 2025-05-17 00:15:37.078 [INFO][5533] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e92ed2d7510927885d11a6b83411292c7ac81114fe8b349ff6d28a3f59a3a94c" May 17 00:15:37.156951 containerd[1479]: 2025-05-17 00:15:37.120 [INFO][5540] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e92ed2d7510927885d11a6b83411292c7ac81114fe8b349ff6d28a3f59a3a94c" HandleID="k8s-pod-network.e92ed2d7510927885d11a6b83411292c7ac81114fe8b349ff6d28a3f59a3a94c" Workload="ci--4081--3--3--n--16326e39d6-k8s-goldmane--78d55f7ddc--fqppr-eth0" May 17 00:15:37.156951 containerd[1479]: 2025-05-17 00:15:37.120 [INFO][5540] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:15:37.156951 containerd[1479]: 2025-05-17 00:15:37.120 [INFO][5540] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:15:37.156951 containerd[1479]: 2025-05-17 00:15:37.140 [WARNING][5540] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e92ed2d7510927885d11a6b83411292c7ac81114fe8b349ff6d28a3f59a3a94c" HandleID="k8s-pod-network.e92ed2d7510927885d11a6b83411292c7ac81114fe8b349ff6d28a3f59a3a94c" Workload="ci--4081--3--3--n--16326e39d6-k8s-goldmane--78d55f7ddc--fqppr-eth0" May 17 00:15:37.156951 containerd[1479]: 2025-05-17 00:15:37.140 [INFO][5540] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e92ed2d7510927885d11a6b83411292c7ac81114fe8b349ff6d28a3f59a3a94c" HandleID="k8s-pod-network.e92ed2d7510927885d11a6b83411292c7ac81114fe8b349ff6d28a3f59a3a94c" Workload="ci--4081--3--3--n--16326e39d6-k8s-goldmane--78d55f7ddc--fqppr-eth0" May 17 00:15:37.156951 containerd[1479]: 2025-05-17 00:15:37.146 [INFO][5540] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:15:37.156951 containerd[1479]: 2025-05-17 00:15:37.153 [INFO][5533] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e92ed2d7510927885d11a6b83411292c7ac81114fe8b349ff6d28a3f59a3a94c" May 17 00:15:37.159492 containerd[1479]: time="2025-05-17T00:15:37.156931926Z" level=info msg="TearDown network for sandbox \"e92ed2d7510927885d11a6b83411292c7ac81114fe8b349ff6d28a3f59a3a94c\" successfully" May 17 00:15:37.159492 containerd[1479]: time="2025-05-17T00:15:37.157833569Z" level=info msg="StopPodSandbox for \"e92ed2d7510927885d11a6b83411292c7ac81114fe8b349ff6d28a3f59a3a94c\" returns successfully" May 17 00:15:37.159492 containerd[1479]: time="2025-05-17T00:15:37.159009173Z" level=info msg="RemovePodSandbox for \"e92ed2d7510927885d11a6b83411292c7ac81114fe8b349ff6d28a3f59a3a94c\"" May 17 00:15:37.159492 containerd[1479]: time="2025-05-17T00:15:37.159092534Z" level=info msg="Forcibly stopping sandbox \"e92ed2d7510927885d11a6b83411292c7ac81114fe8b349ff6d28a3f59a3a94c\"" May 17 00:15:37.288345 containerd[1479]: 2025-05-17 00:15:37.235 [WARNING][5555] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e92ed2d7510927885d11a6b83411292c7ac81114fe8b349ff6d28a3f59a3a94c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--16326e39d6-k8s-goldmane--78d55f7ddc--fqppr-eth0", GenerateName:"goldmane-78d55f7ddc-", Namespace:"calico-system", SelfLink:"", UID:"a9b260fc-ff83-4de9-ac43-723c22c032c2", ResourceVersion:"1056", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 14, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"78d55f7ddc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-16326e39d6", ContainerID:"b051fe0d064e85464022dd0cddb643a4fb5e4989b89ee311a26f2f2a5e8c57ac", Pod:"goldmane-78d55f7ddc-fqppr", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.81.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali4b72b095a19", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:15:37.288345 containerd[1479]: 2025-05-17 00:15:37.236 [INFO][5555] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e92ed2d7510927885d11a6b83411292c7ac81114fe8b349ff6d28a3f59a3a94c" May 17 00:15:37.288345 containerd[1479]: 2025-05-17 00:15:37.236 [INFO][5555] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e92ed2d7510927885d11a6b83411292c7ac81114fe8b349ff6d28a3f59a3a94c" iface="eth0" netns="" May 17 00:15:37.288345 containerd[1479]: 2025-05-17 00:15:37.236 [INFO][5555] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e92ed2d7510927885d11a6b83411292c7ac81114fe8b349ff6d28a3f59a3a94c" May 17 00:15:37.288345 containerd[1479]: 2025-05-17 00:15:37.236 [INFO][5555] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e92ed2d7510927885d11a6b83411292c7ac81114fe8b349ff6d28a3f59a3a94c" May 17 00:15:37.288345 containerd[1479]: 2025-05-17 00:15:37.266 [INFO][5563] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e92ed2d7510927885d11a6b83411292c7ac81114fe8b349ff6d28a3f59a3a94c" HandleID="k8s-pod-network.e92ed2d7510927885d11a6b83411292c7ac81114fe8b349ff6d28a3f59a3a94c" Workload="ci--4081--3--3--n--16326e39d6-k8s-goldmane--78d55f7ddc--fqppr-eth0" May 17 00:15:37.288345 containerd[1479]: 2025-05-17 00:15:37.266 [INFO][5563] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:15:37.288345 containerd[1479]: 2025-05-17 00:15:37.266 [INFO][5563] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:15:37.288345 containerd[1479]: 2025-05-17 00:15:37.280 [WARNING][5563] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e92ed2d7510927885d11a6b83411292c7ac81114fe8b349ff6d28a3f59a3a94c" HandleID="k8s-pod-network.e92ed2d7510927885d11a6b83411292c7ac81114fe8b349ff6d28a3f59a3a94c" Workload="ci--4081--3--3--n--16326e39d6-k8s-goldmane--78d55f7ddc--fqppr-eth0" May 17 00:15:37.288345 containerd[1479]: 2025-05-17 00:15:37.280 [INFO][5563] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e92ed2d7510927885d11a6b83411292c7ac81114fe8b349ff6d28a3f59a3a94c" HandleID="k8s-pod-network.e92ed2d7510927885d11a6b83411292c7ac81114fe8b349ff6d28a3f59a3a94c" Workload="ci--4081--3--3--n--16326e39d6-k8s-goldmane--78d55f7ddc--fqppr-eth0" May 17 00:15:37.288345 containerd[1479]: 2025-05-17 00:15:37.284 [INFO][5563] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:15:37.288345 containerd[1479]: 2025-05-17 00:15:37.286 [INFO][5555] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e92ed2d7510927885d11a6b83411292c7ac81114fe8b349ff6d28a3f59a3a94c" May 17 00:15:37.289281 containerd[1479]: time="2025-05-17T00:15:37.288419475Z" level=info msg="TearDown network for sandbox \"e92ed2d7510927885d11a6b83411292c7ac81114fe8b349ff6d28a3f59a3a94c\" successfully" May 17 00:15:37.293899 containerd[1479]: time="2025-05-17T00:15:37.293786613Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e92ed2d7510927885d11a6b83411292c7ac81114fe8b349ff6d28a3f59a3a94c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:15:37.294061 containerd[1479]: time="2025-05-17T00:15:37.293917493Z" level=info msg="RemovePodSandbox \"e92ed2d7510927885d11a6b83411292c7ac81114fe8b349ff6d28a3f59a3a94c\" returns successfully" May 17 00:15:37.295417 containerd[1479]: time="2025-05-17T00:15:37.295022977Z" level=info msg="StopPodSandbox for \"648017c36b137b985271eb435012508ebdfe0d83d22b71b5d13412539d19aac9\"" May 17 00:15:37.403637 containerd[1479]: 2025-05-17 00:15:37.347 [WARNING][5577] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="648017c36b137b985271eb435012508ebdfe0d83d22b71b5d13412539d19aac9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--16326e39d6-k8s-calico--kube--controllers--7d6599b8b4--52gm9-eth0", GenerateName:"calico-kube-controllers-7d6599b8b4-", Namespace:"calico-system", SelfLink:"", UID:"8165c08e-7c9f-40c3-8125-d662038241a2", ResourceVersion:"1059", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 14, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7d6599b8b4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-16326e39d6", ContainerID:"61bd9fa28d913b161bc4aa6136879719c72cd7b55de528afbf04ca387a9e7f49", Pod:"calico-kube-controllers-7d6599b8b4-52gm9", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.81.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif6a17f2ee49", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:15:37.403637 containerd[1479]: 2025-05-17 00:15:37.348 [INFO][5577] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="648017c36b137b985271eb435012508ebdfe0d83d22b71b5d13412539d19aac9" May 17 00:15:37.403637 containerd[1479]: 2025-05-17 00:15:37.348 [INFO][5577] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="648017c36b137b985271eb435012508ebdfe0d83d22b71b5d13412539d19aac9" iface="eth0" netns="" May 17 00:15:37.403637 containerd[1479]: 2025-05-17 00:15:37.348 [INFO][5577] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="648017c36b137b985271eb435012508ebdfe0d83d22b71b5d13412539d19aac9" May 17 00:15:37.403637 containerd[1479]: 2025-05-17 00:15:37.348 [INFO][5577] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="648017c36b137b985271eb435012508ebdfe0d83d22b71b5d13412539d19aac9" May 17 00:15:37.403637 containerd[1479]: 2025-05-17 00:15:37.379 [INFO][5584] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="648017c36b137b985271eb435012508ebdfe0d83d22b71b5d13412539d19aac9" HandleID="k8s-pod-network.648017c36b137b985271eb435012508ebdfe0d83d22b71b5d13412539d19aac9" Workload="ci--4081--3--3--n--16326e39d6-k8s-calico--kube--controllers--7d6599b8b4--52gm9-eth0" May 17 00:15:37.403637 containerd[1479]: 2025-05-17 00:15:37.379 [INFO][5584] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:15:37.403637 containerd[1479]: 2025-05-17 00:15:37.379 [INFO][5584] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:15:37.403637 containerd[1479]: 2025-05-17 00:15:37.395 [WARNING][5584] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="648017c36b137b985271eb435012508ebdfe0d83d22b71b5d13412539d19aac9" HandleID="k8s-pod-network.648017c36b137b985271eb435012508ebdfe0d83d22b71b5d13412539d19aac9" Workload="ci--4081--3--3--n--16326e39d6-k8s-calico--kube--controllers--7d6599b8b4--52gm9-eth0" May 17 00:15:37.403637 containerd[1479]: 2025-05-17 00:15:37.396 [INFO][5584] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="648017c36b137b985271eb435012508ebdfe0d83d22b71b5d13412539d19aac9" HandleID="k8s-pod-network.648017c36b137b985271eb435012508ebdfe0d83d22b71b5d13412539d19aac9" Workload="ci--4081--3--3--n--16326e39d6-k8s-calico--kube--controllers--7d6599b8b4--52gm9-eth0" May 17 00:15:37.403637 containerd[1479]: 2025-05-17 00:15:37.399 [INFO][5584] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:15:37.403637 containerd[1479]: 2025-05-17 00:15:37.401 [INFO][5577] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="648017c36b137b985271eb435012508ebdfe0d83d22b71b5d13412539d19aac9" May 17 00:15:37.404201 containerd[1479]: time="2025-05-17T00:15:37.403664811Z" level=info msg="TearDown network for sandbox \"648017c36b137b985271eb435012508ebdfe0d83d22b71b5d13412539d19aac9\" successfully" May 17 00:15:37.404201 containerd[1479]: time="2025-05-17T00:15:37.403696971Z" level=info msg="StopPodSandbox for \"648017c36b137b985271eb435012508ebdfe0d83d22b71b5d13412539d19aac9\" returns successfully" May 17 00:15:37.406110 containerd[1479]: time="2025-05-17T00:15:37.406052299Z" level=info msg="RemovePodSandbox for \"648017c36b137b985271eb435012508ebdfe0d83d22b71b5d13412539d19aac9\"" May 17 00:15:37.406110 containerd[1479]: time="2025-05-17T00:15:37.406111179Z" level=info msg="Forcibly stopping sandbox \"648017c36b137b985271eb435012508ebdfe0d83d22b71b5d13412539d19aac9\"" May 17 00:15:37.528522 containerd[1479]: 2025-05-17 00:15:37.464 [WARNING][5598] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="648017c36b137b985271eb435012508ebdfe0d83d22b71b5d13412539d19aac9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--16326e39d6-k8s-calico--kube--controllers--7d6599b8b4--52gm9-eth0", GenerateName:"calico-kube-controllers-7d6599b8b4-", Namespace:"calico-system", SelfLink:"", UID:"8165c08e-7c9f-40c3-8125-d662038241a2", ResourceVersion:"1059", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 14, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7d6599b8b4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-16326e39d6", ContainerID:"61bd9fa28d913b161bc4aa6136879719c72cd7b55de528afbf04ca387a9e7f49", Pod:"calico-kube-controllers-7d6599b8b4-52gm9", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.81.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif6a17f2ee49", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:15:37.528522 containerd[1479]: 2025-05-17 00:15:37.464 [INFO][5598] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="648017c36b137b985271eb435012508ebdfe0d83d22b71b5d13412539d19aac9" May 17 00:15:37.528522 containerd[1479]: 2025-05-17 00:15:37.465 [INFO][5598] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="648017c36b137b985271eb435012508ebdfe0d83d22b71b5d13412539d19aac9" iface="eth0" netns="" May 17 00:15:37.528522 containerd[1479]: 2025-05-17 00:15:37.465 [INFO][5598] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="648017c36b137b985271eb435012508ebdfe0d83d22b71b5d13412539d19aac9" May 17 00:15:37.528522 containerd[1479]: 2025-05-17 00:15:37.465 [INFO][5598] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="648017c36b137b985271eb435012508ebdfe0d83d22b71b5d13412539d19aac9" May 17 00:15:37.528522 containerd[1479]: 2025-05-17 00:15:37.503 [INFO][5606] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="648017c36b137b985271eb435012508ebdfe0d83d22b71b5d13412539d19aac9" HandleID="k8s-pod-network.648017c36b137b985271eb435012508ebdfe0d83d22b71b5d13412539d19aac9" Workload="ci--4081--3--3--n--16326e39d6-k8s-calico--kube--controllers--7d6599b8b4--52gm9-eth0" May 17 00:15:37.528522 containerd[1479]: 2025-05-17 00:15:37.504 [INFO][5606] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:15:37.528522 containerd[1479]: 2025-05-17 00:15:37.504 [INFO][5606] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:15:37.528522 containerd[1479]: 2025-05-17 00:15:37.517 [WARNING][5606] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="648017c36b137b985271eb435012508ebdfe0d83d22b71b5d13412539d19aac9" HandleID="k8s-pod-network.648017c36b137b985271eb435012508ebdfe0d83d22b71b5d13412539d19aac9" Workload="ci--4081--3--3--n--16326e39d6-k8s-calico--kube--controllers--7d6599b8b4--52gm9-eth0" May 17 00:15:37.528522 containerd[1479]: 2025-05-17 00:15:37.517 [INFO][5606] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="648017c36b137b985271eb435012508ebdfe0d83d22b71b5d13412539d19aac9" HandleID="k8s-pod-network.648017c36b137b985271eb435012508ebdfe0d83d22b71b5d13412539d19aac9" Workload="ci--4081--3--3--n--16326e39d6-k8s-calico--kube--controllers--7d6599b8b4--52gm9-eth0" May 17 00:15:37.528522 containerd[1479]: 2025-05-17 00:15:37.521 [INFO][5606] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:15:37.528522 containerd[1479]: 2025-05-17 00:15:37.523 [INFO][5598] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="648017c36b137b985271eb435012508ebdfe0d83d22b71b5d13412539d19aac9" May 17 00:15:37.528522 containerd[1479]: time="2025-05-17T00:15:37.526920013Z" level=info msg="TearDown network for sandbox \"648017c36b137b985271eb435012508ebdfe0d83d22b71b5d13412539d19aac9\" successfully" May 17 00:15:37.531035 containerd[1479]: time="2025-05-17T00:15:37.530975746Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"648017c36b137b985271eb435012508ebdfe0d83d22b71b5d13412539d19aac9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:15:37.531185 containerd[1479]: time="2025-05-17T00:15:37.531100067Z" level=info msg="RemovePodSandbox \"648017c36b137b985271eb435012508ebdfe0d83d22b71b5d13412539d19aac9\" returns successfully" May 17 00:15:37.532040 containerd[1479]: time="2025-05-17T00:15:37.531817589Z" level=info msg="StopPodSandbox for \"37af3809ae3014497a3320c103f9e25a98510eb43ca13d2b6a2a510153c9264a\"" May 17 00:15:37.659586 containerd[1479]: 2025-05-17 00:15:37.597 [WARNING][5620] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="37af3809ae3014497a3320c103f9e25a98510eb43ca13d2b6a2a510153c9264a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--16326e39d6-k8s-coredns--668d6bf9bc--n4blv-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"b30848da-8b46-4cdc-baaa-f3567b6377c3", ResourceVersion:"978", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 14, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-16326e39d6", ContainerID:"f14f985d83d1befe7413123e16ac001ac95bbe341afb9adba5418bf9c4344204", Pod:"coredns-668d6bf9bc-n4blv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.81.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali53a1da8fa0f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:15:37.659586 containerd[1479]: 2025-05-17 00:15:37.597 [INFO][5620] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="37af3809ae3014497a3320c103f9e25a98510eb43ca13d2b6a2a510153c9264a" May 17 00:15:37.659586 containerd[1479]: 2025-05-17 00:15:37.597 [INFO][5620] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="37af3809ae3014497a3320c103f9e25a98510eb43ca13d2b6a2a510153c9264a" iface="eth0" netns="" May 17 00:15:37.659586 containerd[1479]: 2025-05-17 00:15:37.597 [INFO][5620] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="37af3809ae3014497a3320c103f9e25a98510eb43ca13d2b6a2a510153c9264a" May 17 00:15:37.659586 containerd[1479]: 2025-05-17 00:15:37.597 [INFO][5620] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="37af3809ae3014497a3320c103f9e25a98510eb43ca13d2b6a2a510153c9264a" May 17 00:15:37.659586 containerd[1479]: 2025-05-17 00:15:37.629 [INFO][5627] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="37af3809ae3014497a3320c103f9e25a98510eb43ca13d2b6a2a510153c9264a" HandleID="k8s-pod-network.37af3809ae3014497a3320c103f9e25a98510eb43ca13d2b6a2a510153c9264a" Workload="ci--4081--3--3--n--16326e39d6-k8s-coredns--668d6bf9bc--n4blv-eth0" May 17 00:15:37.659586 containerd[1479]: 2025-05-17 00:15:37.630 [INFO][5627] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:15:37.659586 containerd[1479]: 2025-05-17 00:15:37.630 [INFO][5627] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:15:37.659586 containerd[1479]: 2025-05-17 00:15:37.648 [WARNING][5627] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="37af3809ae3014497a3320c103f9e25a98510eb43ca13d2b6a2a510153c9264a" HandleID="k8s-pod-network.37af3809ae3014497a3320c103f9e25a98510eb43ca13d2b6a2a510153c9264a" Workload="ci--4081--3--3--n--16326e39d6-k8s-coredns--668d6bf9bc--n4blv-eth0" May 17 00:15:37.659586 containerd[1479]: 2025-05-17 00:15:37.648 [INFO][5627] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="37af3809ae3014497a3320c103f9e25a98510eb43ca13d2b6a2a510153c9264a" HandleID="k8s-pod-network.37af3809ae3014497a3320c103f9e25a98510eb43ca13d2b6a2a510153c9264a" Workload="ci--4081--3--3--n--16326e39d6-k8s-coredns--668d6bf9bc--n4blv-eth0" May 17 00:15:37.659586 containerd[1479]: 2025-05-17 00:15:37.655 [INFO][5627] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:15:37.659586 containerd[1479]: 2025-05-17 00:15:37.657 [INFO][5620] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="37af3809ae3014497a3320c103f9e25a98510eb43ca13d2b6a2a510153c9264a" May 17 00:15:37.661914 containerd[1479]: time="2025-05-17T00:15:37.659631846Z" level=info msg="TearDown network for sandbox \"37af3809ae3014497a3320c103f9e25a98510eb43ca13d2b6a2a510153c9264a\" successfully" May 17 00:15:37.661914 containerd[1479]: time="2025-05-17T00:15:37.659660326Z" level=info msg="StopPodSandbox for \"37af3809ae3014497a3320c103f9e25a98510eb43ca13d2b6a2a510153c9264a\" returns successfully" May 17 00:15:37.661914 containerd[1479]: time="2025-05-17T00:15:37.660306728Z" level=info msg="RemovePodSandbox for \"37af3809ae3014497a3320c103f9e25a98510eb43ca13d2b6a2a510153c9264a\"" May 17 00:15:37.661914 containerd[1479]: time="2025-05-17T00:15:37.660339888Z" level=info msg="Forcibly stopping sandbox \"37af3809ae3014497a3320c103f9e25a98510eb43ca13d2b6a2a510153c9264a\"" May 17 00:15:37.762280 containerd[1479]: 2025-05-17 00:15:37.712 [WARNING][5641] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="37af3809ae3014497a3320c103f9e25a98510eb43ca13d2b6a2a510153c9264a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--16326e39d6-k8s-coredns--668d6bf9bc--n4blv-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"b30848da-8b46-4cdc-baaa-f3567b6377c3", ResourceVersion:"978", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 14, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-16326e39d6", ContainerID:"f14f985d83d1befe7413123e16ac001ac95bbe341afb9adba5418bf9c4344204", Pod:"coredns-668d6bf9bc-n4blv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.81.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali53a1da8fa0f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:15:37.762280 containerd[1479]: 2025-05-17 00:15:37.712 [INFO][5641] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="37af3809ae3014497a3320c103f9e25a98510eb43ca13d2b6a2a510153c9264a" May 17 00:15:37.762280 containerd[1479]: 2025-05-17 00:15:37.712 [INFO][5641] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="37af3809ae3014497a3320c103f9e25a98510eb43ca13d2b6a2a510153c9264a" iface="eth0" netns="" May 17 00:15:37.762280 containerd[1479]: 2025-05-17 00:15:37.712 [INFO][5641] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="37af3809ae3014497a3320c103f9e25a98510eb43ca13d2b6a2a510153c9264a" May 17 00:15:37.762280 containerd[1479]: 2025-05-17 00:15:37.712 [INFO][5641] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="37af3809ae3014497a3320c103f9e25a98510eb43ca13d2b6a2a510153c9264a" May 17 00:15:37.762280 containerd[1479]: 2025-05-17 00:15:37.737 [INFO][5649] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="37af3809ae3014497a3320c103f9e25a98510eb43ca13d2b6a2a510153c9264a" HandleID="k8s-pod-network.37af3809ae3014497a3320c103f9e25a98510eb43ca13d2b6a2a510153c9264a" Workload="ci--4081--3--3--n--16326e39d6-k8s-coredns--668d6bf9bc--n4blv-eth0" May 17 00:15:37.762280 containerd[1479]: 2025-05-17 00:15:37.737 [INFO][5649] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:15:37.762280 containerd[1479]: 2025-05-17 00:15:37.737 [INFO][5649] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:15:37.762280 containerd[1479]: 2025-05-17 00:15:37.751 [WARNING][5649] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="37af3809ae3014497a3320c103f9e25a98510eb43ca13d2b6a2a510153c9264a" HandleID="k8s-pod-network.37af3809ae3014497a3320c103f9e25a98510eb43ca13d2b6a2a510153c9264a" Workload="ci--4081--3--3--n--16326e39d6-k8s-coredns--668d6bf9bc--n4blv-eth0" May 17 00:15:37.762280 containerd[1479]: 2025-05-17 00:15:37.751 [INFO][5649] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="37af3809ae3014497a3320c103f9e25a98510eb43ca13d2b6a2a510153c9264a" HandleID="k8s-pod-network.37af3809ae3014497a3320c103f9e25a98510eb43ca13d2b6a2a510153c9264a" Workload="ci--4081--3--3--n--16326e39d6-k8s-coredns--668d6bf9bc--n4blv-eth0" May 17 00:15:37.762280 containerd[1479]: 2025-05-17 00:15:37.753 [INFO][5649] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:15:37.762280 containerd[1479]: 2025-05-17 00:15:37.760 [INFO][5641] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="37af3809ae3014497a3320c103f9e25a98510eb43ca13d2b6a2a510153c9264a" May 17 00:15:37.763017 containerd[1479]: time="2025-05-17T00:15:37.762316661Z" level=info msg="TearDown network for sandbox \"37af3809ae3014497a3320c103f9e25a98510eb43ca13d2b6a2a510153c9264a\" successfully" May 17 00:15:37.766826 containerd[1479]: time="2025-05-17T00:15:37.766752075Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"37af3809ae3014497a3320c103f9e25a98510eb43ca13d2b6a2a510153c9264a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:15:37.766826 containerd[1479]: time="2025-05-17T00:15:37.766833635Z" level=info msg="RemovePodSandbox \"37af3809ae3014497a3320c103f9e25a98510eb43ca13d2b6a2a510153c9264a\" returns successfully" May 17 00:15:44.756480 containerd[1479]: time="2025-05-17T00:15:44.755404173Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 17 00:15:44.774025 kubelet[2670]: I0517 00:15:44.773954 2670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-fhb48" podStartSLOduration=37.689526339 podStartE2EDuration="46.773934508s" podCreationTimestamp="2025-05-17 00:14:58 +0000 UTC" firstStartedPulling="2025-05-17 00:15:22.402952434 +0000 UTC m=+46.779179484" lastFinishedPulling="2025-05-17 00:15:31.487360603 +0000 UTC m=+55.863587653" observedRunningTime="2025-05-17 00:15:32.149654985 +0000 UTC m=+56.525882035" watchObservedRunningTime="2025-05-17 00:15:44.773934508 +0000 UTC m=+69.150161558" May 17 00:15:44.986845 containerd[1479]: time="2025-05-17T00:15:44.986550864Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:15:44.988287 containerd[1479]: time="2025-05-17T00:15:44.988097308Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:15:44.988287 containerd[1479]: 
time="2025-05-17T00:15:44.988242869Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 17 00:15:44.988813 kubelet[2670]: E0517 00:15:44.988652 2670 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:15:44.988813 kubelet[2670]: E0517 00:15:44.988712 2670 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:15:44.991374 kubelet[2670]: E0517 00:15:44.991246 2670 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vcg88,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-78d55f7ddc-fqppr_calico-system(a9b260fc-ff83-4de9-ac43-723c22c032c2): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:15:44.993827 kubelet[2670]: E0517 00:15:44.993190 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-fqppr" podUID="a9b260fc-ff83-4de9-ac43-723c22c032c2" May 17 00:15:45.705942 kubelet[2670]: I0517 00:15:45.705666 2670 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 00:15:47.757964 kubelet[2670]: E0517 00:15:47.757750 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-c64877bf5-xgbzp" podUID="ea4a179c-2064-482e-bd61-eeafaaf1f680" May 17 00:15:55.224009 systemd[1]: run-containerd-runc-k8s.io-ac27d7dab45509187bb6e9bd04117d39cfb8bb18c53082b1281bab95c3fc5614-runc.0vKbfG.mount: Deactivated successfully. 
May 17 00:15:58.757979 containerd[1479]: time="2025-05-17T00:15:58.757925078Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 17 00:15:58.759083 kubelet[2670]: E0517 00:15:58.758406 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-fqppr" podUID="a9b260fc-ff83-4de9-ac43-723c22c032c2" May 17 00:15:58.993550 containerd[1479]: time="2025-05-17T00:15:58.993292770Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:15:58.995978 containerd[1479]: time="2025-05-17T00:15:58.995813617Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:15:58.995978 containerd[1479]: time="2025-05-17T00:15:58.995941217Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 17 00:15:58.996409 kubelet[2670]: E0517 00:15:58.996338 2670 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:15:58.996409 kubelet[2670]: E0517 00:15:58.996405 2670 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:15:58.997000 kubelet[2670]: E0517 00:15:58.996528 2670 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:ecfce7dcd79642e9a67dfb965e76b411,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nf456,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-c64877bf5-xgbzp_calico-system(ea4a179c-2064-482e-bd61-eeafaaf1f680): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:15:59.002008 containerd[1479]: time="2025-05-17T00:15:59.001975353Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 17 00:15:59.247823 containerd[1479]: time="2025-05-17T00:15:59.247624346Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:15:59.249494 containerd[1479]: time="2025-05-17T00:15:59.249342070Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:15:59.249494 containerd[1479]: time="2025-05-17T00:15:59.249388750Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 17 00:15:59.249720 kubelet[2670]: E0517 00:15:59.249666 2670 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected 
status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:15:59.249823 kubelet[2670]: E0517 00:15:59.249724 2670 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:15:59.249947 kubelet[2670]: E0517 00:15:59.249843 2670 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nf456,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-c64877bf5-xgbzp_calico-system(ea4a179c-2064-482e-bd61-eeafaaf1f680): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:15:59.252216 kubelet[2670]: E0517 00:15:59.252143 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to 
fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-c64877bf5-xgbzp" podUID="ea4a179c-2064-482e-bd61-eeafaaf1f680" May 17 00:16:04.199833 kubelet[2670]: I0517 00:16:04.199187 2670 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 00:16:12.756224 containerd[1479]: time="2025-05-17T00:16:12.756149667Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 17 00:16:12.999189 containerd[1479]: time="2025-05-17T00:16:12.999092078Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:16:13.001440 containerd[1479]: time="2025-05-17T00:16:13.001379243Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:16:13.001629 containerd[1479]: time="2025-05-17T00:16:13.001510323Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 17 00:16:13.001939 kubelet[2670]: E0517 00:16:13.001888 2670 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:16:13.002500 kubelet[2670]: E0517 00:16:13.001945 2670 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:16:13.002500 kubelet[2670]: E0517 00:16:13.002079 2670 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vcg88,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-78d55f7ddc-fqppr_calico-system(a9b260fc-ff83-4de9-ac43-723c22c032c2): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:16:13.003763 kubelet[2670]: E0517 00:16:13.003575 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-fqppr" podUID="a9b260fc-ff83-4de9-ac43-723c22c032c2" May 17 00:16:13.764501 kubelet[2670]: E0517 00:16:13.764337 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-c64877bf5-xgbzp" podUID="ea4a179c-2064-482e-bd61-eeafaaf1f680" May 17 00:16:24.755734 kubelet[2670]: E0517 00:16:24.755610 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-fqppr" podUID="a9b260fc-ff83-4de9-ac43-723c22c032c2" May 17 00:16:27.758218 kubelet[2670]: E0517 00:16:27.758037 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-c64877bf5-xgbzp" podUID="ea4a179c-2064-482e-bd61-eeafaaf1f680" May 17 00:16:36.755801 kubelet[2670]: E0517 00:16:36.755525 2670 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-fqppr" podUID="a9b260fc-ff83-4de9-ac43-723c22c032c2" May 17 00:16:37.383603 update_engine[1458]: I20250517 00:16:37.383498 1458 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs May 17 00:16:37.385275 update_engine[1458]: I20250517 00:16:37.384147 1458 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs May 17 00:16:37.385275 update_engine[1458]: I20250517 00:16:37.384507 1458 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs May 17 00:16:37.385814 update_engine[1458]: I20250517 00:16:37.385735 1458 omaha_request_params.cc:62] Current group set to lts May 17 00:16:37.385866 update_engine[1458]: I20250517 00:16:37.385847 1458 update_attempter.cc:499] Already updated boot flags. Skipping. May 17 00:16:37.385866 update_engine[1458]: I20250517 00:16:37.385859 1458 update_attempter.cc:643] Scheduling an action processor start. May 17 00:16:37.385911 update_engine[1458]: I20250517 00:16:37.385879 1458 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction May 17 00:16:37.387472 update_engine[1458]: I20250517 00:16:37.387225 1458 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs May 17 00:16:37.387472 update_engine[1458]: I20250517 00:16:37.387339 1458 omaha_request_action.cc:271] Posting an Omaha request to disabled May 17 00:16:37.387472 update_engine[1458]: I20250517 00:16:37.387348 1458 omaha_request_action.cc:272] Request: May 17 00:16:37.387472 update_engine[1458]: May 17 00:16:37.387472 update_engine[1458]: May 17 00:16:37.387472 update_engine[1458]: May 17 00:16:37.387472 update_engine[1458]: May 17 00:16:37.387472 update_engine[1458]: May 17 00:16:37.387472 update_engine[1458]: May 17 00:16:37.387472 update_engine[1458]: May 17 00:16:37.387472 update_engine[1458]: May 17 00:16:37.387472 update_engine[1458]: I20250517 00:16:37.387356 1458 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 17 00:16:37.393060 locksmithd[1498]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 May 17 00:16:37.395458 update_engine[1458]: I20250517 00:16:37.395383 1458 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 17 00:16:37.396469 update_engine[1458]: I20250517 00:16:37.396407 1458 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
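[A minimal sketch, assuming Python 3. The Omaha endpoint in the update_engine entries above and below is the literal string "disabled"; on Flatcar this typically comes from SERVER=disabled in /etc/flatcar/update.conf (an assumption based on Flatcar's documented way to disable updates, not something this log states). "disabled" is not a resolvable hostname, so every transfer attempt fails the same way rather than indicating a network fault.]

import socket

try:
    # Attempt to resolve the configured "host" exactly as curl would.
    socket.getaddrinfo("disabled", 443)
    print("unexpectedly resolved (a local DNS search domain may interfere)")
except socket.gaierror as err:
    # Matches "Could not resolve host: disabled" in the entries that follow.
    print("resolution failed as expected:", err)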
May 17 00:16:37.397999 update_engine[1458]: E20250517 00:16:37.397926 1458 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 17 00:16:37.398151 update_engine[1458]: I20250517 00:16:37.398126 1458 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 May 17 00:16:41.758362 containerd[1479]: time="2025-05-17T00:16:41.756599069Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 17 00:16:41.973995 containerd[1479]: time="2025-05-17T00:16:41.973866960Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:16:41.976613 containerd[1479]: time="2025-05-17T00:16:41.976471526Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:16:41.976613 containerd[1479]: time="2025-05-17T00:16:41.976556886Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 17 00:16:41.977276 kubelet[2670]: E0517 00:16:41.976742 2670 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:16:41.977276 kubelet[2670]: E0517 00:16:41.976799 2670 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:16:41.977276 kubelet[2670]: E0517 00:16:41.976912 2670 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:ecfce7dcd79642e9a67dfb965e76b411,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nf456,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-c64877bf5-xgbzp_calico-system(ea4a179c-2064-482e-bd61-eeafaaf1f680): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:16:41.982876 containerd[1479]: time="2025-05-17T00:16:41.982732459Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 17 00:16:42.208762 containerd[1479]: time="2025-05-17T00:16:42.208416047Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:16:42.210075 containerd[1479]: time="2025-05-17T00:16:42.209813570Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:16:42.210075 containerd[1479]: time="2025-05-17T00:16:42.209869930Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 17 00:16:42.210967 kubelet[2670]: E0517 00:16:42.210594 2670 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected 
status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:16:42.210967 kubelet[2670]: E0517 00:16:42.210670 2670 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:16:42.210967 kubelet[2670]: E0517 00:16:42.210851 2670 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nf456,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-c64877bf5-xgbzp_calico-system(ea4a179c-2064-482e-bd61-eeafaaf1f680): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:16:42.212554 kubelet[2670]: E0517 00:16:42.212409 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to 
fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-c64877bf5-xgbzp" podUID="ea4a179c-2064-482e-bd61-eeafaaf1f680" May 17 00:16:47.327578 update_engine[1458]: I20250517 00:16:47.327476 1458 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 17 00:16:47.328078 update_engine[1458]: I20250517 00:16:47.327791 1458 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 17 00:16:47.328295 update_engine[1458]: I20250517 00:16:47.328224 1458 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. May 17 00:16:47.329179 update_engine[1458]: E20250517 00:16:47.329113 1458 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 17 00:16:47.329307 update_engine[1458]: I20250517 00:16:47.329200 1458 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 May 17 00:16:48.756549 kubelet[2670]: E0517 00:16:48.755767 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-fqppr" podUID="a9b260fc-ff83-4de9-ac43-723c22c032c2" May 17 00:16:54.761674 kubelet[2670]: E0517 00:16:54.760746 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-c64877bf5-xgbzp" podUID="ea4a179c-2064-482e-bd61-eeafaaf1f680" May 17 00:16:57.327272 update_engine[1458]: I20250517 00:16:57.327139 1458 libcurl_http_fetcher.cc:47] Starting/Resuming 
transfer May 17 00:16:57.327913 update_engine[1458]: I20250517 00:16:57.327555 1458 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 17 00:16:57.327985 update_engine[1458]: I20250517 00:16:57.327919 1458 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. May 17 00:16:57.329029 update_engine[1458]: E20250517 00:16:57.328953 1458 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 17 00:16:57.329129 update_engine[1458]: I20250517 00:16:57.329078 1458 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 May 17 00:17:02.755943 containerd[1479]: time="2025-05-17T00:17:02.755701797Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 17 00:17:03.016235 containerd[1479]: time="2025-05-17T00:17:03.015700755Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:17:03.017599 containerd[1479]: time="2025-05-17T00:17:03.017520959Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:17:03.017970 containerd[1479]: time="2025-05-17T00:17:03.017688599Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 17 00:17:03.018376 kubelet[2670]: E0517 00:17:03.018114 2670 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:17:03.018376 kubelet[2670]: E0517 00:17:03.018168 2670 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:17:03.018376 kubelet[2670]: E0517 00:17:03.018303 2670 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vcg88,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-78d55f7ddc-fqppr_calico-system(a9b260fc-ff83-4de9-ac43-723c22c032c2): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:17:03.019570 kubelet[2670]: E0517 00:17:03.019513 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-fqppr" podUID="a9b260fc-ff83-4de9-ac43-723c22c032c2" May 17 00:17:07.323738 update_engine[1458]: I20250517 00:17:07.323511 1458 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 17 00:17:07.324302 update_engine[1458]: I20250517 00:17:07.323911 1458 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 17 00:17:07.324302 update_engine[1458]: I20250517 00:17:07.324229 1458 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. May 17 00:17:07.325126 update_engine[1458]: E20250517 00:17:07.325034 1458 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 17 00:17:07.325126 update_engine[1458]: I20250517 00:17:07.325125 1458 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded May 17 00:17:07.325317 update_engine[1458]: I20250517 00:17:07.325137 1458 omaha_request_action.cc:617] Omaha request response: May 17 00:17:07.325317 update_engine[1458]: E20250517 00:17:07.325229 1458 omaha_request_action.cc:636] Omaha request network transfer failed. May 17 00:17:07.325317 update_engine[1458]: I20250517 00:17:07.325249 1458 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. May 17 00:17:07.325317 update_engine[1458]: I20250517 00:17:07.325256 1458 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction May 17 00:17:07.325317 update_engine[1458]: I20250517 00:17:07.325263 1458 update_attempter.cc:306] Processing Done. May 17 00:17:07.325317 update_engine[1458]: E20250517 00:17:07.325278 1458 update_attempter.cc:619] Update failed. May 17 00:17:07.325317 update_engine[1458]: I20250517 00:17:07.325285 1458 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse May 17 00:17:07.325317 update_engine[1458]: I20250517 00:17:07.325291 1458 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) May 17 00:17:07.325317 update_engine[1458]: I20250517 00:17:07.325299 1458 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. May 17 00:17:07.325686 update_engine[1458]: I20250517 00:17:07.325376 1458 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction May 17 00:17:07.325686 update_engine[1458]: I20250517 00:17:07.325417 1458 omaha_request_action.cc:271] Posting an Omaha request to disabled May 17 00:17:07.325686 update_engine[1458]: I20250517 00:17:07.325438 1458 omaha_request_action.cc:272] Request: May 17 00:17:07.325686 update_engine[1458]: May 17 00:17:07.325686 update_engine[1458]: May 17 00:17:07.325686 update_engine[1458]: May 17 00:17:07.325686 update_engine[1458]: May 17 00:17:07.325686 update_engine[1458]: May 17 00:17:07.325686 update_engine[1458]: May 17 00:17:07.325686 update_engine[1458]: I20250517 00:17:07.325446 1458 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 17 00:17:07.325686 update_engine[1458]: I20250517 00:17:07.325618 1458 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 17 00:17:07.326115 update_engine[1458]: I20250517 00:17:07.325845 1458 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
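[A small sketch of the kubelet's per-image pull backoff, using the upstream default parameters (10s initial delay, doubling, 300s cap; assumed defaults, not values read from this node's kubelet configuration). It illustrates why the PullImage attempts for goldmane and whisker in this log grow further apart and then settle at roughly five-minute intervals of "Back-off pulling image" errors.]

from itertools import islice

def pull_backoff(initial=10.0, factor=2.0, cap=300.0):
    """Yield successive retry delays: double each time, capped at `cap`."""
    delay = initial
    while True:
        yield delay
        delay = min(delay * factor, cap)

print(list(islice(pull_backoff(), 8)))
# -> [10.0, 20.0, 40.0, 80.0, 160.0, 300.0, 300.0, 300.0]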
May 17 00:17:07.326458 locksmithd[1498]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 May 17 00:17:07.327052 update_engine[1458]: E20250517 00:17:07.326561 1458 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 17 00:17:07.327052 update_engine[1458]: I20250517 00:17:07.326622 1458 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded May 17 00:17:07.327052 update_engine[1458]: I20250517 00:17:07.326632 1458 omaha_request_action.cc:617] Omaha request response: May 17 00:17:07.327052 update_engine[1458]: I20250517 00:17:07.326639 1458 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction May 17 00:17:07.327052 update_engine[1458]: I20250517 00:17:07.326647 1458 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction May 17 00:17:07.327052 update_engine[1458]: I20250517 00:17:07.326655 1458 update_attempter.cc:306] Processing Done. May 17 00:17:07.327052 update_engine[1458]: I20250517 00:17:07.326663 1458 update_attempter.cc:310] Error event sent. May 17 00:17:07.327052 update_engine[1458]: I20250517 00:17:07.326675 1458 update_check_scheduler.cc:74] Next update check in 45m44s May 17 00:17:07.327485 locksmithd[1498]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 May 17 00:17:08.757150 kubelet[2670]: E0517 00:17:08.756908 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-c64877bf5-xgbzp" podUID="ea4a179c-2064-482e-bd61-eeafaaf1f680" May 17 00:17:13.756382 kubelet[2670]: E0517 00:17:13.756274 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-fqppr" podUID="a9b260fc-ff83-4de9-ac43-723c22c032c2" May 17 00:17:20.757498 kubelet[2670]: E0517 00:17:20.757087 2670 
pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-c64877bf5-xgbzp" podUID="ea4a179c-2064-482e-bd61-eeafaaf1f680" May 17 00:17:27.757482 kubelet[2670]: E0517 00:17:27.757013 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-fqppr" podUID="a9b260fc-ff83-4de9-ac43-723c22c032c2" May 17 00:17:33.759378 kubelet[2670]: E0517 00:17:33.759169 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-c64877bf5-xgbzp" podUID="ea4a179c-2064-482e-bd61-eeafaaf1f680" May 17 00:17:38.755473 kubelet[2670]: E0517 00:17:38.755395 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-fqppr" podUID="a9b260fc-ff83-4de9-ac43-723c22c032c2" May 17 00:17:47.769238 kubelet[2670]: E0517 00:17:47.769178 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-c64877bf5-xgbzp" podUID="ea4a179c-2064-482e-bd61-eeafaaf1f680" May 17 00:17:50.755194 kubelet[2670]: E0517 00:17:50.755049 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-fqppr" podUID="a9b260fc-ff83-4de9-ac43-723c22c032c2" May 17 00:18:01.757670 kubelet[2670]: E0517 00:18:01.757331 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-c64877bf5-xgbzp" podUID="ea4a179c-2064-482e-bd61-eeafaaf1f680" May 17 00:18:04.755468 kubelet[2670]: E0517 00:18:04.755252 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-fqppr" podUID="a9b260fc-ff83-4de9-ac43-723c22c032c2" May 17 00:18:16.756324 containerd[1479]: time="2025-05-17T00:18:16.756262459Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 17 00:18:17.012024 containerd[1479]: time="2025-05-17T00:18:17.011812547Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:18:17.014995 containerd[1479]: time="2025-05-17T00:18:17.014898513Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:18:17.015283 containerd[1479]: time="2025-05-17T00:18:17.014940473Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 17 00:18:17.015331 kubelet[2670]: E0517 00:18:17.015269 2670 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:18:17.015804 kubelet[2670]: E0517 00:18:17.015344 2670 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:18:17.015804 kubelet[2670]: E0517 00:18:17.015561 2670 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:ecfce7dcd79642e9a67dfb965e76b411,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nf456,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-c64877bf5-xgbzp_calico-system(ea4a179c-2064-482e-bd61-eeafaaf1f680): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:18:17.018220 containerd[1479]: time="2025-05-17T00:18:17.018179119Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 17 00:18:17.264390 containerd[1479]: time="2025-05-17T00:18:17.264110869Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:18:17.266453 containerd[1479]: time="2025-05-17T00:18:17.266289633Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:18:17.266742 containerd[1479]: time="2025-05-17T00:18:17.266359273Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 17 00:18:17.267302 kubelet[2670]: E0517 00:18:17.266835 2670 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected 
status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:18:17.267302 kubelet[2670]: E0517 00:18:17.266927 2670 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:18:17.267302 kubelet[2670]: E0517 00:18:17.267135 2670 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nf456,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-c64877bf5-xgbzp_calico-system(ea4a179c-2064-482e-bd61-eeafaaf1f680): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:18:17.269471 kubelet[2670]: E0517 00:18:17.268522 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to 
fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-c64877bf5-xgbzp" podUID="ea4a179c-2064-482e-bd61-eeafaaf1f680" May 17 00:18:18.755012 kubelet[2670]: E0517 00:18:18.754917 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-fqppr" podUID="a9b260fc-ff83-4de9-ac43-723c22c032c2" May 17 00:18:30.759110 kubelet[2670]: E0517 00:18:30.758807 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-c64877bf5-xgbzp" podUID="ea4a179c-2064-482e-bd61-eeafaaf1f680" May 17 00:18:31.756601 containerd[1479]: time="2025-05-17T00:18:31.755616100Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 17 00:18:31.993799 containerd[1479]: time="2025-05-17T00:18:31.993736034Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:18:31.995354 containerd[1479]: time="2025-05-17T00:18:31.995254477Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to 
fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:18:31.995499 containerd[1479]: time="2025-05-17T00:18:31.995458757Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 17 00:18:31.995772 kubelet[2670]: E0517 00:18:31.995704 2670 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:18:31.996121 kubelet[2670]: E0517 00:18:31.995777 2670 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:18:31.996121 kubelet[2670]: E0517 00:18:31.995976 2670 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vcg88,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-78d55f7ddc-fqppr_calico-system(a9b260fc-ff83-4de9-ac43-723c22c032c2): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:18:31.998128 kubelet[2670]: E0517 00:18:31.998071 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-fqppr" podUID="a9b260fc-ff83-4de9-ac43-723c22c032c2" May 17 00:18:41.758952 kubelet[2670]: E0517 00:18:41.758332 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-c64877bf5-xgbzp" podUID="ea4a179c-2064-482e-bd61-eeafaaf1f680" May 17 00:18:46.756486 kubelet[2670]: E0517 00:18:46.756004 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-fqppr" podUID="a9b260fc-ff83-4de9-ac43-723c22c032c2" May 17 00:18:55.219571 systemd[1]: run-containerd-runc-k8s.io-ac27d7dab45509187bb6e9bd04117d39cfb8bb18c53082b1281bab95c3fc5614-runc.r8pbrm.mount: Deactivated successfully. May 17 00:18:55.761211 kubelet[2670]: E0517 00:18:55.760928 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-c64877bf5-xgbzp" podUID="ea4a179c-2064-482e-bd61-eeafaaf1f680" May 17 00:19:00.755402 kubelet[2670]: E0517 00:19:00.755308 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-fqppr" podUID="a9b260fc-ff83-4de9-ac43-723c22c032c2" May 17 00:19:07.758067 kubelet[2670]: E0517 00:19:07.757918 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-c64877bf5-xgbzp" podUID="ea4a179c-2064-482e-bd61-eeafaaf1f680" May 17 00:19:11.755542 kubelet[2670]: E0517 00:19:11.754707 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-fqppr" podUID="a9b260fc-ff83-4de9-ac43-723c22c032c2" May 17 00:19:18.763468 kubelet[2670]: E0517 00:19:18.761444 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-c64877bf5-xgbzp" podUID="ea4a179c-2064-482e-bd61-eeafaaf1f680" May 17 00:19:23.756508 kubelet[2670]: E0517 00:19:23.755101 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-fqppr" podUID="a9b260fc-ff83-4de9-ac43-723c22c032c2" May 17 00:19:27.113270 systemd[1]: Started sshd@7-142.132.181.146:22-139.178.68.195:50796.service - OpenSSH per-connection server daemon (139.178.68.195:50796). 
May 17 00:19:28.123576 sshd[6145]: Accepted publickey for core from 139.178.68.195 port 50796 ssh2: RSA SHA256:3DH0lUPdHwyQJn3I0ENA7R+xFRfXGfTtJFJ5l1PYReI May 17 00:19:28.126512 sshd[6145]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:19:28.134523 systemd-logind[1457]: New session 8 of user core. May 17 00:19:28.140056 systemd[1]: Started session-8.scope - Session 8 of User core. May 17 00:19:28.954189 sshd[6145]: pam_unix(sshd:session): session closed for user core May 17 00:19:28.966853 systemd[1]: sshd@7-142.132.181.146:22-139.178.68.195:50796.service: Deactivated successfully. May 17 00:19:28.967136 systemd-logind[1457]: Session 8 logged out. Waiting for processes to exit. May 17 00:19:28.971781 systemd[1]: session-8.scope: Deactivated successfully. May 17 00:19:28.974702 systemd-logind[1457]: Removed session 8. May 17 00:19:31.758650 kubelet[2670]: E0517 00:19:31.758576 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-c64877bf5-xgbzp" podUID="ea4a179c-2064-482e-bd61-eeafaaf1f680" May 17 00:19:34.126734 systemd[1]: Started sshd@8-142.132.181.146:22-139.178.68.195:47310.service - OpenSSH per-connection server daemon (139.178.68.195:47310). May 17 00:19:35.125994 sshd[6178]: Accepted publickey for core from 139.178.68.195 port 47310 ssh2: RSA SHA256:3DH0lUPdHwyQJn3I0ENA7R+xFRfXGfTtJFJ5l1PYReI May 17 00:19:35.128613 sshd[6178]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:19:35.135333 systemd-logind[1457]: New session 9 of user core. May 17 00:19:35.142707 systemd[1]: Started session-9.scope - Session 9 of User core. May 17 00:19:35.891915 sshd[6178]: pam_unix(sshd:session): session closed for user core May 17 00:19:35.900619 systemd-logind[1457]: Session 9 logged out. Waiting for processes to exit. May 17 00:19:35.902075 systemd[1]: sshd@8-142.132.181.146:22-139.178.68.195:47310.service: Deactivated successfully. May 17 00:19:35.905691 systemd[1]: session-9.scope: Deactivated successfully. May 17 00:19:35.907367 systemd-logind[1457]: Removed session 9. May 17 00:19:36.060283 systemd[1]: Started sshd@9-142.132.181.146:22-139.178.68.195:47320.service - OpenSSH per-connection server daemon (139.178.68.195:47320). 
May 17 00:19:37.044875 sshd[6195]: Accepted publickey for core from 139.178.68.195 port 47320 ssh2: RSA SHA256:3DH0lUPdHwyQJn3I0ENA7R+xFRfXGfTtJFJ5l1PYReI May 17 00:19:37.047880 sshd[6195]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:19:37.054255 systemd-logind[1457]: New session 10 of user core. May 17 00:19:37.059985 systemd[1]: Started session-10.scope - Session 10 of User core. May 17 00:19:37.759160 kubelet[2670]: E0517 00:19:37.759100 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-fqppr" podUID="a9b260fc-ff83-4de9-ac43-723c22c032c2" May 17 00:19:37.871583 sshd[6195]: pam_unix(sshd:session): session closed for user core May 17 00:19:37.877131 systemd[1]: sshd@9-142.132.181.146:22-139.178.68.195:47320.service: Deactivated successfully. May 17 00:19:37.881076 systemd[1]: session-10.scope: Deactivated successfully. May 17 00:19:37.882604 systemd-logind[1457]: Session 10 logged out. Waiting for processes to exit. May 17 00:19:37.883661 systemd-logind[1457]: Removed session 10. May 17 00:19:38.047852 systemd[1]: Started sshd@10-142.132.181.146:22-139.178.68.195:47334.service - OpenSSH per-connection server daemon (139.178.68.195:47334). May 17 00:19:39.036935 sshd[6206]: Accepted publickey for core from 139.178.68.195 port 47334 ssh2: RSA SHA256:3DH0lUPdHwyQJn3I0ENA7R+xFRfXGfTtJFJ5l1PYReI May 17 00:19:39.039163 sshd[6206]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:19:39.044686 systemd-logind[1457]: New session 11 of user core. May 17 00:19:39.048649 systemd[1]: Started session-11.scope - Session 11 of User core. May 17 00:19:39.834339 sshd[6206]: pam_unix(sshd:session): session closed for user core May 17 00:19:39.840299 systemd[1]: sshd@10-142.132.181.146:22-139.178.68.195:47334.service: Deactivated successfully. May 17 00:19:39.844251 systemd[1]: session-11.scope: Deactivated successfully. May 17 00:19:39.846115 systemd-logind[1457]: Session 11 logged out. Waiting for processes to exit. May 17 00:19:39.847514 systemd-logind[1457]: Removed session 11. 
May 17 00:19:42.756798 kubelet[2670]: E0517 00:19:42.756529 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-c64877bf5-xgbzp" podUID="ea4a179c-2064-482e-bd61-eeafaaf1f680" May 17 00:19:45.020117 systemd[1]: Started sshd@11-142.132.181.146:22-139.178.68.195:33044.service - OpenSSH per-connection server daemon (139.178.68.195:33044). May 17 00:19:46.016082 sshd[6225]: Accepted publickey for core from 139.178.68.195 port 33044 ssh2: RSA SHA256:3DH0lUPdHwyQJn3I0ENA7R+xFRfXGfTtJFJ5l1PYReI May 17 00:19:46.019495 sshd[6225]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:19:46.025593 systemd-logind[1457]: New session 12 of user core. May 17 00:19:46.030618 systemd[1]: Started session-12.scope - Session 12 of User core. May 17 00:19:46.769774 sshd[6225]: pam_unix(sshd:session): session closed for user core May 17 00:19:46.774480 systemd[1]: sshd@11-142.132.181.146:22-139.178.68.195:33044.service: Deactivated successfully. May 17 00:19:46.777418 systemd[1]: session-12.scope: Deactivated successfully. May 17 00:19:46.780598 systemd-logind[1457]: Session 12 logged out. Waiting for processes to exit. May 17 00:19:46.783894 systemd-logind[1457]: Removed session 12. May 17 00:19:48.756153 kubelet[2670]: E0517 00:19:48.755701 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-fqppr" podUID="a9b260fc-ff83-4de9-ac43-723c22c032c2" May 17 00:19:51.952266 systemd[1]: Started sshd@12-142.132.181.146:22-139.178.68.195:33054.service - OpenSSH per-connection server daemon (139.178.68.195:33054). May 17 00:19:52.929497 sshd[6252]: Accepted publickey for core from 139.178.68.195 port 33054 ssh2: RSA SHA256:3DH0lUPdHwyQJn3I0ENA7R+xFRfXGfTtJFJ5l1PYReI May 17 00:19:52.932053 sshd[6252]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:19:52.940261 systemd-logind[1457]: New session 13 of user core. 
May 17 00:19:52.950735 systemd[1]: Started session-13.scope - Session 13 of User core. May 17 00:19:53.703564 sshd[6252]: pam_unix(sshd:session): session closed for user core May 17 00:19:53.708156 systemd-logind[1457]: Session 13 logged out. Waiting for processes to exit. May 17 00:19:53.708414 systemd[1]: sshd@12-142.132.181.146:22-139.178.68.195:33054.service: Deactivated successfully. May 17 00:19:53.711174 systemd[1]: session-13.scope: Deactivated successfully. May 17 00:19:53.715609 systemd-logind[1457]: Removed session 13. May 17 00:19:57.757282 kubelet[2670]: E0517 00:19:57.757168 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-c64877bf5-xgbzp" podUID="ea4a179c-2064-482e-bd61-eeafaaf1f680" May 17 00:19:58.879334 systemd[1]: Started sshd@13-142.132.181.146:22-139.178.68.195:59830.service - OpenSSH per-connection server daemon (139.178.68.195:59830). May 17 00:19:59.891888 sshd[6306]: Accepted publickey for core from 139.178.68.195 port 59830 ssh2: RSA SHA256:3DH0lUPdHwyQJn3I0ENA7R+xFRfXGfTtJFJ5l1PYReI May 17 00:19:59.896053 sshd[6306]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:19:59.903554 systemd-logind[1457]: New session 14 of user core. May 17 00:19:59.909647 systemd[1]: Started session-14.scope - Session 14 of User core. May 17 00:20:00.680028 sshd[6306]: pam_unix(sshd:session): session closed for user core May 17 00:20:00.685102 systemd[1]: sshd@13-142.132.181.146:22-139.178.68.195:59830.service: Deactivated successfully. May 17 00:20:00.690029 systemd[1]: session-14.scope: Deactivated successfully. May 17 00:20:00.693187 systemd-logind[1457]: Session 14 logged out. Waiting for processes to exit. May 17 00:20:00.695851 systemd-logind[1457]: Removed session 14. May 17 00:20:00.849559 systemd[1]: Started sshd@14-142.132.181.146:22-139.178.68.195:59834.service - OpenSSH per-connection server daemon (139.178.68.195:59834). May 17 00:20:01.826489 sshd[6345]: Accepted publickey for core from 139.178.68.195 port 59834 ssh2: RSA SHA256:3DH0lUPdHwyQJn3I0ENA7R+xFRfXGfTtJFJ5l1PYReI May 17 00:20:01.829249 sshd[6345]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:20:01.835668 systemd-logind[1457]: New session 15 of user core. May 17 00:20:01.842870 systemd[1]: Started session-15.scope - Session 15 of User core. 
May 17 00:20:02.763728 sshd[6345]: pam_unix(sshd:session): session closed for user core May 17 00:20:02.770117 systemd[1]: sshd@14-142.132.181.146:22-139.178.68.195:59834.service: Deactivated successfully. May 17 00:20:02.772558 systemd[1]: session-15.scope: Deactivated successfully. May 17 00:20:02.777206 systemd-logind[1457]: Session 15 logged out. Waiting for processes to exit. May 17 00:20:02.779018 systemd-logind[1457]: Removed session 15. May 17 00:20:02.942867 systemd[1]: Started sshd@15-142.132.181.146:22-139.178.68.195:59842.service - OpenSSH per-connection server daemon (139.178.68.195:59842). May 17 00:20:03.757828 kubelet[2670]: E0517 00:20:03.757738 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-fqppr" podUID="a9b260fc-ff83-4de9-ac43-723c22c032c2" May 17 00:20:03.926765 sshd[6356]: Accepted publickey for core from 139.178.68.195 port 59842 ssh2: RSA SHA256:3DH0lUPdHwyQJn3I0ENA7R+xFRfXGfTtJFJ5l1PYReI May 17 00:20:03.928644 sshd[6356]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:20:03.935486 systemd-logind[1457]: New session 16 of user core. May 17 00:20:03.940716 systemd[1]: Started session-16.scope - Session 16 of User core. May 17 00:20:05.680945 sshd[6356]: pam_unix(sshd:session): session closed for user core May 17 00:20:05.687224 systemd[1]: sshd@15-142.132.181.146:22-139.178.68.195:59842.service: Deactivated successfully. May 17 00:20:05.692304 systemd[1]: session-16.scope: Deactivated successfully. May 17 00:20:05.695639 systemd-logind[1457]: Session 16 logged out. Waiting for processes to exit. May 17 00:20:05.697319 systemd-logind[1457]: Removed session 16. May 17 00:20:05.855818 systemd[1]: Started sshd@16-142.132.181.146:22-139.178.68.195:56288.service - OpenSSH per-connection server daemon (139.178.68.195:56288). May 17 00:20:06.856494 sshd[6376]: Accepted publickey for core from 139.178.68.195 port 56288 ssh2: RSA SHA256:3DH0lUPdHwyQJn3I0ENA7R+xFRfXGfTtJFJ5l1PYReI May 17 00:20:06.858296 sshd[6376]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:20:06.863731 systemd-logind[1457]: New session 17 of user core. May 17 00:20:06.872662 systemd[1]: Started session-17.scope - Session 17 of User core. May 17 00:20:07.753244 sshd[6376]: pam_unix(sshd:session): session closed for user core May 17 00:20:07.762345 systemd-logind[1457]: Session 17 logged out. Waiting for processes to exit. May 17 00:20:07.763040 systemd[1]: sshd@16-142.132.181.146:22-139.178.68.195:56288.service: Deactivated successfully. May 17 00:20:07.766545 systemd[1]: session-17.scope: Deactivated successfully. May 17 00:20:07.767969 systemd-logind[1457]: Removed session 17. May 17 00:20:07.933945 systemd[1]: Started sshd@17-142.132.181.146:22-139.178.68.195:56300.service - OpenSSH per-connection server daemon (139.178.68.195:56300). 
May 17 00:20:08.759019 kubelet[2670]: E0517 00:20:08.758938 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-c64877bf5-xgbzp" podUID="ea4a179c-2064-482e-bd61-eeafaaf1f680" May 17 00:20:08.931098 sshd[6387]: Accepted publickey for core from 139.178.68.195 port 56300 ssh2: RSA SHA256:3DH0lUPdHwyQJn3I0ENA7R+xFRfXGfTtJFJ5l1PYReI May 17 00:20:08.933824 sshd[6387]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:20:08.940921 systemd-logind[1457]: New session 18 of user core. May 17 00:20:08.945708 systemd[1]: Started session-18.scope - Session 18 of User core. May 17 00:20:09.714737 sshd[6387]: pam_unix(sshd:session): session closed for user core May 17 00:20:09.719760 systemd[1]: sshd@17-142.132.181.146:22-139.178.68.195:56300.service: Deactivated successfully. May 17 00:20:09.723120 systemd[1]: session-18.scope: Deactivated successfully. May 17 00:20:09.725707 systemd-logind[1457]: Session 18 logged out. Waiting for processes to exit. May 17 00:20:09.728366 systemd-logind[1457]: Removed session 18. May 17 00:20:14.891065 systemd[1]: Started sshd@18-142.132.181.146:22-139.178.68.195:51940.service - OpenSSH per-connection server daemon (139.178.68.195:51940). May 17 00:20:15.887200 sshd[6403]: Accepted publickey for core from 139.178.68.195 port 51940 ssh2: RSA SHA256:3DH0lUPdHwyQJn3I0ENA7R+xFRfXGfTtJFJ5l1PYReI May 17 00:20:15.890614 sshd[6403]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:20:15.899504 systemd-logind[1457]: New session 19 of user core. May 17 00:20:15.905699 systemd[1]: Started session-19.scope - Session 19 of User core. May 17 00:20:16.656886 sshd[6403]: pam_unix(sshd:session): session closed for user core May 17 00:20:16.663459 systemd-logind[1457]: Session 19 logged out. Waiting for processes to exit. May 17 00:20:16.664074 systemd[1]: sshd@18-142.132.181.146:22-139.178.68.195:51940.service: Deactivated successfully. May 17 00:20:16.667065 systemd[1]: session-19.scope: Deactivated successfully. May 17 00:20:16.668626 systemd-logind[1457]: Removed session 19. 
May 17 00:20:17.756313 kubelet[2670]: E0517 00:20:17.755874 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-fqppr" podUID="a9b260fc-ff83-4de9-ac43-723c22c032c2" May 17 00:20:21.833830 systemd[1]: Started sshd@19-142.132.181.146:22-139.178.68.195:51952.service - OpenSSH per-connection server daemon (139.178.68.195:51952). May 17 00:20:22.760331 kubelet[2670]: E0517 00:20:22.760279 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-c64877bf5-xgbzp" podUID="ea4a179c-2064-482e-bd61-eeafaaf1f680" May 17 00:20:22.834486 sshd[6416]: Accepted publickey for core from 139.178.68.195 port 51952 ssh2: RSA SHA256:3DH0lUPdHwyQJn3I0ENA7R+xFRfXGfTtJFJ5l1PYReI May 17 00:20:22.836185 sshd[6416]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:20:22.841860 systemd-logind[1457]: New session 20 of user core. May 17 00:20:22.851765 systemd[1]: Started session-20.scope - Session 20 of User core. May 17 00:20:23.613117 sshd[6416]: pam_unix(sshd:session): session closed for user core May 17 00:20:23.627808 systemd[1]: sshd@19-142.132.181.146:22-139.178.68.195:51952.service: Deactivated successfully. May 17 00:20:23.638911 systemd[1]: session-20.scope: Deactivated successfully. May 17 00:20:23.643368 systemd-logind[1457]: Session 20 logged out. Waiting for processes to exit. May 17 00:20:23.649194 systemd-logind[1457]: Removed session 20. 
May 17 00:20:28.754887 kubelet[2670]: E0517 00:20:28.754523 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-fqppr" podUID="a9b260fc-ff83-4de9-ac43-723c22c032c2" May 17 00:20:34.756701 kubelet[2670]: E0517 00:20:34.756604 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-c64877bf5-xgbzp" podUID="ea4a179c-2064-482e-bd61-eeafaaf1f680" May 17 00:20:38.843408 systemd[1]: cri-containerd-05dee176135b8342215bbefed30800dd91f79b647119a988adb5a7af21ad0538.scope: Deactivated successfully. May 17 00:20:38.843782 systemd[1]: cri-containerd-05dee176135b8342215bbefed30800dd91f79b647119a988adb5a7af21ad0538.scope: Consumed 6.627s CPU time, 17.4M memory peak, 0B memory swap peak. May 17 00:20:38.876821 containerd[1479]: time="2025-05-17T00:20:38.876581223Z" level=info msg="shim disconnected" id=05dee176135b8342215bbefed30800dd91f79b647119a988adb5a7af21ad0538 namespace=k8s.io May 17 00:20:38.876821 containerd[1479]: time="2025-05-17T00:20:38.876653743Z" level=warning msg="cleaning up after shim disconnected" id=05dee176135b8342215bbefed30800dd91f79b647119a988adb5a7af21ad0538 namespace=k8s.io May 17 00:20:38.876821 containerd[1479]: time="2025-05-17T00:20:38.876662023Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 17 00:20:38.882413 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-05dee176135b8342215bbefed30800dd91f79b647119a988adb5a7af21ad0538-rootfs.mount: Deactivated successfully. 
May 17 00:20:39.120971 kubelet[2670]: I0517 00:20:39.120541 2670 scope.go:117] "RemoveContainer" containerID="05dee176135b8342215bbefed30800dd91f79b647119a988adb5a7af21ad0538" May 17 00:20:39.125325 containerd[1479]: time="2025-05-17T00:20:39.125253295Z" level=info msg="CreateContainer within sandbox \"b2d0bf8ee677e73787992519f96b4c12c16204f6eff86ae0276e72fdac459070\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" May 17 00:20:39.141316 containerd[1479]: time="2025-05-17T00:20:39.141248406Z" level=info msg="CreateContainer within sandbox \"b2d0bf8ee677e73787992519f96b4c12c16204f6eff86ae0276e72fdac459070\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"9e34b31333618e6b0f2f36311cd9cec171870d186495685f81d7b0a1b8bb7637\"" May 17 00:20:39.142730 containerd[1479]: time="2025-05-17T00:20:39.142687608Z" level=info msg="StartContainer for \"9e34b31333618e6b0f2f36311cd9cec171870d186495685f81d7b0a1b8bb7637\"" May 17 00:20:39.180669 systemd[1]: Started cri-containerd-9e34b31333618e6b0f2f36311cd9cec171870d186495685f81d7b0a1b8bb7637.scope - libcontainer container 9e34b31333618e6b0f2f36311cd9cec171870d186495685f81d7b0a1b8bb7637. May 17 00:20:39.218512 containerd[1479]: time="2025-05-17T00:20:39.217808991Z" level=info msg="StartContainer for \"9e34b31333618e6b0f2f36311cd9cec171870d186495685f81d7b0a1b8bb7637\" returns successfully" May 17 00:20:39.286731 kubelet[2670]: E0517 00:20:39.286670 2670 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:36958->10.0.0.2:2379: read: connection timed out" May 17 00:20:39.779081 systemd[1]: cri-containerd-d2e59e4b3942cd680320f14c3d1570bfb67d5f1a79750f66d3b54339daa25f4c.scope: Deactivated successfully. May 17 00:20:39.780772 systemd[1]: cri-containerd-d2e59e4b3942cd680320f14c3d1570bfb67d5f1a79750f66d3b54339daa25f4c.scope: Consumed 23.758s CPU time. May 17 00:20:39.812057 containerd[1479]: time="2025-05-17T00:20:39.811979840Z" level=info msg="shim disconnected" id=d2e59e4b3942cd680320f14c3d1570bfb67d5f1a79750f66d3b54339daa25f4c namespace=k8s.io May 17 00:20:39.812057 containerd[1479]: time="2025-05-17T00:20:39.812054560Z" level=warning msg="cleaning up after shim disconnected" id=d2e59e4b3942cd680320f14c3d1570bfb67d5f1a79750f66d3b54339daa25f4c namespace=k8s.io May 17 00:20:39.812057 containerd[1479]: time="2025-05-17T00:20:39.812065880Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 17 00:20:39.879089 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d2e59e4b3942cd680320f14c3d1570bfb67d5f1a79750f66d3b54339daa25f4c-rootfs.mount: Deactivated successfully. 
May 17 00:20:40.132319 kubelet[2670]: I0517 00:20:40.132204 2670 scope.go:117] "RemoveContainer" containerID="d2e59e4b3942cd680320f14c3d1570bfb67d5f1a79750f66d3b54339daa25f4c" May 17 00:20:40.145179 containerd[1479]: time="2025-05-17T00:20:40.145129033Z" level=info msg="CreateContainer within sandbox \"e9a0773c4a312b9fa6e12b6887108d6c3da87e445951eb571dba78e90402b6ad\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" May 17 00:20:40.174817 containerd[1479]: time="2025-05-17T00:20:40.174763009Z" level=info msg="CreateContainer within sandbox \"e9a0773c4a312b9fa6e12b6887108d6c3da87e445951eb571dba78e90402b6ad\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"7138066e7ac2d3aa2d533391f0067e233bdd93c9bc24647227f17651a5e2c309\"" May 17 00:20:40.177760 containerd[1479]: time="2025-05-17T00:20:40.177720575Z" level=info msg="StartContainer for \"7138066e7ac2d3aa2d533391f0067e233bdd93c9bc24647227f17651a5e2c309\"" May 17 00:20:40.234678 systemd[1]: Started cri-containerd-7138066e7ac2d3aa2d533391f0067e233bdd93c9bc24647227f17651a5e2c309.scope - libcontainer container 7138066e7ac2d3aa2d533391f0067e233bdd93c9bc24647227f17651a5e2c309. May 17 00:20:40.274345 containerd[1479]: time="2025-05-17T00:20:40.274295839Z" level=info msg="StartContainer for \"7138066e7ac2d3aa2d533391f0067e233bdd93c9bc24647227f17651a5e2c309\" returns successfully" May 17 00:20:42.754982 kubelet[2670]: E0517 00:20:42.754895 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-fqppr" podUID="a9b260fc-ff83-4de9-ac43-723c22c032c2" May 17 00:20:43.128754 kubelet[2670]: E0517 00:20:43.119966 2670 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:36774->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4081-3-3-n-16326e39d6.1840288af2493dd2 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4081-3-3-n-16326e39d6,UID:63ca978206f012db5d01e3627ba7053b,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4081-3-3-n-16326e39d6,},FirstTimestamp:2025-05-17 00:20:32.655236562 +0000 UTC m=+357.031463652,LastTimestamp:2025-05-17 00:20:32.655236562 +0000 UTC m=+357.031463652,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-3-n-16326e39d6,}" May 17 00:20:44.725739 systemd[1]: cri-containerd-eb001af5b5a773f98b6be4562590f0e90a8f3f929cab5b016adaac4d2b39a607.scope: Deactivated successfully. May 17 00:20:44.726363 systemd[1]: cri-containerd-eb001af5b5a773f98b6be4562590f0e90a8f3f929cab5b016adaac4d2b39a607.scope: Consumed 5.868s CPU time, 15.6M memory peak, 0B memory swap peak. 
May 17 00:20:44.753059 containerd[1479]: time="2025-05-17T00:20:44.752843828Z" level=info msg="shim disconnected" id=eb001af5b5a773f98b6be4562590f0e90a8f3f929cab5b016adaac4d2b39a607 namespace=k8s.io May 17 00:20:44.753059 containerd[1479]: time="2025-05-17T00:20:44.752898348Z" level=warning msg="cleaning up after shim disconnected" id=eb001af5b5a773f98b6be4562590f0e90a8f3f929cab5b016adaac4d2b39a607 namespace=k8s.io May 17 00:20:44.753059 containerd[1479]: time="2025-05-17T00:20:44.752906948Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 17 00:20:44.756289 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-eb001af5b5a773f98b6be4562590f0e90a8f3f929cab5b016adaac4d2b39a607-rootfs.mount: Deactivated successfully. May 17 00:20:45.151187 kubelet[2670]: I0517 00:20:45.150951 2670 scope.go:117] "RemoveContainer" containerID="eb001af5b5a773f98b6be4562590f0e90a8f3f929cab5b016adaac4d2b39a607" May 17 00:20:45.163399 containerd[1479]: time="2025-05-17T00:20:45.163346848Z" level=info msg="CreateContainer within sandbox \"af7595122e6ad4847c7c08cb14a38588feae02bce8ee8dc7071d2de2ea1c1ca0\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" May 17 00:20:45.190518 containerd[1479]: time="2025-05-17T00:20:45.190467219Z" level=info msg="CreateContainer within sandbox \"af7595122e6ad4847c7c08cb14a38588feae02bce8ee8dc7071d2de2ea1c1ca0\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"702c131152f0ef41e78061c9f75692d99b995ebbb1956ece37770a51d7f3faaa\"" May 17 00:20:45.191043 containerd[1479]: time="2025-05-17T00:20:45.191015220Z" level=info msg="StartContainer for \"702c131152f0ef41e78061c9f75692d99b995ebbb1956ece37770a51d7f3faaa\"" May 17 00:20:45.225623 systemd[1]: Started cri-containerd-702c131152f0ef41e78061c9f75692d99b995ebbb1956ece37770a51d7f3faaa.scope - libcontainer container 702c131152f0ef41e78061c9f75692d99b995ebbb1956ece37770a51d7f3faaa. May 17 00:20:45.265896 containerd[1479]: time="2025-05-17T00:20:45.265849642Z" level=info msg="StartContainer for \"702c131152f0ef41e78061c9f75692d99b995ebbb1956ece37770a51d7f3faaa\" returns successfully"