May 17 00:06:41.890122 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
May 17 00:06:41.890156 kernel: Linux version 6.6.90-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri May 16 22:39:35 -00 2025
May 17 00:06:41.890169 kernel: KASLR enabled
May 17 00:06:41.890175 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II
May 17 00:06:41.890181 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x138595418 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b43d18
May 17 00:06:41.890186 kernel: random: crng init done
May 17 00:06:41.890194 kernel: ACPI: Early table checksum verification disabled
May 17 00:06:41.890200 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS )
May 17 00:06:41.890207 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013)
May 17 00:06:41.890218 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
May 17 00:06:41.890225 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 17 00:06:41.890231 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001)
May 17 00:06:41.890237 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 17 00:06:41.890243 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 17 00:06:41.890251 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 17 00:06:41.890259 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 17 00:06:41.890266 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
May 17 00:06:41.890272 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 17 00:06:41.890279 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013)
May 17 00:06:41.890285 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600
May 17 00:06:41.890292 kernel: NUMA: Failed to initialise from firmware
May 17 00:06:41.890299 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff]
May 17 00:06:41.890305 kernel: NUMA: NODE_DATA [mem 0x13966e800-0x139673fff]
May 17 00:06:41.890312 kernel: Zone ranges:
May 17 00:06:41.890318 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
May 17 00:06:41.890326 kernel: DMA32 empty
May 17 00:06:41.890333 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff]
May 17 00:06:41.890339 kernel: Movable zone start for each node
May 17 00:06:41.890346 kernel: Early memory node ranges
May 17 00:06:41.890352 kernel: node 0: [mem 0x0000000040000000-0x000000013676ffff]
May 17 00:06:41.890359 kernel: node 0: [mem 0x0000000136770000-0x0000000136b3ffff]
May 17 00:06:41.890365 kernel: node 0: [mem 0x0000000136b40000-0x0000000139e1ffff]
May 17 00:06:41.890372 kernel: node 0: [mem 0x0000000139e20000-0x0000000139eaffff]
May 17 00:06:41.890378 kernel: node 0: [mem 0x0000000139eb0000-0x0000000139ebffff]
May 17 00:06:41.890385 kernel: node 0: [mem 0x0000000139ec0000-0x0000000139fdffff]
May 17 00:06:41.890391 kernel: node 0: [mem 0x0000000139fe0000-0x0000000139ffffff]
May 17 00:06:41.890398 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff]
May 17 00:06:41.890406 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
May 17 00:06:41.890413 kernel: psci: probing for conduit method from ACPI.
May 17 00:06:41.890420 kernel: psci: PSCIv1.1 detected in firmware.
May 17 00:06:41.890429 kernel: psci: Using standard PSCI v0.2 function IDs
May 17 00:06:41.890436 kernel: psci: Trusted OS migration not required
May 17 00:06:41.890443 kernel: psci: SMC Calling Convention v1.1
May 17 00:06:41.890452 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
May 17 00:06:41.890459 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
May 17 00:06:41.890470 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
May 17 00:06:41.890477 kernel: pcpu-alloc: [0] 0 [0] 1
May 17 00:06:41.890484 kernel: Detected PIPT I-cache on CPU0
May 17 00:06:41.890491 kernel: CPU features: detected: GIC system register CPU interface
May 17 00:06:41.890498 kernel: CPU features: detected: Hardware dirty bit management
May 17 00:06:41.890505 kernel: CPU features: detected: Spectre-v4
May 17 00:06:41.890512 kernel: CPU features: detected: Spectre-BHB
May 17 00:06:41.890519 kernel: CPU features: kernel page table isolation forced ON by KASLR
May 17 00:06:41.890528 kernel: CPU features: detected: Kernel page table isolation (KPTI)
May 17 00:06:41.890535 kernel: CPU features: detected: ARM erratum 1418040
May 17 00:06:41.890543 kernel: CPU features: detected: SSBS not fully self-synchronizing
May 17 00:06:41.890549 kernel: alternatives: applying boot alternatives
May 17 00:06:41.890558 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=3554ca41327a0c5ba7e4ac1b3147487d73f35805806dcb20264133a9c301eb5d
May 17 00:06:41.890565 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 17 00:06:41.890572 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 17 00:06:41.890712 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 17 00:06:41.890724 kernel: Fallback order for Node 0: 0
May 17 00:06:41.890731 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1008000
May 17 00:06:41.890738 kernel: Policy zone: Normal
May 17 00:06:41.890750 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 17 00:06:41.890757 kernel: software IO TLB: area num 2.
May 17 00:06:41.890786 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB)
May 17 00:06:41.890794 kernel: Memory: 3882868K/4096000K available (10240K kernel code, 2186K rwdata, 8104K rodata, 39424K init, 897K bss, 213132K reserved, 0K cma-reserved)
May 17 00:06:41.890801 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
May 17 00:06:41.890808 kernel: rcu: Preemptible hierarchical RCU implementation.
May 17 00:06:41.890815 kernel: rcu: RCU event tracing is enabled.
May 17 00:06:41.890822 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
May 17 00:06:41.890829 kernel: Trampoline variant of Tasks RCU enabled.
May 17 00:06:41.890836 kernel: Tracing variant of Tasks RCU enabled.
May 17 00:06:41.890843 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 17 00:06:41.890853 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
May 17 00:06:41.890860 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
May 17 00:06:41.890867 kernel: GICv3: 256 SPIs implemented
May 17 00:06:41.890873 kernel: GICv3: 0 Extended SPIs implemented
May 17 00:06:41.890880 kernel: Root IRQ handler: gic_handle_irq
May 17 00:06:41.890887 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
May 17 00:06:41.890894 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
May 17 00:06:41.890901 kernel: ITS [mem 0x08080000-0x0809ffff]
May 17 00:06:41.890908 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1)
May 17 00:06:41.890915 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1)
May 17 00:06:41.890921 kernel: GICv3: using LPI property table @0x00000001000e0000
May 17 00:06:41.890928 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000
May 17 00:06:41.890937 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 17 00:06:41.890944 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 00:06:41.890950 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
May 17 00:06:41.890958 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
May 17 00:06:41.890964 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
May 17 00:06:41.890971 kernel: Console: colour dummy device 80x25
May 17 00:06:41.890979 kernel: ACPI: Core revision 20230628
May 17 00:06:41.890986 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
May 17 00:06:41.890993 kernel: pid_max: default: 32768 minimum: 301
May 17 00:06:41.891001 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 17 00:06:41.891010 kernel: landlock: Up and running.
May 17 00:06:41.891017 kernel: SELinux: Initializing.
May 17 00:06:41.891024 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 17 00:06:41.891031 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 17 00:06:41.891038 kernel: ACPI PPTT: PPTT table found, but unable to locate core 1 (1)
May 17 00:06:41.891046 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 17 00:06:41.891053 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 17 00:06:41.891060 kernel: rcu: Hierarchical SRCU implementation.
May 17 00:06:41.891067 kernel: rcu: Max phase no-delay instances is 400.
May 17 00:06:41.891076 kernel: Platform MSI: ITS@0x8080000 domain created
May 17 00:06:41.891083 kernel: PCI/MSI: ITS@0x8080000 domain created
May 17 00:06:41.891090 kernel: Remapping and enabling EFI services.
May 17 00:06:41.891097 kernel: smp: Bringing up secondary CPUs ...
May 17 00:06:41.891104 kernel: Detected PIPT I-cache on CPU1
May 17 00:06:41.891111 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
May 17 00:06:41.891118 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000
May 17 00:06:41.891125 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 00:06:41.891132 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
May 17 00:06:41.891142 kernel: smp: Brought up 1 node, 2 CPUs
May 17 00:06:41.891149 kernel: SMP: Total of 2 processors activated.
May 17 00:06:41.891156 kernel: CPU features: detected: 32-bit EL0 Support
May 17 00:06:41.891169 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
May 17 00:06:41.891178 kernel: CPU features: detected: Common not Private translations
May 17 00:06:41.891185 kernel: CPU features: detected: CRC32 instructions
May 17 00:06:41.891193 kernel: CPU features: detected: Enhanced Virtualization Traps
May 17 00:06:41.891200 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
May 17 00:06:41.891207 kernel: CPU features: detected: LSE atomic instructions
May 17 00:06:41.891215 kernel: CPU features: detected: Privileged Access Never
May 17 00:06:41.891222 kernel: CPU features: detected: RAS Extension Support
May 17 00:06:41.891231 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
May 17 00:06:41.891239 kernel: CPU: All CPU(s) started at EL1
May 17 00:06:41.891246 kernel: alternatives: applying system-wide alternatives
May 17 00:06:41.891254 kernel: devtmpfs: initialized
May 17 00:06:41.891261 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 17 00:06:41.891268 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
May 17 00:06:41.891277 kernel: pinctrl core: initialized pinctrl subsystem
May 17 00:06:41.891285 kernel: SMBIOS 3.0.0 present.
May 17 00:06:41.891292 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017
May 17 00:06:41.891300 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 17 00:06:41.891307 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
May 17 00:06:41.891315 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
May 17 00:06:41.891323 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
May 17 00:06:41.891330 kernel: audit: initializing netlink subsys (disabled)
May 17 00:06:41.891338 kernel: audit: type=2000 audit(0.012:1): state=initialized audit_enabled=0 res=1
May 17 00:06:41.891347 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 17 00:06:41.891354 kernel: cpuidle: using governor menu
May 17 00:06:41.891362 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
May 17 00:06:41.891369 kernel: ASID allocator initialised with 32768 entries
May 17 00:06:41.891377 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 17 00:06:41.891384 kernel: Serial: AMBA PL011 UART driver
May 17 00:06:41.891391 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
May 17 00:06:41.891399 kernel: Modules: 0 pages in range for non-PLT usage
May 17 00:06:41.891406 kernel: Modules: 509024 pages in range for PLT usage
May 17 00:06:41.891416 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 17 00:06:41.891423 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
May 17 00:06:41.891431 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
May 17 00:06:41.891439 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
May 17 00:06:41.891446 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 17 00:06:41.891454 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
May 17 00:06:41.891462 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
May 17 00:06:41.891469 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
May 17 00:06:41.891476 kernel: ACPI: Added _OSI(Module Device)
May 17 00:06:41.891485 kernel: ACPI: Added _OSI(Processor Device)
May 17 00:06:41.891492 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 17 00:06:41.891500 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 17 00:06:41.891510 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 17 00:06:41.891518 kernel: ACPI: Interpreter enabled
May 17 00:06:41.891526 kernel: ACPI: Using GIC for interrupt routing
May 17 00:06:41.891533 kernel: ACPI: MCFG table detected, 1 entries
May 17 00:06:41.891541 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
May 17 00:06:41.891549 kernel: printk: console [ttyAMA0] enabled
May 17 00:06:41.891559 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 17 00:06:41.894143 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 17 00:06:41.894280 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
May 17 00:06:41.894349 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
May 17 00:06:41.894414 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
May 17 00:06:41.894479 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
May 17 00:06:41.894489 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
May 17 00:06:41.894503 kernel: PCI host bridge to bus 0000:00
May 17 00:06:41.894577 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
May 17 00:06:41.894663 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
May 17 00:06:41.894725 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
May 17 00:06:41.894803 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 17 00:06:41.894893 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
May 17 00:06:41.894972 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000
May 17 00:06:41.895044 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff]
May 17 00:06:41.895111 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref]
May 17 00:06:41.895189 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
May 17 00:06:41.895258 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff]
May 17 00:06:41.895333 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
May 17 00:06:41.895400 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff]
May 17 00:06:41.895476 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
May 17 00:06:41.895543 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff]
May 17 00:06:41.895675 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
May 17 00:06:41.895749 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff]
May 17 00:06:41.896452 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
May 17 00:06:41.896531 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff]
May 17 00:06:41.896654 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
May 17 00:06:41.896728 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff]
May 17 00:06:41.896820 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
May 17 00:06:41.896892 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff]
May 17 00:06:41.896981 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
May 17 00:06:41.897082 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff]
May 17 00:06:41.897166 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
May 17 00:06:41.897234 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff]
May 17 00:06:41.897309 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002
May 17 00:06:41.897375 kernel: pci 0000:00:04.0: reg 0x10: [io 0x0000-0x0007]
May 17 00:06:41.897469 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
May 17 00:06:41.897543 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff]
May 17 00:06:41.897631 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
May 17 00:06:41.897707 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
May 17 00:06:41.897892 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
May 17 00:06:41.897970 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit]
May 17 00:06:41.898049 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
May 17 00:06:41.898120 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff]
May 17 00:06:41.898189 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref]
May 17 00:06:41.898269 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
May 17 00:06:41.898338 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref]
May 17 00:06:41.898421 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
May 17 00:06:41.898491 kernel: pci 0000:05:00.0: reg 0x14: [mem 0x10800000-0x10800fff]
May 17 00:06:41.898559 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref]
May 17 00:06:41.898699 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
May 17 00:06:41.898788 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff]
May 17 00:06:41.898866 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref]
May 17 00:06:41.898948 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
May 17 00:06:41.899020 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff]
May 17 00:06:41.899096 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref]
May 17 00:06:41.899177 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
May 17 00:06:41.899252 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
May 17 00:06:41.899322 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000
May 17 00:06:41.899388 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000
May 17 00:06:41.899458 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
May 17 00:06:41.899524 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
May 17 00:06:41.899609 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000
May 17 00:06:41.899679 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
May 17 00:06:41.899746 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000
May 17 00:06:41.900213 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
May 17 00:06:41.900295 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
May 17 00:06:41.900362 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000
May 17 00:06:41.900427 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
May 17 00:06:41.900496 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
May 17 00:06:41.900564 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000
May 17 00:06:41.900653 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff] to [bus 05] add_size 100000 add_align 100000
May 17 00:06:41.900726 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
May 17 00:06:41.900930 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000
May 17 00:06:41.901008 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000
May 17 00:06:41.901080 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
May 17 00:06:41.901147 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000
May 17 00:06:41.901238 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000
May 17 00:06:41.901365 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
May 17 00:06:41.901451 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000
May 17 00:06:41.901527 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000
May 17 00:06:41.901659 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
May 17 00:06:41.901740 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000
May 17 00:06:41.901929 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000
May 17 00:06:41.902044 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x10000000-0x101fffff]
May 17 00:06:41.902129 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x8000000000-0x80001fffff 64bit pref]
May 17 00:06:41.902214 kernel: pci 0000:00:02.1: BAR 14: assigned [mem 0x10200000-0x103fffff]
May 17 00:06:41.902290 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref]
May 17 00:06:41.902388 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff]
May 17 00:06:41.902464 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref]
May 17 00:06:41.902537 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff]
May 17 00:06:41.902629 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref]
May 17 00:06:41.902746 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff]
May 17 00:06:41.902939 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref]
May 17 00:06:41.903018 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff]
May 17 00:06:41.903083 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref]
May 17 00:06:41.903150 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff]
May 17 00:06:41.903214 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref]
May 17 00:06:41.903281 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff]
May 17 00:06:41.903348 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref]
May 17 00:06:41.903414 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff]
May 17 00:06:41.903514 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref]
May 17 00:06:41.903601 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref]
May 17 00:06:41.903676 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff]
May 17 00:06:41.903743 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff]
May 17 00:06:41.903827 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
May 17 00:06:41.903907 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff]
May 17 00:06:41.903981 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
May 17 00:06:41.904068 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff]
May 17 00:06:41.904153 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
May 17 00:06:41.904224 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff]
May 17 00:06:41.904292 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
May 17 00:06:41.904401 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff]
May 17 00:06:41.904479 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
May 17 00:06:41.904550 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff]
May 17 00:06:41.904677 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
May 17 00:06:41.904773 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff]
May 17 00:06:41.904847 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
May 17 00:06:41.904918 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff]
May 17 00:06:41.906976 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
May 17 00:06:41.907060 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff]
May 17 00:06:41.907127 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x9000-0x9fff]
May 17 00:06:41.907200 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007]
May 17 00:06:41.907278 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x10000000-0x1007ffff pref]
May 17 00:06:41.907404 kernel: pci 0000:01:00.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
May 17 00:06:41.907696 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff]
May 17 00:06:41.907791 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
May 17 00:06:41.907862 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
May 17 00:06:41.907929 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]
May 17 00:06:41.909525 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]
May 17 00:06:41.909661 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit]
May 17 00:06:41.909750 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
May 17 00:06:41.909837 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
May 17 00:06:41.909994 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]
May 17 00:06:41.910073 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]
May 17 00:06:41.910151 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref]
May 17 00:06:41.910227 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff]
May 17 00:06:41.910295 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
May 17 00:06:41.910369 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
May 17 00:06:41.910466 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]
May 17 00:06:41.910540 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]
May 17 00:06:41.910647 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref]
May 17 00:06:41.910853 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
May 17 00:06:41.910954 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
May 17 00:06:41.911029 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]
May 17 00:06:41.911093 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]
May 17 00:06:41.911167 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref]
May 17 00:06:41.911235 kernel: pci 0000:05:00.0: BAR 1: assigned [mem 0x10800000-0x10800fff]
May 17 00:06:41.911306 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
May 17 00:06:41.911372 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
May 17 00:06:41.911438 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]
May 17 00:06:41.911503 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]
May 17 00:06:41.911578 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref]
May 17 00:06:41.911699 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff]
May 17 00:06:41.914128 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
May 17 00:06:41.914241 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
May 17 00:06:41.914310 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]
May 17 00:06:41.914378 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]
May 17 00:06:41.914455 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref]
May 17 00:06:41.914524 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref]
May 17 00:06:41.914617 kernel: pci 0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff]
May 17 00:06:41.914690 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
May 17 00:06:41.914756 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
May 17 00:06:41.914841 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]
May 17 00:06:41.914907 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]
May 17 00:06:41.917004 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
May 17 00:06:41.917097 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
May 17 00:06:41.917161 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]
May 17 00:06:41.917233 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]
May 17 00:06:41.917304 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
May 17 00:06:41.917369 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]
May 17 00:06:41.917434 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff]
May 17 00:06:41.917498 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]
May 17 00:06:41.917566 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
May 17 00:06:41.917652 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
May 17 00:06:41.917715 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
May 17 00:06:41.917820 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
May 17 00:06:41.917885 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff]
May 17 00:06:41.917948 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref]
May 17 00:06:41.918016 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff]
May 17 00:06:41.918077 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff]
May 17 00:06:41.918136 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref]
May 17 00:06:41.918205 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff]
May 17 00:06:41.918272 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff]
May 17 00:06:41.918344 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref]
May 17 00:06:41.918419 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff]
May 17 00:06:41.918479 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff]
May 17 00:06:41.918540 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref]
May 17 00:06:41.918664 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff]
May 17 00:06:41.918734 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff]
May 17 00:06:41.920045 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref]
May 17 00:06:41.920144 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff]
May 17 00:06:41.920209 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff]
May 17 00:06:41.920282 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref]
May 17 00:06:41.920373 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff]
May 17 00:06:41.920451 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff]
May 17 00:06:41.920521 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref]
May 17 00:06:41.920619 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff]
May 17 00:06:41.920687 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff]
May 17 00:06:41.921922 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref]
May 17 00:06:41.922035 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff]
May 17 00:06:41.922097 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff]
May 17 00:06:41.922156 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref]
May 17 00:06:41.922166 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
May 17 00:06:41.922175 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
May 17 00:06:41.922183 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
May 17 00:06:41.922191 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
May 17 00:06:41.922199 kernel: iommu: Default domain type: Translated
May 17 00:06:41.922209 kernel: iommu: DMA domain TLB invalidation policy: strict mode
May 17 00:06:41.922218 kernel: efivars: Registered efivars operations
May 17 00:06:41.922226 kernel: vgaarb: loaded
May 17 00:06:41.922234 kernel: clocksource: Switched to clocksource arch_sys_counter
May 17 00:06:41.922241 kernel: VFS: Disk quotas dquot_6.6.0
May 17 00:06:41.922249 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 17 00:06:41.922257 kernel: pnp: PnP ACPI init
May 17 00:06:41.922334 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
May 17 00:06:41.922348 kernel: pnp: PnP ACPI: found 1 devices
May 17 00:06:41.922356 kernel: NET: Registered PF_INET protocol family
May 17 00:06:41.922364 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 17 00:06:41.922372 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 17 00:06:41.922380 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 17 00:06:41.922388 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 17 00:06:41.922396 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 17 00:06:41.922404 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 17 00:06:41.922412 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 17 00:06:41.922422 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 17 00:06:41.922430 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 17 00:06:41.922506 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002)
May 17 00:06:41.922518 kernel: PCI: CLS 0 bytes, default 64
May 17 00:06:41.922526 kernel: kvm [1]: HYP mode not available
May 17 00:06:41.922534 kernel: Initialise system trusted keyrings
May 17 00:06:41.922542 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 17 00:06:41.922550 kernel: Key type asymmetric registered
May 17 00:06:41.922558 kernel: Asymmetric key parser 'x509' registered
May 17 00:06:41.922568 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
May 17 00:06:41.922576 kernel: io scheduler mq-deadline registered
May 17 00:06:41.922584 kernel: io scheduler kyber registered
May 17 00:06:41.922603 kernel: io scheduler bfq registered
May 17 00:06:41.922613 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
May 17 00:06:41.922694 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50
May 17 00:06:41.922788 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50
May 17 00:06:41.922946 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
May 17 00:06:41.923029 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51
May 17 00:06:41.923098 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51
May 17 00:06:41.923165 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
May 17 00:06:41.923234 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52
May 17 00:06:41.923308 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52
May 17 00:06:41.923373 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
May 17 00:06:41.923490 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53
May 17 00:06:41.923564 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53
May 17 00:06:41.923673 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
May 17 00:06:41.923750 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54
May 17 00:06:41.925387 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54
May 17 00:06:41.925476 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
May 17 00:06:41.925560 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55
May 17 00:06:41.925646 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55
May 17 00:06:41.925725 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
May 17 00:06:41.925847 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56
May 17 00:06:41.925937 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56
May 17 00:06:41.926019 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
May 17 00:06:41.926107 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57
May 17 00:06:41.926213 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57
May 17 00:06:41.926288 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
May 17 00:06:41.926300 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38
May 17 00:06:41.926368 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58
May 17 00:06:41.926436 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58
May 17 00:06:41.926507 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
May 17 00:06:41.926518 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
May 17 00:06:41.926526 kernel: ACPI: button: Power Button [PWRB]
May 17 00:06:41.926534 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
May 17 00:06:41.926664 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002)
May 17 00:06:41.928372 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002)
May 17 00:06:41.928402 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 17 00:06:41.928411 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
May 17 00:06:41.928543 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001)
May 17 00:06:41.928557 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A
May 17 00:06:41.928567 kernel: thunder_xcv, ver 1.0
May 17 00:06:41.928576 kernel: thunder_bgx, ver 1.0
May 17 00:06:41.928585 kernel: nicpf, ver 1.0
May 17 00:06:41.928640 kernel: nicvf, ver 1.0
May 17 00:06:41.928748 kernel: rtc-efi rtc-efi.0: registered as rtc0
May 17 00:06:41.928895 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-17T00:06:41 UTC (1747440401)
May 17 00:06:41.928917 kernel: hid: raw HID events driver (C) Jiri Kosina
May 17 00:06:41.928927 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
May 17 00:06:41.928937 kernel: watchdog: Delayed init of the lockup detector failed: -19
May 17 00:06:41.928946 kernel: watchdog: Hard watchdog permanently disabled
May 17 00:06:41.928956 kernel: NET: Registered PF_INET6 protocol family
May 17 00:06:41.928965 kernel: Segment Routing with IPv6
May 17 00:06:41.928974 kernel: In-situ OAM (IOAM) with IPv6
May 17 00:06:41.928983 kernel: NET: Registered PF_PACKET protocol family
May 17 00:06:41.928992 kernel: Key type dns_resolver registered
May 17 00:06:41.929004 kernel: registered taskstats version 1
May 17 00:06:41.929014 kernel: Loading compiled-in X.509 certificates
May 17 00:06:41.929022 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.90-flatcar: 02f7129968574a1ae76b1ee42e7674ea1c42071b'
May 17 00:06:41.929030 kernel: Key type .fscrypt registered
May 17 00:06:41.929037 kernel: Key type fscrypt-provisioning registered
May 17 00:06:41.929045 kernel: ima: No TPM chip found, activating TPM-bypass!
May 17 00:06:41.929053 kernel: ima: Allocated hash algorithm: sha1
May 17 00:06:41.929061 kernel: ima: No architecture policies found
May 17 00:06:41.929069 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
May 17 00:06:41.929079 kernel: clk: Disabling unused clocks
May 17 00:06:41.929086 kernel: Freeing unused kernel memory: 39424K
May 17 00:06:41.929094 kernel: Run /init as init process
May 17 00:06:41.929102 kernel: with arguments:
May 17 00:06:41.929110 kernel: /init
May 17 00:06:41.929117 kernel: with environment:
May 17 00:06:41.929138 kernel: HOME=/
May 17 00:06:41.929146 kernel: TERM=linux
May 17 00:06:41.929154 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 17 00:06:41.929166 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 17 00:06:41.929177 systemd[1]: Detected virtualization kvm.
May 17 00:06:41.929186 systemd[1]: Detected architecture arm64.
May 17 00:06:41.929194 systemd[1]: Running in initrd.
May 17 00:06:41.929202 systemd[1]: No hostname configured, using default hostname.
May 17 00:06:41.929210 systemd[1]: Hostname set to .
May 17 00:06:41.929218 systemd[1]: Initializing machine ID from VM UUID.
May 17 00:06:41.929228 systemd[1]: Queued start job for default target initrd.target.
May 17 00:06:41.929236 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 17 00:06:41.929245 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 17 00:06:41.929254 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 17 00:06:41.929262 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 17 00:06:41.929271 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 17 00:06:41.929279 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 17 00:06:41.929290 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 17 00:06:41.929301 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 17 00:06:41.929310 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 17 00:06:41.929318 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 17 00:06:41.929326 systemd[1]: Reached target paths.target - Path Units.
May 17 00:06:41.929334 systemd[1]: Reached target slices.target - Slice Units.
May 17 00:06:41.929343 systemd[1]: Reached target swap.target - Swaps.
May 17 00:06:41.929351 systemd[1]: Reached target timers.target - Timer Units.
May 17 00:06:41.929361 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 17 00:06:41.929369 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 17 00:06:41.929378 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 17 00:06:41.929386 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
May 17 00:06:41.929394 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 17 00:06:41.929403 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 17 00:06:41.929411 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 17 00:06:41.929419 systemd[1]: Reached target sockets.target - Socket Units.
May 17 00:06:41.929428 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 17 00:06:41.929441 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 17 00:06:41.929451 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 17 00:06:41.929461 systemd[1]: Starting systemd-fsck-usr.service...
May 17 00:06:41.929470 systemd[1]: Starting systemd-journald.service - Journal Service...
May 17 00:06:41.929481 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 17 00:06:41.929491 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 17 00:06:41.929500 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 17 00:06:41.929510 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 17 00:06:41.929550 systemd-journald[237]: Collecting audit messages is disabled.
May 17 00:06:41.929573 systemd[1]: Finished systemd-fsck-usr.service.
May 17 00:06:41.929586 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 17 00:06:41.929605 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 17 00:06:41.929615 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 17 00:06:41.929623 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 17 00:06:41.929634 kernel: Bridge firewalling registered
May 17 00:06:41.929643 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 17 00:06:41.929654 systemd-journald[237]: Journal started
May 17 00:06:41.929735 systemd-journald[237]: Runtime Journal (/run/log/journal/de06ac0728a246efb3588236393f51be) is 8.0M, max 76.6M, 68.6M free.
May 17 00:06:41.905139 systemd-modules-load[238]: Inserted module 'overlay'
May 17 00:06:41.926083 systemd-modules-load[238]: Inserted module 'br_netfilter'
May 17 00:06:41.941481 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 17 00:06:41.941544 systemd[1]: Started systemd-journald.service - Journal Service.
May 17 00:06:41.942146 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 17 00:06:41.952190 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 17 00:06:41.961063 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 17 00:06:41.965337 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 17 00:06:41.981979 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 17 00:06:41.987097 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 17 00:06:41.995107 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 17 00:06:42.000634 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 17 00:06:42.010052 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 17 00:06:42.029621 systemd-resolved[271]: Positive Trust Anchors:
May 17 00:06:42.030300 systemd-resolved[271]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 17 00:06:42.031056 dracut-cmdline[274]: dracut-dracut-053
May 17 00:06:42.031660 systemd-resolved[271]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 17 00:06:42.041065 systemd-resolved[271]: Defaulting to hostname 'linux'.
May 17 00:06:42.042163 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 17 00:06:42.042807 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 17 00:06:42.045912 dracut-cmdline[274]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=3554ca41327a0c5ba7e4ac1b3147487d73f35805806dcb20264133a9c301eb5d
May 17 00:06:42.139840 kernel: SCSI subsystem initialized
May 17 00:06:42.144806 kernel: Loading iSCSI transport class v2.0-870.
May 17 00:06:42.152994 kernel: iscsi: registered transport (tcp)
May 17 00:06:42.166819 kernel: iscsi: registered transport (qla4xxx)
May 17 00:06:42.166915 kernel: QLogic iSCSI HBA Driver
May 17 00:06:42.213686 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 17 00:06:42.225130 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 17 00:06:42.248956 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 17 00:06:42.249070 kernel: device-mapper: uevent: version 1.0.3
May 17 00:06:42.249096 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
May 17 00:06:42.299834 kernel: raid6: neonx8 gen() 15525 MB/s
May 17 00:06:42.316879 kernel: raid6: neonx4 gen() 15465 MB/s
May 17 00:06:42.334059 kernel: raid6: neonx2 gen() 13022 MB/s
May 17 00:06:42.350825 kernel: raid6: neonx1 gen() 10387 MB/s
May 17 00:06:42.367808 kernel: raid6: int64x8 gen() 6895 MB/s
May 17 00:06:42.384818 kernel: raid6: int64x4 gen() 7249 MB/s
May 17 00:06:42.401843 kernel: raid6: int64x2 gen() 6046 MB/s
May 17 00:06:42.418856 kernel: raid6: int64x1 gen() 5002 MB/s
May 17 00:06:42.418959 kernel: raid6: using algorithm neonx8 gen() 15525 MB/s
May 17 00:06:42.436198 kernel: raid6: .... xor() 11756 MB/s, rmw enabled
May 17 00:06:42.436317 kernel: raid6: using neon recovery algorithm
May 17 00:06:42.440795 kernel: xor: measuring software checksum speed
May 17 00:06:42.440859 kernel: 8regs : 19754 MB/sec
May 17 00:06:42.440874 kernel: 32regs : 17677 MB/sec
May 17 00:06:42.442107 kernel: arm64_neon : 26954 MB/sec
May 17 00:06:42.442166 kernel: xor: using function: arm64_neon (26954 MB/sec)
May 17 00:06:42.493853 kernel: Btrfs loaded, zoned=no, fsverity=no
May 17 00:06:42.508333 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 17 00:06:42.516116 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 17 00:06:42.546908 systemd-udevd[456]: Using default interface naming scheme 'v255'.
May 17 00:06:42.551108 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 17 00:06:42.560113 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 17 00:06:42.577314 dracut-pre-trigger[464]: rd.md=0: removing MD RAID activation
May 17 00:06:42.621533 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 17 00:06:42.629064 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 17 00:06:42.682241 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 17 00:06:42.690228 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 17 00:06:42.716362 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 17 00:06:42.718352 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 17 00:06:42.720169 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 17 00:06:42.721877 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 17 00:06:42.729023 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 17 00:06:42.743685 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 17 00:06:42.776789 kernel: scsi host0: Virtio SCSI HBA
May 17 00:06:42.779790 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5
May 17 00:06:42.779867 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
May 17 00:06:42.820829 kernel: ACPI: bus type USB registered
May 17 00:06:42.820884 kernel: usbcore: registered new interface driver usbfs
May 17 00:06:42.826780 kernel: usbcore: registered new interface driver hub
May 17 00:06:42.826838 kernel: usbcore: registered new device driver usb
May 17 00:06:42.837149 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 17 00:06:42.837274 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 17 00:06:42.839098 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 17 00:06:42.842332 kernel: sr 0:0:0:0: Power-on or device reset occurred
May 17 00:06:42.842262 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 17 00:06:42.842463 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 17 00:06:42.843137 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 17 00:06:42.850884 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray
May 17 00:06:42.851101 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
May 17 00:06:42.851084 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 17 00:06:42.856823 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
May 17 00:06:42.858199 kernel: sd 0:0:0:1: Power-on or device reset occurred
May 17 00:06:42.858358 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB)
May 17 00:06:42.860683 kernel: sd 0:0:0:1: [sda] Write Protect is off
May 17 00:06:42.861404 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08
May 17 00:06:42.861499 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
May 17 00:06:42.867900 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 17 00:06:42.867962 kernel: GPT:17805311 != 80003071
May 17 00:06:42.867974 kernel: GPT:Alternate GPT header not at the end of the disk.
May 17 00:06:42.867984 kernel: GPT:17805311 != 80003071
May 17 00:06:42.867993 kernel: GPT: Use GNU Parted to correct GPT errors.
May 17 00:06:42.868799 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 17 00:06:42.870858 kernel: sd 0:0:0:1: [sda] Attached SCSI disk
May 17 00:06:42.876049 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
May 17 00:06:42.876266 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1
May 17 00:06:42.879791 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
May 17 00:06:42.882141 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
May 17 00:06:42.882343 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2
May 17 00:06:42.880812 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 17 00:06:42.888879 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed
May 17 00:06:42.889103 kernel: hub 1-0:1.0: USB hub found
May 17 00:06:42.889244 kernel: hub 1-0:1.0: 4 ports detected
May 17 00:06:42.889328 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
May 17 00:06:42.889434 kernel: hub 2-0:1.0: USB hub found May 17 00:06:42.889522 kernel: hub 2-0:1.0: 4 ports detected May 17 00:06:42.889952 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 17 00:06:42.917809 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 17 00:06:42.930812 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (500) May 17 00:06:42.941789 kernel: BTRFS: device fsid 4797bc80-d55e-4b4a-8ede-cb88964b0162 devid 1 transid 43 /dev/sda3 scanned by (udev-worker) (501) May 17 00:06:42.944100 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. May 17 00:06:42.952213 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. May 17 00:06:42.960158 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. May 17 00:06:42.965452 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. May 17 00:06:42.967822 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. May 17 00:06:42.975991 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 17 00:06:42.983890 disk-uuid[573]: Primary Header is updated. May 17 00:06:42.983890 disk-uuid[573]: Secondary Entries is updated. May 17 00:06:42.983890 disk-uuid[573]: Secondary Header is updated. May 17 00:06:42.990818 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 17 00:06:43.129870 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd May 17 00:06:43.267817 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1 May 17 00:06:43.267906 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 May 17 00:06:43.268138 kernel: usbcore: registered new interface driver usbhid May 17 00:06:43.268828 kernel: usbhid: USB HID core driver May 17 00:06:43.374945 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd May 17 00:06:43.503806 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2 May 17 00:06:43.556798 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0 May 17 00:06:44.007835 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 17 00:06:44.008169 disk-uuid[574]: The operation has completed successfully. May 17 00:06:44.058230 systemd[1]: disk-uuid.service: Deactivated successfully. May 17 00:06:44.059149 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 17 00:06:44.083121 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 17 00:06:44.088943 sh[592]: Success May 17 00:06:44.107789 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" May 17 00:06:44.177043 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 17 00:06:44.190956 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 17 00:06:44.193042 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
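The escaped device-unit names above (dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device and friends) come from systemd's path-to-unit-name encoding: "/" maps to "-", and bytes outside [A-Za-z0-9:_.] (including a literal "-") are hex-escaped as \xNN. A minimal sketch of that encoding; the authoritative rules are in systemd's unit_name_from_path() and the systemd-escape tool:

    # Sketch of systemd's path escaping ("systemd-escape --path" behavior, simplified).
    ALLOWED = set("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789:_.")

    def escape_path(path, suffix=".device"):
        trimmed = path.strip("/")
        out = []
        for i, ch in enumerate(trimmed):
            if ch == "/":
                out.append("-")                      # path separators become dashes
            elif ch in ALLOWED and not (i == 0 and ch == "."):
                out.append(ch)                       # safe characters pass through
            else:
                out.extend("\\x%02x" % b for b in ch.encode())  # everything else is hex-escaped
        return "".join(out) + suffix

    print(escape_path("/dev/disk/by-label/EFI-SYSTEM"))
    # -> dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device, matching the unit name in the log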
May 17 00:06:44.217121 kernel: BTRFS info (device dm-0): first mount of filesystem 4797bc80-d55e-4b4a-8ede-cb88964b0162 May 17 00:06:44.217208 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm May 17 00:06:44.217234 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead May 17 00:06:44.218808 kernel: BTRFS info (device dm-0): disabling log replay at mount time May 17 00:06:44.218862 kernel: BTRFS info (device dm-0): using free space tree May 17 00:06:44.229796 kernel: BTRFS info (device dm-0): enabling ssd optimizations May 17 00:06:44.233570 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 17 00:06:44.235314 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 17 00:06:44.243084 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 17 00:06:44.246099 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... May 17 00:06:44.259299 kernel: BTRFS info (device sda6): first mount of filesystem 28a3b64b-9ec4-4fbe-928b-f7ea14288ccf May 17 00:06:44.259367 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm May 17 00:06:44.259380 kernel: BTRFS info (device sda6): using free space tree May 17 00:06:44.263794 kernel: BTRFS info (device sda6): enabling ssd optimizations May 17 00:06:44.263866 kernel: BTRFS info (device sda6): auto enabling async discard May 17 00:06:44.277859 kernel: BTRFS info (device sda6): last unmount of filesystem 28a3b64b-9ec4-4fbe-928b-f7ea14288ccf May 17 00:06:44.277711 systemd[1]: mnt-oem.mount: Deactivated successfully. May 17 00:06:44.288326 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 17 00:06:44.297499 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 17 00:06:44.377650 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 17 00:06:44.389957 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 17 00:06:44.410419 systemd-networkd[778]: lo: Link UP May 17 00:06:44.410432 systemd-networkd[778]: lo: Gained carrier May 17 00:06:44.412709 systemd-networkd[778]: Enumeration completed May 17 00:06:44.413255 systemd[1]: Started systemd-networkd.service - Network Configuration. May 17 00:06:44.413494 systemd-networkd[778]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 17 00:06:44.413498 systemd-networkd[778]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 17 00:06:44.416912 ignition[681]: Ignition 2.19.0 May 17 00:06:44.414782 systemd-networkd[778]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 17 00:06:44.416920 ignition[681]: Stage: fetch-offline May 17 00:06:44.414786 systemd-networkd[778]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. 
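verity-setup.service above, together with the verity.usrhash= kernel argument, sets up dm-verity: the read-only /usr partition is covered by a hash tree whose root must match the trusted hash from the command line, so any tampered block is caught when it is read. A toy root-hash computation showing the idea; real dm-verity adds a salt and an on-disk tree layout omitted here:

    import hashlib

    BLOCK = 4096

    def verity_root(data: bytes) -> str:
        # Leaf level: one SHA-256 digest per 4 KiB data block.
        level = [hashlib.sha256(data[i:i + BLOCK]).digest()
                 for i in range(0, len(data), BLOCK)]
        # Interior levels: hash groups of digests until one root remains
        # (128 x 32-byte digests fill exactly one 4 KiB node).
        while len(level) > 1:
            level = [hashlib.sha256(b"".join(level[i:i + 128])).digest()
                     for i in range(0, len(level), 128)]
        return level[0].hex()

    image = bytes(BLOCK * 10)  # stand-in for the /usr partition contents
    print("root hash:", verity_root(image))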
May 17 00:06:44.416979 ignition[681]: no configs at "/usr/lib/ignition/base.d" May 17 00:06:44.416083 systemd-networkd[778]: eth0: Link UP May 17 00:06:44.416988 ignition[681]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" May 17 00:06:44.416087 systemd-networkd[778]: eth0: Gained carrier May 17 00:06:44.417184 ignition[681]: parsed url from cmdline: "" May 17 00:06:44.416099 systemd-networkd[778]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 17 00:06:44.417187 ignition[681]: no config URL provided May 17 00:06:44.416825 systemd[1]: Reached target network.target - Network. May 17 00:06:44.417191 ignition[681]: reading system config file "/usr/lib/ignition/user.ign" May 17 00:06:44.419674 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 17 00:06:44.417201 ignition[681]: no config at "/usr/lib/ignition/user.ign" May 17 00:06:44.422037 systemd-networkd[778]: eth1: Link UP May 17 00:06:44.417206 ignition[681]: failed to fetch config: resource requires networking May 17 00:06:44.422042 systemd-networkd[778]: eth1: Gained carrier May 17 00:06:44.417408 ignition[681]: Ignition finished successfully May 17 00:06:44.422052 systemd-networkd[778]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 17 00:06:44.428033 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... May 17 00:06:44.445526 ignition[782]: Ignition 2.19.0 May 17 00:06:44.445538 ignition[782]: Stage: fetch May 17 00:06:44.445741 ignition[782]: no configs at "/usr/lib/ignition/base.d" May 17 00:06:44.445752 ignition[782]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" May 17 00:06:44.445889 ignition[782]: parsed url from cmdline: "" May 17 00:06:44.445893 ignition[782]: no config URL provided May 17 00:06:44.445900 ignition[782]: reading system config file "/usr/lib/ignition/user.ign" May 17 00:06:44.445908 ignition[782]: no config at "/usr/lib/ignition/user.ign" May 17 00:06:44.445934 ignition[782]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 May 17 00:06:44.446498 ignition[782]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable May 17 00:06:44.452850 systemd-networkd[778]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 May 17 00:06:44.477879 systemd-networkd[778]: eth0: DHCPv4 address 188.245.126.139/32, gateway 172.31.1.1 acquired from 172.31.1.1 May 17 00:06:44.647021 ignition[782]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2 May 17 00:06:44.651811 ignition[782]: GET result: OK May 17 00:06:44.651991 ignition[782]: parsing config with SHA512: 2f7e536c430172a2a29eeb4b274ff98bac02545992cbca79b6ef46b8824c936d3be324b1f0290dbbc5256bdb127ca6ef1dbd16a019f2731e0aa94c3a8e9f3088 May 17 00:06:44.657047 unknown[782]: fetched base config from "system" May 17 00:06:44.657058 unknown[782]: fetched base config from "system" May 17 00:06:44.657522 ignition[782]: fetch: fetch complete May 17 00:06:44.657064 unknown[782]: fetched user config from "hetzner" May 17 00:06:44.657529 ignition[782]: fetch: fetch passed May 17 00:06:44.657576 ignition[782]: Ignition finished successfully May 17 00:06:44.660211 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). May 17 00:06:44.666057 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
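The Ignition fetch stage above polls Hetzner's link-local metadata endpoint until networking is up (attempt #1 fails with "network is unreachable", attempt #2 succeeds after DHCP completes) and then logs a SHA512 of the retrieved config before parsing it. A sketch of that loop; the endpoint URL and error strings are taken from the log, while the retry cadence is an assumption:

    import hashlib
    import time
    import urllib.request

    USERDATA_URL = "http://169.254.169.254/hetzner/v1/userdata"

    def fetch_userdata(retries=10, delay=1.0) -> bytes:
        for attempt in range(1, retries + 1):
            print(f"GET {USERDATA_URL}: attempt #{attempt}")
            try:
                with urllib.request.urlopen(USERDATA_URL, timeout=5) as resp:
                    return resp.read()
            except OSError as err:  # URLError subclasses OSError; covers "network is unreachable"
                print(f"GET error: {err}")
                time.sleep(delay)
        raise RuntimeError("failed to fetch config: resource requires networking")

    config = fetch_userdata()
    print("parsing config with SHA512:", hashlib.sha512(config).hexdigest())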
May 17 00:06:44.680357 ignition[789]: Ignition 2.19.0 May 17 00:06:44.680368 ignition[789]: Stage: kargs May 17 00:06:44.680661 ignition[789]: no configs at "/usr/lib/ignition/base.d" May 17 00:06:44.680671 ignition[789]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" May 17 00:06:44.681840 ignition[789]: kargs: kargs passed May 17 00:06:44.681958 ignition[789]: Ignition finished successfully May 17 00:06:44.684077 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 17 00:06:44.690040 systemd[1]: Starting ignition-disks.service - Ignition (disks)... May 17 00:06:44.705674 ignition[795]: Ignition 2.19.0 May 17 00:06:44.705688 ignition[795]: Stage: disks May 17 00:06:44.705887 ignition[795]: no configs at "/usr/lib/ignition/base.d" May 17 00:06:44.705897 ignition[795]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" May 17 00:06:44.706882 ignition[795]: disks: disks passed May 17 00:06:44.711894 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 17 00:06:44.706937 ignition[795]: Ignition finished successfully May 17 00:06:44.713063 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 17 00:06:44.713623 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 17 00:06:44.714631 systemd[1]: Reached target local-fs.target - Local File Systems. May 17 00:06:44.715402 systemd[1]: Reached target sysinit.target - System Initialization. May 17 00:06:44.716272 systemd[1]: Reached target basic.target - Basic System. May 17 00:06:44.724037 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 17 00:06:44.746065 systemd-fsck[804]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks May 17 00:06:44.750667 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 17 00:06:44.755956 systemd[1]: Mounting sysroot.mount - /sysroot... May 17 00:06:44.800049 kernel: EXT4-fs (sda9): mounted filesystem 50a777b7-c00f-4923-84ce-1c186fc0fd3b r/w with ordered data mode. Quota mode: none. May 17 00:06:44.800657 systemd[1]: Mounted sysroot.mount - /sysroot. May 17 00:06:44.801796 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 17 00:06:44.814048 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 17 00:06:44.818089 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 17 00:06:44.820428 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... May 17 00:06:44.823052 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 17 00:06:44.823089 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 17 00:06:44.831125 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 17 00:06:44.838356 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (812) May 17 00:06:44.837618 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
May 17 00:06:44.841784 kernel: BTRFS info (device sda6): first mount of filesystem 28a3b64b-9ec4-4fbe-928b-f7ea14288ccf May 17 00:06:44.842786 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm May 17 00:06:44.842884 kernel: BTRFS info (device sda6): using free space tree May 17 00:06:44.849550 kernel: BTRFS info (device sda6): enabling ssd optimizations May 17 00:06:44.849651 kernel: BTRFS info (device sda6): auto enabling async discard May 17 00:06:44.854388 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 17 00:06:44.898873 coreos-metadata[814]: May 17 00:06:44.898 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 May 17 00:06:44.901270 coreos-metadata[814]: May 17 00:06:44.901 INFO Fetch successful May 17 00:06:44.902895 coreos-metadata[814]: May 17 00:06:44.902 INFO wrote hostname ci-4081-3-3-n-e61ddff57a to /sysroot/etc/hostname May 17 00:06:44.906076 initrd-setup-root[840]: cut: /sysroot/etc/passwd: No such file or directory May 17 00:06:44.907613 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. May 17 00:06:44.914078 initrd-setup-root[848]: cut: /sysroot/etc/group: No such file or directory May 17 00:06:44.921081 initrd-setup-root[855]: cut: /sysroot/etc/shadow: No such file or directory May 17 00:06:44.926981 initrd-setup-root[862]: cut: /sysroot/etc/gshadow: No such file or directory May 17 00:06:45.031158 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 17 00:06:45.043158 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 17 00:06:45.047041 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 17 00:06:45.057787 kernel: BTRFS info (device sda6): last unmount of filesystem 28a3b64b-9ec4-4fbe-928b-f7ea14288ccf May 17 00:06:45.076149 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 17 00:06:45.081748 ignition[930]: INFO : Ignition 2.19.0 May 17 00:06:45.081748 ignition[930]: INFO : Stage: mount May 17 00:06:45.082899 ignition[930]: INFO : no configs at "/usr/lib/ignition/base.d" May 17 00:06:45.082899 ignition[930]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" May 17 00:06:45.084929 ignition[930]: INFO : mount: mount passed May 17 00:06:45.084929 ignition[930]: INFO : Ignition finished successfully May 17 00:06:45.085166 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 17 00:06:45.089945 systemd[1]: Starting ignition-files.service - Ignition (files)... May 17 00:06:45.217239 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 17 00:06:45.225059 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 17 00:06:45.237809 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (941) May 17 00:06:45.239891 kernel: BTRFS info (device sda6): first mount of filesystem 28a3b64b-9ec4-4fbe-928b-f7ea14288ccf May 17 00:06:45.239959 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm May 17 00:06:45.239980 kernel: BTRFS info (device sda6): using free space tree May 17 00:06:45.244062 kernel: BTRFS info (device sda6): enabling ssd optimizations May 17 00:06:45.244144 kernel: BTRFS info (device sda6): auto enabling async discard May 17 00:06:45.247145 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
May 17 00:06:45.272818 ignition[958]: INFO : Ignition 2.19.0 May 17 00:06:45.272818 ignition[958]: INFO : Stage: files May 17 00:06:45.272818 ignition[958]: INFO : no configs at "/usr/lib/ignition/base.d" May 17 00:06:45.272818 ignition[958]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" May 17 00:06:45.275644 ignition[958]: DEBUG : files: compiled without relabeling support, skipping May 17 00:06:45.275644 ignition[958]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 17 00:06:45.275644 ignition[958]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 17 00:06:45.278447 ignition[958]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 17 00:06:45.279421 ignition[958]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 17 00:06:45.279421 ignition[958]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 17 00:06:45.278896 unknown[958]: wrote ssh authorized keys file for user: core May 17 00:06:45.281669 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" May 17 00:06:45.281669 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 May 17 00:06:45.863091 systemd-networkd[778]: eth1: Gained IPv6LL May 17 00:06:45.926024 systemd-networkd[778]: eth0: Gained IPv6LL May 17 00:06:47.060011 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 17 00:06:52.683404 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" May 17 00:06:52.683404 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" May 17 00:06:52.686399 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" May 17 00:06:52.686399 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" May 17 00:06:52.686399 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" May 17 00:06:52.686399 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 17 00:06:52.686399 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 17 00:06:52.686399 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 17 00:06:52.686399 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 17 00:06:52.686399 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" May 17 00:06:52.686399 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 17 00:06:52.686399 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> 
"/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" May 17 00:06:52.686399 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" May 17 00:06:52.686399 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" May 17 00:06:52.686399 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1 May 17 00:06:53.330104 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK May 17 00:06:53.535929 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" May 17 00:06:53.535929 ignition[958]: INFO : files: op(b): [started] processing unit "prepare-helm.service" May 17 00:06:53.538925 ignition[958]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 17 00:06:53.538925 ignition[958]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 17 00:06:53.538925 ignition[958]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" May 17 00:06:53.538925 ignition[958]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" May 17 00:06:53.538925 ignition[958]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" May 17 00:06:53.538925 ignition[958]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" May 17 00:06:53.538925 ignition[958]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" May 17 00:06:53.538925 ignition[958]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service" May 17 00:06:53.548205 ignition[958]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service" May 17 00:06:53.548205 ignition[958]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json" May 17 00:06:53.548205 ignition[958]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json" May 17 00:06:53.548205 ignition[958]: INFO : files: files passed May 17 00:06:53.548205 ignition[958]: INFO : Ignition finished successfully May 17 00:06:53.542108 systemd[1]: Finished ignition-files.service - Ignition (files). May 17 00:06:53.548084 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 17 00:06:53.556296 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 17 00:06:53.562302 systemd[1]: ignition-quench.service: Deactivated successfully. May 17 00:06:53.562401 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
May 17 00:06:53.569247 initrd-setup-root-after-ignition[987]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 17 00:06:53.569247 initrd-setup-root-after-ignition[987]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 17 00:06:53.572210 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 17 00:06:53.574588 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 17 00:06:53.576049 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 17 00:06:53.582063 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 17 00:06:53.639323 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 17 00:06:53.639479 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 17 00:06:53.641014 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 17 00:06:53.642021 systemd[1]: Reached target initrd.target - Initrd Default Target. May 17 00:06:53.643048 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 17 00:06:53.647951 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 17 00:06:53.665032 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 17 00:06:53.672072 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 17 00:06:53.685140 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 17 00:06:53.686536 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 17 00:06:53.687325 systemd[1]: Stopped target timers.target - Timer Units. May 17 00:06:53.688483 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 17 00:06:53.688621 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 17 00:06:53.690400 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 17 00:06:53.691010 systemd[1]: Stopped target basic.target - Basic System. May 17 00:06:53.692264 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 17 00:06:53.693214 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 17 00:06:53.694115 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 17 00:06:53.695101 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 17 00:06:53.696136 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 17 00:06:53.697383 systemd[1]: Stopped target sysinit.target - System Initialization. May 17 00:06:53.698310 systemd[1]: Stopped target local-fs.target - Local File Systems. May 17 00:06:53.699284 systemd[1]: Stopped target swap.target - Swaps. May 17 00:06:53.700089 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 17 00:06:53.700209 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 17 00:06:53.701318 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 17 00:06:53.701944 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 17 00:06:53.702840 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 17 00:06:53.702913 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
May 17 00:06:53.704051 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 17 00:06:53.704170 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 17 00:06:53.705522 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 17 00:06:53.705644 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 17 00:06:53.706695 systemd[1]: ignition-files.service: Deactivated successfully. May 17 00:06:53.706810 systemd[1]: Stopped ignition-files.service - Ignition (files). May 17 00:06:53.707742 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. May 17 00:06:53.707854 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. May 17 00:06:53.717032 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 17 00:06:53.721587 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 17 00:06:53.722739 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 17 00:06:53.722946 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 17 00:06:53.725444 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 17 00:06:53.725583 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 17 00:06:53.736812 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 17 00:06:53.739118 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 17 00:06:53.744455 ignition[1011]: INFO : Ignition 2.19.0 May 17 00:06:53.744455 ignition[1011]: INFO : Stage: umount May 17 00:06:53.748232 ignition[1011]: INFO : no configs at "/usr/lib/ignition/base.d" May 17 00:06:53.748232 ignition[1011]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" May 17 00:06:53.748232 ignition[1011]: INFO : umount: umount passed May 17 00:06:53.748232 ignition[1011]: INFO : Ignition finished successfully May 17 00:06:53.748833 systemd[1]: ignition-mount.service: Deactivated successfully. May 17 00:06:53.748953 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 17 00:06:53.752568 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 17 00:06:53.753141 systemd[1]: ignition-disks.service: Deactivated successfully. May 17 00:06:53.753185 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 17 00:06:53.755324 systemd[1]: ignition-kargs.service: Deactivated successfully. May 17 00:06:53.755391 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 17 00:06:53.756316 systemd[1]: ignition-fetch.service: Deactivated successfully. May 17 00:06:53.756354 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). May 17 00:06:53.757180 systemd[1]: Stopped target network.target - Network. May 17 00:06:53.757940 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 17 00:06:53.757985 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 17 00:06:53.759219 systemd[1]: Stopped target paths.target - Path Units. May 17 00:06:53.760076 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 17 00:06:53.763857 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 17 00:06:53.766045 systemd[1]: Stopped target slices.target - Slice Units. May 17 00:06:53.767161 systemd[1]: Stopped target sockets.target - Socket Units. 
May 17 00:06:53.768296 systemd[1]: iscsid.socket: Deactivated successfully. May 17 00:06:53.768340 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 17 00:06:53.769216 systemd[1]: iscsiuio.socket: Deactivated successfully. May 17 00:06:53.769249 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 17 00:06:53.770104 systemd[1]: ignition-setup.service: Deactivated successfully. May 17 00:06:53.770154 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 17 00:06:53.771036 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 17 00:06:53.771073 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 17 00:06:53.772115 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 17 00:06:53.774959 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 17 00:06:53.777267 systemd[1]: sysroot-boot.service: Deactivated successfully. May 17 00:06:53.777367 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 17 00:06:53.778890 systemd-networkd[778]: eth1: DHCPv6 lease lost May 17 00:06:53.779700 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 17 00:06:53.779845 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 17 00:06:53.783124 systemd[1]: systemd-resolved.service: Deactivated successfully. May 17 00:06:53.783233 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 17 00:06:53.783986 systemd-networkd[778]: eth0: DHCPv6 lease lost May 17 00:06:53.787396 systemd[1]: systemd-networkd.service: Deactivated successfully. May 17 00:06:53.787529 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 17 00:06:53.789682 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 17 00:06:53.790076 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 17 00:06:53.798308 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 17 00:06:53.798923 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 17 00:06:53.798989 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 17 00:06:53.799972 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 17 00:06:53.800018 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 17 00:06:53.801000 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 17 00:06:53.801043 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 17 00:06:53.802175 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 17 00:06:53.802222 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 17 00:06:53.803837 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 17 00:06:53.814814 systemd[1]: network-cleanup.service: Deactivated successfully. May 17 00:06:53.816432 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 17 00:06:53.818351 systemd[1]: systemd-udevd.service: Deactivated successfully. May 17 00:06:53.818609 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 17 00:06:53.821694 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 17 00:06:53.821831 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 17 00:06:53.824118 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. 
May 17 00:06:53.824174 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 17 00:06:53.825214 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 17 00:06:53.825264 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 17 00:06:53.826915 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 17 00:06:53.826962 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 17 00:06:53.828120 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 17 00:06:53.828164 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 17 00:06:53.835996 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 17 00:06:53.836518 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 17 00:06:53.836611 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 17 00:06:53.837756 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. May 17 00:06:53.839904 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 17 00:06:53.840622 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 17 00:06:53.840668 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 17 00:06:53.844520 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 17 00:06:53.844599 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 17 00:06:53.848107 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 17 00:06:53.849047 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 17 00:06:53.851200 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 17 00:06:53.856980 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 17 00:06:53.867936 systemd[1]: Switching root. May 17 00:06:53.906118 systemd-journald[237]: Journal stopped May 17 00:06:54.862874 systemd-journald[237]: Received SIGTERM from PID 1 (systemd). May 17 00:06:54.863007 kernel: SELinux: policy capability network_peer_controls=1 May 17 00:06:54.863041 kernel: SELinux: policy capability open_perms=1 May 17 00:06:54.863063 kernel: SELinux: policy capability extended_socket_class=1 May 17 00:06:54.863083 kernel: SELinux: policy capability always_check_network=0 May 17 00:06:54.863104 kernel: SELinux: policy capability cgroup_seclabel=1 May 17 00:06:54.863125 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 17 00:06:54.863161 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 17 00:06:54.863187 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 17 00:06:54.863209 kernel: audit: type=1403 audit(1747440414.071:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 17 00:06:54.863231 systemd[1]: Successfully loaded SELinux policy in 37.304ms. May 17 00:06:54.863312 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.890ms. May 17 00:06:54.863339 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) May 17 00:06:54.863364 systemd[1]: Detected virtualization kvm. 
May 17 00:06:54.863389 systemd[1]: Detected architecture arm64. May 17 00:06:54.863412 systemd[1]: Detected first boot. May 17 00:06:54.863437 systemd[1]: Hostname set to <ci-4081-3-3-n-e61ddff57a>. May 17 00:06:54.863462 systemd[1]: Initializing machine ID from VM UUID. May 17 00:06:54.863486 zram_generator::config[1054]: No configuration found. May 17 00:06:54.863514 systemd[1]: Populated /etc with preset unit settings. May 17 00:06:54.863537 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 17 00:06:54.863574 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 17 00:06:54.863597 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 17 00:06:54.863625 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 17 00:06:54.863656 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 17 00:06:54.863679 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 17 00:06:54.863702 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 17 00:06:54.863727 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 17 00:06:54.863751 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 17 00:06:54.870009 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 17 00:06:54.870055 systemd[1]: Created slice user.slice - User and Session Slice. May 17 00:06:54.870094 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 17 00:06:54.870120 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 17 00:06:54.870144 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 17 00:06:54.870168 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 17 00:06:54.870192 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 17 00:06:54.870221 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 17 00:06:54.870244 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... May 17 00:06:54.870267 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 17 00:06:54.870305 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 17 00:06:54.870330 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 17 00:06:54.870354 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 17 00:06:54.870380 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 17 00:06:54.870405 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 17 00:06:54.870429 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 17 00:06:54.870453 systemd[1]: Reached target slices.target - Slice Units. May 17 00:06:54.870478 systemd[1]: Reached target swap.target - Swaps. May 17 00:06:54.870502 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 17 00:06:54.870532 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 17 00:06:54.870571 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 17 00:06:54.870597 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 17 00:06:54.870625 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 17 00:06:54.870649 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 17 00:06:54.870673 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 17 00:06:54.870697 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 17 00:06:54.870724 systemd[1]: Mounting media.mount - External Media Directory... May 17 00:06:54.870749 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 17 00:06:54.870863 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 17 00:06:54.870893 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 17 00:06:54.870927 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 17 00:06:54.870974 systemd[1]: Reached target machines.target - Containers. May 17 00:06:54.871010 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 17 00:06:54.871042 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 17 00:06:54.871072 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 17 00:06:54.871106 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 17 00:06:54.871142 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 17 00:06:54.871175 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 17 00:06:54.871204 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 17 00:06:54.871233 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 17 00:06:54.871262 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 17 00:06:54.871290 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 17 00:06:54.871321 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 17 00:06:54.871353 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 17 00:06:54.871387 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 17 00:06:54.871425 systemd[1]: Stopped systemd-fsck-usr.service. May 17 00:06:54.871450 systemd[1]: Starting systemd-journald.service - Journal Service... May 17 00:06:54.871478 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 17 00:06:54.871505 kernel: loop: module loaded May 17 00:06:54.871537 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 17 00:06:54.871596 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 17 00:06:54.871624 kernel: fuse: init (API version 7.39) May 17 00:06:54.872847 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 17 00:06:54.872905 systemd[1]: verity-setup.service: Deactivated successfully. May 17 00:06:54.872947 systemd[1]: Stopped verity-setup.service. May 17 00:06:54.872976 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. 
May 17 00:06:54.873005 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 17 00:06:54.873059 systemd[1]: Mounted media.mount - External Media Directory. May 17 00:06:54.873116 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 17 00:06:54.873187 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 17 00:06:54.873215 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 17 00:06:54.873243 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 17 00:06:54.873269 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 17 00:06:54.873296 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 17 00:06:54.873323 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 00:06:54.873350 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 17 00:06:54.873380 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 00:06:54.873411 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 17 00:06:54.873443 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 17 00:06:54.873471 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 17 00:06:54.873498 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 00:06:54.873524 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 17 00:06:54.873571 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 17 00:06:54.873604 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 17 00:06:54.873631 kernel: ACPI: bus type drm_connector registered May 17 00:06:54.873657 systemd[1]: Reached target network-pre.target - Preparation for Network. May 17 00:06:54.873685 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 17 00:06:54.873718 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 17 00:06:54.873749 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 17 00:06:54.873799 systemd[1]: Reached target local-fs.target - Local File Systems. May 17 00:06:54.873831 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). May 17 00:06:54.873863 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 17 00:06:54.873901 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 17 00:06:54.873987 systemd-journald[1117]: Collecting audit messages is disabled. May 17 00:06:54.874092 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 17 00:06:54.874118 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 17 00:06:54.874145 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 17 00:06:54.874177 systemd-journald[1117]: Journal started May 17 00:06:54.874227 systemd-journald[1117]: Runtime Journal (/run/log/journal/de06ac0728a246efb3588236393f51be) is 8.0M, max 76.6M, 68.6M free. May 17 00:06:54.879300 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... 
May 17 00:06:54.569151 systemd[1]: Queued start job for default target multi-user.target. May 17 00:06:54.886026 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 17 00:06:54.593746 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. May 17 00:06:54.594209 systemd[1]: systemd-journald.service: Deactivated successfully. May 17 00:06:54.909105 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 17 00:06:54.909158 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 17 00:06:54.909173 systemd[1]: Started systemd-journald.service - Journal Service. May 17 00:06:54.901062 systemd[1]: modprobe@drm.service: Deactivated successfully. May 17 00:06:54.901237 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 17 00:06:54.903332 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 17 00:06:54.904176 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 17 00:06:54.906037 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 17 00:06:54.908159 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 17 00:06:54.918905 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 17 00:06:54.931924 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 17 00:06:54.947080 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 17 00:06:54.957118 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... May 17 00:06:54.960650 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 17 00:06:54.994783 kernel: loop0: detected capacity change from 0 to 114432 May 17 00:06:54.995505 systemd-journald[1117]: Time spent on flushing to /var/log/journal/de06ac0728a246efb3588236393f51be is 61.507ms for 1131 entries. May 17 00:06:54.995505 systemd-journald[1117]: System Journal (/var/log/journal/de06ac0728a246efb3588236393f51be) is 8.0M, max 584.8M, 576.8M free. May 17 00:06:55.068668 systemd-journald[1117]: Received client request to flush runtime journal. May 17 00:06:55.068713 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 17 00:06:55.015925 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 17 00:06:55.024093 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 17 00:06:55.027840 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. May 17 00:06:55.029011 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 17 00:06:55.029893 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 17 00:06:55.044027 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... May 17 00:06:55.071393 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 17 00:06:55.076830 kernel: loop1: detected capacity change from 0 to 203944 May 17 00:06:55.079201 udevadm[1181]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. May 17 00:06:55.082857 systemd-tmpfiles[1143]: ACLs are not supported, ignoring. 
May 17 00:06:55.082890 systemd-tmpfiles[1143]: ACLs are not supported, ignoring. May 17 00:06:55.098370 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 17 00:06:55.109064 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 17 00:06:55.132785 kernel: loop2: detected capacity change from 0 to 114328 May 17 00:06:55.167205 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 17 00:06:55.173345 kernel: loop3: detected capacity change from 0 to 8 May 17 00:06:55.179962 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 17 00:06:55.192793 kernel: loop4: detected capacity change from 0 to 114432 May 17 00:06:55.207952 kernel: loop5: detected capacity change from 0 to 203944 May 17 00:06:55.210172 systemd-tmpfiles[1193]: ACLs are not supported, ignoring. May 17 00:06:55.210718 systemd-tmpfiles[1193]: ACLs are not supported, ignoring. May 17 00:06:55.219391 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 17 00:06:55.225854 kernel: loop6: detected capacity change from 0 to 114328 May 17 00:06:55.240862 kernel: loop7: detected capacity change from 0 to 8 May 17 00:06:55.245898 (sd-merge)[1195]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. May 17 00:06:55.246461 (sd-merge)[1195]: Merged extensions into '/usr'. May 17 00:06:55.253230 systemd[1]: Reloading requested from client PID 1142 ('systemd-sysext') (unit systemd-sysext.service)... May 17 00:06:55.253382 systemd[1]: Reloading... May 17 00:06:55.354787 zram_generator::config[1226]: No configuration found. May 17 00:06:55.523936 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:06:55.529806 ldconfig[1136]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 17 00:06:55.572816 systemd[1]: Reloading finished in 316 ms. May 17 00:06:55.615825 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 17 00:06:55.617302 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 17 00:06:55.626115 systemd[1]: Starting ensure-sysext.service... May 17 00:06:55.634743 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 17 00:06:55.648005 systemd[1]: Reloading requested from client PID 1260 ('systemctl') (unit ensure-sysext.service)... May 17 00:06:55.648126 systemd[1]: Reloading... May 17 00:06:55.674530 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 17 00:06:55.677227 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 17 00:06:55.678177 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 17 00:06:55.679085 systemd-tmpfiles[1261]: ACLs are not supported, ignoring. May 17 00:06:55.679594 systemd-tmpfiles[1261]: ACLs are not supported, ignoring. May 17 00:06:55.684837 systemd-tmpfiles[1261]: Detected autofs mount point /boot during canonicalization of boot. 
May 17 00:06:55.684956 systemd-tmpfiles[1261]: Skipping /boot May 17 00:06:55.696831 systemd-tmpfiles[1261]: Detected autofs mount point /boot during canonicalization of boot. May 17 00:06:55.696842 systemd-tmpfiles[1261]: Skipping /boot May 17 00:06:55.748798 zram_generator::config[1294]: No configuration found. May 17 00:06:55.854064 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:06:55.900740 systemd[1]: Reloading finished in 252 ms. May 17 00:06:55.923431 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 17 00:06:55.929799 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 17 00:06:55.943315 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... May 17 00:06:55.950137 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 17 00:06:55.954153 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 17 00:06:55.959183 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 17 00:06:55.962718 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 17 00:06:55.974106 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 17 00:06:55.978449 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 17 00:06:55.988072 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 17 00:06:55.991932 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 17 00:06:55.997143 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 17 00:06:55.997997 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 17 00:06:56.004159 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 17 00:06:56.009144 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 17 00:06:56.009307 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 17 00:06:56.010794 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 00:06:56.011174 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 17 00:06:56.023290 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 17 00:06:56.028219 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 17 00:06:56.035891 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 17 00:06:56.036612 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 17 00:06:56.037323 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 00:06:56.039818 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 17 00:06:56.053436 systemd[1]: Finished ensure-sysext.service. May 17 00:06:56.061205 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... 
May 17 00:06:56.062507 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 17 00:06:56.079088 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 17 00:06:56.082111 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 17 00:06:56.083277 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 17 00:06:56.084308 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 17 00:06:56.091257 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 17 00:06:56.102002 systemd[1]: modprobe@drm.service: Deactivated successfully. May 17 00:06:56.102175 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 17 00:06:56.103421 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 00:06:56.104342 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 17 00:06:56.106092 systemd-udevd[1333]: Using default interface naming scheme 'v255'. May 17 00:06:56.107915 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 00:06:56.108096 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 17 00:06:56.109647 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 17 00:06:56.109755 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 17 00:06:56.115485 augenrules[1370]: No rules May 17 00:06:56.123323 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. May 17 00:06:56.126880 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 17 00:06:56.138055 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 17 00:06:56.150051 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 17 00:06:56.211508 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. May 17 00:06:56.259143 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 17 00:06:56.259892 systemd[1]: Reached target time-set.target - System Time Set. May 17 00:06:56.274043 systemd-resolved[1332]: Positive Trust Anchors: May 17 00:06:56.274066 systemd-resolved[1332]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 17 00:06:56.274098 systemd-resolved[1332]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 17 00:06:56.285916 systemd-resolved[1332]: Using system hostname 'ci-4081-3-3-n-e61ddff57a'. May 17 00:06:56.288667 systemd[1]: Started systemd-resolved.service - Network Name Resolution. 
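Once systemd-resolved is up with the trust anchors listed above, its runtime view (per-link DNS servers from DHCP, search domains, DNSSEC state) can be inspected with resolvectl, its standard companion CLI; a sketch:

    # Show global and per-link resolver configuration.
    resolvectl status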
May 17 00:06:56.289967 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 17 00:06:56.323010 systemd-networkd[1384]: lo: Link UP May 17 00:06:56.323306 systemd-networkd[1384]: lo: Gained carrier May 17 00:06:56.325500 systemd-networkd[1384]: Enumeration completed May 17 00:06:56.325975 systemd[1]: Started systemd-networkd.service - Network Configuration. May 17 00:06:56.326837 systemd-networkd[1384]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 17 00:06:56.326922 systemd-networkd[1384]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. May 17 00:06:56.327171 systemd[1]: Reached target network.target - Network. May 17 00:06:56.328147 systemd-networkd[1384]: eth1: Link UP May 17 00:06:56.328228 systemd-networkd[1384]: eth1: Gained carrier May 17 00:06:56.328290 systemd-networkd[1384]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 17 00:06:56.334993 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 17 00:06:56.356891 systemd-networkd[1384]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 May 17 00:06:56.358164 systemd-timesyncd[1352]: Network configuration changed, trying to establish connection. May 17 00:06:56.367989 systemd-networkd[1384]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 17 00:06:56.387790 kernel: mousedev: PS/2 mouse device common for all mice May 17 00:06:56.419799 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. May 17 00:06:56.419925 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 17 00:06:56.440084 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 17 00:06:56.444338 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 17 00:06:56.458949 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 17 00:06:56.459557 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 17 00:06:56.459591 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 17 00:06:56.460032 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 00:06:56.460185 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 17 00:06:56.461752 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 00:06:56.463000 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 17 00:06:56.466215 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 17 00:06:56.468706 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 00:06:56.469802 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
May 17 00:06:56.472684 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 17 00:06:56.480569 systemd-networkd[1384]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 17 00:06:56.480677 systemd-networkd[1384]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 17 00:06:56.481402 systemd-timesyncd[1352]: Network configuration changed, trying to establish connection. May 17 00:06:56.481591 systemd-networkd[1384]: eth0: Link UP May 17 00:06:56.482698 systemd-networkd[1384]: eth0: Gained carrier May 17 00:06:56.482725 systemd-networkd[1384]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 17 00:06:56.494167 systemd-timesyncd[1352]: Network configuration changed, trying to establish connection. May 17 00:06:56.502822 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 43 scanned by (udev-worker) (1383) May 17 00:06:56.530089 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0 May 17 00:06:56.530156 kernel: [drm] features: -virgl +edid -resource_blob -host_visible May 17 00:06:56.530169 kernel: [drm] features: -context_init May 17 00:06:56.532937 kernel: [drm] number of scanouts: 1 May 17 00:06:56.533010 kernel: [drm] number of cap sets: 0 May 17 00:06:56.536917 systemd-networkd[1384]: eth0: DHCPv4 address 188.245.126.139/32, gateway 172.31.1.1 acquired from 172.31.1.1 May 17 00:06:56.537389 systemd-timesyncd[1352]: Network configuration changed, trying to establish connection. May 17 00:06:56.537874 systemd-timesyncd[1352]: Network configuration changed, trying to establish connection. May 17 00:06:56.542795 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0 May 17 00:06:56.549573 kernel: Console: switching to colour frame buffer device 160x50 May 17 00:06:56.549220 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 17 00:06:56.556827 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device May 17 00:06:56.560640 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. May 17 00:06:56.572911 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 17 00:06:56.580350 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 17 00:06:56.580556 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 17 00:06:56.590159 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 17 00:06:56.609586 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 17 00:06:56.672600 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 17 00:06:56.686600 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. May 17 00:06:56.693195 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... May 17 00:06:56.711596 lvm[1444]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 17 00:06:56.742870 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. May 17 00:06:56.745082 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. 
May 17 00:06:56.746555 systemd[1]: Reached target sysinit.target - System Initialization. May 17 00:06:56.748155 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 17 00:06:56.748979 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 17 00:06:56.749852 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 17 00:06:56.750511 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 17 00:06:56.751341 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 17 00:06:56.752518 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 17 00:06:56.752571 systemd[1]: Reached target paths.target - Path Units. May 17 00:06:56.753095 systemd[1]: Reached target timers.target - Timer Units. May 17 00:06:56.754654 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 17 00:06:56.756861 systemd[1]: Starting docker.socket - Docker Socket for the API... May 17 00:06:56.765386 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 17 00:06:56.767975 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... May 17 00:06:56.769408 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 17 00:06:56.770191 systemd[1]: Reached target sockets.target - Socket Units. May 17 00:06:56.770791 systemd[1]: Reached target basic.target - Basic System. May 17 00:06:56.771377 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 17 00:06:56.771412 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 17 00:06:56.778059 systemd[1]: Starting containerd.service - containerd container runtime... May 17 00:06:56.784103 lvm[1448]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 17 00:06:56.784882 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... May 17 00:06:56.787135 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 17 00:06:56.794053 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 17 00:06:56.798071 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 17 00:06:56.798636 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 17 00:06:56.803172 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 17 00:06:56.806985 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 17 00:06:56.809194 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. May 17 00:06:56.817999 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 17 00:06:56.824035 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 17 00:06:56.828432 systemd[1]: Starting systemd-logind.service - User Login Management... May 17 00:06:56.830661 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 17 00:06:56.831633 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. 
See cgroup-compat debug messages for details. May 17 00:06:56.835996 systemd[1]: Starting update-engine.service - Update Engine... May 17 00:06:56.841960 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 17 00:06:56.853260 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. May 17 00:06:56.868998 extend-filesystems[1453]: Found loop4 May 17 00:06:56.870471 extend-filesystems[1453]: Found loop5 May 17 00:06:56.870977 extend-filesystems[1453]: Found loop6 May 17 00:06:56.871373 extend-filesystems[1453]: Found loop7 May 17 00:06:56.872253 extend-filesystems[1453]: Found sda May 17 00:06:56.872253 extend-filesystems[1453]: Found sda1 May 17 00:06:56.872253 extend-filesystems[1453]: Found sda2 May 17 00:06:56.872253 extend-filesystems[1453]: Found sda3 May 17 00:06:56.872253 extend-filesystems[1453]: Found usr May 17 00:06:56.872253 extend-filesystems[1453]: Found sda4 May 17 00:06:56.872253 extend-filesystems[1453]: Found sda6 May 17 00:06:56.872253 extend-filesystems[1453]: Found sda7 May 17 00:06:56.872253 extend-filesystems[1453]: Found sda9 May 17 00:06:56.872253 extend-filesystems[1453]: Checking size of /dev/sda9 May 17 00:06:56.883087 jq[1452]: false May 17 00:06:56.874974 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 17 00:06:56.876858 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 17 00:06:56.887399 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 17 00:06:56.887184 dbus-daemon[1451]: [system] SELinux support is enabled May 17 00:06:56.892907 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 17 00:06:56.892957 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 17 00:06:56.894901 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 17 00:06:56.894937 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 17 00:06:56.896033 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 17 00:06:56.898242 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 17 00:06:56.914235 jq[1463]: true May 17 00:06:56.926637 (ntainerd)[1482]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 17 00:06:56.927465 extend-filesystems[1453]: Resized partition /dev/sda9 May 17 00:06:56.931101 extend-filesystems[1486]: resize2fs 1.47.1 (20-May-2024) May 17 00:06:56.945786 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks May 17 00:06:56.953519 tar[1471]: linux-arm64/helm May 17 00:06:56.951247 systemd[1]: motdgen.service: Deactivated successfully. May 17 00:06:56.952887 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
May 17 00:06:56.972990 coreos-metadata[1450]: May 17 00:06:56.972 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 May 17 00:06:56.978609 coreos-metadata[1450]: May 17 00:06:56.978 INFO Fetch successful May 17 00:06:56.978609 coreos-metadata[1450]: May 17 00:06:56.978 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 May 17 00:06:56.980599 jq[1488]: true May 17 00:06:56.986921 coreos-metadata[1450]: May 17 00:06:56.985 INFO Fetch successful May 17 00:06:57.003455 update_engine[1462]: I20250517 00:06:57.001032 1462 main.cc:92] Flatcar Update Engine starting May 17 00:06:57.009566 systemd-logind[1461]: New seat seat0. May 17 00:06:57.014584 systemd-logind[1461]: Watching system buttons on /dev/input/event0 (Power Button) May 17 00:06:57.019461 update_engine[1462]: I20250517 00:06:57.015414 1462 update_check_scheduler.cc:74] Next update check in 11m47s May 17 00:06:57.014615 systemd-logind[1461]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard) May 17 00:06:57.015060 systemd[1]: Started update-engine.service - Update Engine. May 17 00:06:57.031038 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 17 00:06:57.032226 systemd[1]: Started systemd-logind.service - User Login Management. May 17 00:06:57.118272 kernel: EXT4-fs (sda9): resized filesystem to 9393147 May 17 00:06:57.118337 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 43 scanned by (udev-worker) (1386) May 17 00:06:57.140026 extend-filesystems[1486]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required May 17 00:06:57.140026 extend-filesystems[1486]: old_desc_blocks = 1, new_desc_blocks = 5 May 17 00:06:57.140026 extend-filesystems[1486]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. May 17 00:06:57.143075 extend-filesystems[1453]: Resized filesystem in /dev/sda9 May 17 00:06:57.143075 extend-filesystems[1453]: Found sr0 May 17 00:06:57.146370 bash[1519]: Updated "/home/core/.ssh/authorized_keys" May 17 00:06:57.143428 systemd[1]: extend-filesystems.service: Deactivated successfully. May 17 00:06:57.143685 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 17 00:06:57.151816 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 17 00:06:57.182112 systemd[1]: Starting sshkeys.service... May 17 00:06:57.197366 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. May 17 00:06:57.198356 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 17 00:06:57.221876 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. May 17 00:06:57.230264 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... May 17 00:06:57.248455 locksmithd[1500]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 17 00:06:57.274466 coreos-metadata[1535]: May 17 00:06:57.274 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 May 17 00:06:57.275951 coreos-metadata[1535]: May 17 00:06:57.275 INFO Fetch successful May 17 00:06:57.278752 unknown[1535]: wrote ssh authorized keys file for user: core May 17 00:06:57.312630 update-ssh-keys[1540]: Updated "/home/core/.ssh/authorized_keys" May 17 00:06:57.313063 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). 
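extend-filesystems.service above grew /dev/sda9 from 1617920 to 9393147 4k blocks while it was mounted on /. The partition-growing step uses tooling the log does not name, but the filesystem step is plain resize2fs, which grows a mounted ext4 volume online; a minimal manual equivalent:

    # Grow the ext4 filesystem to fill its (already enlarged) partition;
    # ext4 supports growing online while mounted.
    sudo resize2fs /dev/sda9
    # Confirm the new size of the root filesystem.
    df -h /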
May 17 00:06:57.318032 systemd[1]: Finished sshkeys.service. May 17 00:06:57.323523 containerd[1482]: time="2025-05-17T00:06:57.323429280Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 May 17 00:06:57.397720 containerd[1482]: time="2025-05-17T00:06:57.397393200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 17 00:06:57.400712 containerd[1482]: time="2025-05-17T00:06:57.400666080Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.90-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 17 00:06:57.401825 containerd[1482]: time="2025-05-17T00:06:57.401799960Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 17 00:06:57.402801 containerd[1482]: time="2025-05-17T00:06:57.401874480Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 17 00:06:57.402801 containerd[1482]: time="2025-05-17T00:06:57.402050760Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 May 17 00:06:57.402801 containerd[1482]: time="2025-05-17T00:06:57.402074960Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 May 17 00:06:57.402801 containerd[1482]: time="2025-05-17T00:06:57.402138560Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 May 17 00:06:57.402801 containerd[1482]: time="2025-05-17T00:06:57.402152240Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 17 00:06:57.402801 containerd[1482]: time="2025-05-17T00:06:57.402358920Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 17 00:06:57.402801 containerd[1482]: time="2025-05-17T00:06:57.402377080Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 17 00:06:57.402801 containerd[1482]: time="2025-05-17T00:06:57.402401440Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 May 17 00:06:57.402801 containerd[1482]: time="2025-05-17T00:06:57.402414600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 17 00:06:57.402801 containerd[1482]: time="2025-05-17T00:06:57.402494560Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 17 00:06:57.402801 containerd[1482]: time="2025-05-17T00:06:57.402744760Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 17 00:06:57.405997 containerd[1482]: time="2025-05-17T00:06:57.405953640Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 17 00:06:57.406080 containerd[1482]: time="2025-05-17T00:06:57.406066560Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 17 00:06:57.406243 containerd[1482]: time="2025-05-17T00:06:57.406226080Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 17 00:06:57.406905 containerd[1482]: time="2025-05-17T00:06:57.406882320Z" level=info msg="metadata content store policy set" policy=shared May 17 00:06:57.410990 containerd[1482]: time="2025-05-17T00:06:57.410945120Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 17 00:06:57.411852 containerd[1482]: time="2025-05-17T00:06:57.411831600Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 17 00:06:57.411988 containerd[1482]: time="2025-05-17T00:06:57.411974080Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 May 17 00:06:57.412064 containerd[1482]: time="2025-05-17T00:06:57.412051880Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 May 17 00:06:57.412139 containerd[1482]: time="2025-05-17T00:06:57.412119800Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 17 00:06:57.412383 containerd[1482]: time="2025-05-17T00:06:57.412364840Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 17 00:06:57.413058 containerd[1482]: time="2025-05-17T00:06:57.413028400Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 17 00:06:57.413219 containerd[1482]: time="2025-05-17T00:06:57.413197640Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 May 17 00:06:57.413248 containerd[1482]: time="2025-05-17T00:06:57.413224080Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 May 17 00:06:57.413248 containerd[1482]: time="2025-05-17T00:06:57.413239240Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 May 17 00:06:57.413283 containerd[1482]: time="2025-05-17T00:06:57.413268080Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 17 00:06:57.413302 containerd[1482]: time="2025-05-17T00:06:57.413281960Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 17 00:06:57.413320 containerd[1482]: time="2025-05-17T00:06:57.413299840Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 17 00:06:57.413320 containerd[1482]: time="2025-05-17T00:06:57.413314640Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 17 00:06:57.413413 containerd[1482]: time="2025-05-17T00:06:57.413334440Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 May 17 00:06:57.413413 containerd[1482]: time="2025-05-17T00:06:57.413348920Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 17 00:06:57.413413 containerd[1482]: time="2025-05-17T00:06:57.413364840Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 17 00:06:57.413413 containerd[1482]: time="2025-05-17T00:06:57.413377680Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 17 00:06:57.413413 containerd[1482]: time="2025-05-17T00:06:57.413398160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 17 00:06:57.413413 containerd[1482]: time="2025-05-17T00:06:57.413412280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 17 00:06:57.413516 containerd[1482]: time="2025-05-17T00:06:57.413425560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 17 00:06:57.413516 containerd[1482]: time="2025-05-17T00:06:57.413451200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 17 00:06:57.413516 containerd[1482]: time="2025-05-17T00:06:57.413464640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 17 00:06:57.413516 containerd[1482]: time="2025-05-17T00:06:57.413477560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 17 00:06:57.413516 containerd[1482]: time="2025-05-17T00:06:57.413489160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 17 00:06:57.413516 containerd[1482]: time="2025-05-17T00:06:57.413502520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 17 00:06:57.413516 containerd[1482]: time="2025-05-17T00:06:57.413515880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 May 17 00:06:57.413654 containerd[1482]: time="2025-05-17T00:06:57.413542760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 May 17 00:06:57.413654 containerd[1482]: time="2025-05-17T00:06:57.413557000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 17 00:06:57.413654 containerd[1482]: time="2025-05-17T00:06:57.413569160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 May 17 00:06:57.413654 containerd[1482]: time="2025-05-17T00:06:57.413580760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 17 00:06:57.413654 containerd[1482]: time="2025-05-17T00:06:57.413596000Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 May 17 00:06:57.413654 containerd[1482]: time="2025-05-17T00:06:57.413617040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 May 17 00:06:57.413654 containerd[1482]: time="2025-05-17T00:06:57.413629040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 May 17 00:06:57.413654 containerd[1482]: time="2025-05-17T00:06:57.413644520Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 17 00:06:57.414796 containerd[1482]: time="2025-05-17T00:06:57.413980120Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 17 00:06:57.414796 containerd[1482]: time="2025-05-17T00:06:57.414006880Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 May 17 00:06:57.414796 containerd[1482]: time="2025-05-17T00:06:57.414018480Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 17 00:06:57.414796 containerd[1482]: time="2025-05-17T00:06:57.414032680Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 May 17 00:06:57.414796 containerd[1482]: time="2025-05-17T00:06:57.414043200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 17 00:06:57.414796 containerd[1482]: time="2025-05-17T00:06:57.414055160Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 May 17 00:06:57.414796 containerd[1482]: time="2025-05-17T00:06:57.414065680Z" level=info msg="NRI interface is disabled by configuration." May 17 00:06:57.414796 containerd[1482]: time="2025-05-17T00:06:57.414085680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 May 17 00:06:57.414963 containerd[1482]: time="2025-05-17T00:06:57.414574440Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false 
X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 17 00:06:57.414963 containerd[1482]: time="2025-05-17T00:06:57.414638400Z" level=info msg="Connect containerd service" May 17 00:06:57.414963 containerd[1482]: time="2025-05-17T00:06:57.414668960Z" level=info msg="using legacy CRI server" May 17 00:06:57.414963 containerd[1482]: time="2025-05-17T00:06:57.414675920Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 17 00:06:57.418396 containerd[1482]: time="2025-05-17T00:06:57.418346040Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 17 00:06:57.423777 containerd[1482]: time="2025-05-17T00:06:57.423718480Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 17 00:06:57.424290 containerd[1482]: time="2025-05-17T00:06:57.424266240Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 17 00:06:57.424332 containerd[1482]: time="2025-05-17T00:06:57.424312960Z" level=info msg=serving... address=/run/containerd/containerd.sock May 17 00:06:57.424442 containerd[1482]: time="2025-05-17T00:06:57.424416360Z" level=info msg="Start subscribing containerd event" May 17 00:06:57.424470 containerd[1482]: time="2025-05-17T00:06:57.424464320Z" level=info msg="Start recovering state" May 17 00:06:57.424634 containerd[1482]: time="2025-05-17T00:06:57.424610200Z" level=info msg="Start event monitor" May 17 00:06:57.424661 containerd[1482]: time="2025-05-17T00:06:57.424635440Z" level=info msg="Start snapshots syncer" May 17 00:06:57.424661 containerd[1482]: time="2025-05-17T00:06:57.424651760Z" level=info msg="Start cni network conf syncer for default" May 17 00:06:57.424750 containerd[1482]: time="2025-05-17T00:06:57.424660400Z" level=info msg="Start streaming server" May 17 00:06:57.427047 systemd[1]: Started containerd.service - containerd container runtime. May 17 00:06:57.428301 containerd[1482]: time="2025-05-17T00:06:57.427959560Z" level=info msg="containerd successfully booted in 0.111157s" May 17 00:06:57.618719 tar[1471]: linux-arm64/LICENSE May 17 00:06:57.618719 tar[1471]: linux-arm64/README.md May 17 00:06:57.631981 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 17 00:06:57.856207 sshd_keygen[1493]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 17 00:06:57.881923 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 17 00:06:57.894350 systemd[1]: Starting issuegen.service - Generate /run/issue... May 17 00:06:57.903673 systemd[1]: issuegen.service: Deactivated successfully. 
May 17 00:06:57.904225 systemd[1]: Finished issuegen.service - Generate /run/issue. May 17 00:06:57.914170 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 17 00:06:57.934997 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 17 00:06:57.944186 systemd[1]: Started getty@tty1.service - Getty on tty1. May 17 00:06:57.950257 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. May 17 00:06:57.953132 systemd[1]: Reached target getty.target - Login Prompts. May 17 00:06:58.150066 systemd-networkd[1384]: eth0: Gained IPv6LL May 17 00:06:58.152157 systemd-timesyncd[1352]: Network configuration changed, trying to establish connection. May 17 00:06:58.154695 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 17 00:06:58.156144 systemd[1]: Reached target network-online.target - Network is Online. May 17 00:06:58.168191 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:06:58.171703 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 17 00:06:58.210524 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 17 00:06:58.342703 systemd-networkd[1384]: eth1: Gained IPv6LL May 17 00:06:58.343872 systemd-timesyncd[1352]: Network configuration changed, trying to establish connection. May 17 00:06:58.998991 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:06:58.999364 (kubelet)[1582]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 00:06:59.000606 systemd[1]: Reached target multi-user.target - Multi-User System. May 17 00:06:59.007749 systemd[1]: Startup finished in 755ms (kernel) + 12.376s (initrd) + 4.972s (userspace) = 18.104s. May 17 00:06:59.541502 kubelet[1582]: E0517 00:06:59.541451 1582 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:06:59.544114 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:06:59.544443 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:07:09.794997 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 17 00:07:09.805162 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:07:09.918507 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:07:09.935836 (kubelet)[1602]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 00:07:09.989892 kubelet[1602]: E0517 00:07:09.989828 1602 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:07:09.995204 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:07:09.995631 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
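The failure above, repeated on every scheduled restart below, is kubelet exiting because /var/lib/kubelet/config.yaml does not exist yet. That path (and the unit's unset KUBELET_KUBEADM_ARGS variable) follows the kubeadm convention: the file is only written when the node is initialized or joined into a cluster, so the crash loop is expected until that happens. A sketch, with placeholder values rather than anything from this log, of the join that would create it:

    # Placeholders, not values from this log; kubeadm writes
    # /var/lib/kubelet/config.yaml, after which kubelet starts cleanly
    # on its next scheduled restart.
    sudo kubeadm join <control-plane-host>:6443 \
        --token <token> \
        --discovery-token-ca-cert-hash sha256:<hash>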
May 17 00:07:20.112079 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 17 00:07:20.117264 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:07:20.254071 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:07:20.255994 (kubelet)[1617]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 00:07:20.315239 kubelet[1617]: E0517 00:07:20.315149 1617 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:07:20.318690 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:07:20.318955 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:07:28.646262 systemd-timesyncd[1352]: Contacted time server 188.245.32.133:123 (2.flatcar.pool.ntp.org). May 17 00:07:28.646361 systemd-timesyncd[1352]: Initial clock synchronization to Sat 2025-05-17 00:07:28.268867 UTC. May 17 00:07:30.361955 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. May 17 00:07:30.370074 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:07:30.486311 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:07:30.495159 (kubelet)[1631]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 00:07:30.538452 kubelet[1631]: E0517 00:07:30.538319 1631 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:07:30.540329 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:07:30.540572 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:07:40.613824 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. May 17 00:07:40.623166 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:07:40.743322 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:07:40.752146 (kubelet)[1647]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 00:07:40.795158 kubelet[1647]: E0517 00:07:40.795072 1647 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:07:40.799002 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:07:40.799259 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:07:41.946574 update_engine[1462]: I20250517 00:07:41.946364 1462 update_attempter.cc:509] Updating boot flags... 
May 17 00:07:42.000781 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 43 scanned by (udev-worker) (1663) May 17 00:07:42.035911 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 43 scanned by (udev-worker) (1665) May 17 00:07:42.079810 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 43 scanned by (udev-worker) (1665) May 17 00:07:50.861804 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. May 17 00:07:50.870247 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:07:51.000985 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:07:51.016448 (kubelet)[1683]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 00:07:51.067879 kubelet[1683]: E0517 00:07:51.067808 1683 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:07:51.071568 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:07:51.071803 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:08:01.112230 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. May 17 00:08:01.120136 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:08:01.277163 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:08:01.278336 (kubelet)[1698]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 00:08:01.325944 kubelet[1698]: E0517 00:08:01.325863 1698 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:08:01.331213 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:08:01.331883 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:08:11.361836 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. May 17 00:08:11.369159 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:08:11.490473 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:08:11.504444 (kubelet)[1713]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 00:08:11.552903 kubelet[1713]: E0517 00:08:11.552758 1713 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:08:11.555708 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:08:11.555893 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
May 17 00:08:21.611986 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. May 17 00:08:21.619134 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:08:21.769120 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:08:21.773683 (kubelet)[1728]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 00:08:21.814302 kubelet[1728]: E0517 00:08:21.814209 1728 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:08:21.817337 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:08:21.817545 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:08:31.214176 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 17 00:08:31.226338 systemd[1]: Started sshd@0-188.245.126.139:22-139.178.68.195:37284.service - OpenSSH per-connection server daemon (139.178.68.195:37284). May 17 00:08:31.861654 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. May 17 00:08:31.871185 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:08:32.043152 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:08:32.045304 (kubelet)[1746]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 00:08:32.083570 kubelet[1746]: E0517 00:08:32.083507 1746 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:08:32.088338 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:08:32.088627 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:08:32.219740 sshd[1736]: Accepted publickey for core from 139.178.68.195 port 37284 ssh2: RSA SHA256:3DH0lUPdHwyQJn3I0ENA7R+xFRfXGfTtJFJ5l1PYReI May 17 00:08:32.221853 sshd[1736]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:08:32.232467 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 17 00:08:32.241383 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 17 00:08:32.245842 systemd-logind[1461]: New session 1 of user core. May 17 00:08:32.257864 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 17 00:08:32.268566 systemd[1]: Starting user@500.service - User Manager for UID 500... May 17 00:08:32.275081 (systemd)[1755]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 17 00:08:32.391939 systemd[1755]: Queued start job for default target default.target. May 17 00:08:32.403894 systemd[1755]: Created slice app.slice - User Application Slice. May 17 00:08:32.403956 systemd[1755]: Reached target paths.target - Paths. May 17 00:08:32.403982 systemd[1755]: Reached target timers.target - Timers. 
May 17 00:08:32.406148 systemd[1755]: Starting dbus.socket - D-Bus User Message Bus Socket... May 17 00:08:32.423069 systemd[1755]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 17 00:08:32.423254 systemd[1755]: Reached target sockets.target - Sockets. May 17 00:08:32.423274 systemd[1755]: Reached target basic.target - Basic System. May 17 00:08:32.423335 systemd[1755]: Reached target default.target - Main User Target. May 17 00:08:32.423374 systemd[1755]: Startup finished in 139ms. May 17 00:08:32.423443 systemd[1]: Started user@500.service - User Manager for UID 500. May 17 00:08:32.432067 systemd[1]: Started session-1.scope - Session 1 of User core. May 17 00:08:33.146364 systemd[1]: Started sshd@1-188.245.126.139:22-139.178.68.195:37290.service - OpenSSH per-connection server daemon (139.178.68.195:37290). May 17 00:08:34.142639 sshd[1766]: Accepted publickey for core from 139.178.68.195 port 37290 ssh2: RSA SHA256:3DH0lUPdHwyQJn3I0ENA7R+xFRfXGfTtJFJ5l1PYReI May 17 00:08:34.144526 sshd[1766]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:08:34.150200 systemd-logind[1461]: New session 2 of user core. May 17 00:08:34.158163 systemd[1]: Started session-2.scope - Session 2 of User core. May 17 00:08:34.835413 sshd[1766]: pam_unix(sshd:session): session closed for user core May 17 00:08:34.841162 systemd-logind[1461]: Session 2 logged out. Waiting for processes to exit. May 17 00:08:34.842066 systemd[1]: sshd@1-188.245.126.139:22-139.178.68.195:37290.service: Deactivated successfully. May 17 00:08:34.844558 systemd[1]: session-2.scope: Deactivated successfully. May 17 00:08:34.849051 systemd-logind[1461]: Removed session 2. May 17 00:08:35.016529 systemd[1]: Started sshd@2-188.245.126.139:22-139.178.68.195:56682.service - OpenSSH per-connection server daemon (139.178.68.195:56682). May 17 00:08:35.140087 systemd[1]: Started sshd@3-188.245.126.139:22-104.234.115.41:21447.service - OpenSSH per-connection server daemon (104.234.115.41:21447). May 17 00:08:35.258716 sshd[1776]: Connection closed by 104.234.115.41 port 21447 May 17 00:08:35.259982 systemd[1]: sshd@3-188.245.126.139:22-104.234.115.41:21447.service: Deactivated successfully. May 17 00:08:36.011561 sshd[1773]: Accepted publickey for core from 139.178.68.195 port 56682 ssh2: RSA SHA256:3DH0lUPdHwyQJn3I0ENA7R+xFRfXGfTtJFJ5l1PYReI May 17 00:08:36.013459 sshd[1773]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:08:36.020594 systemd-logind[1461]: New session 3 of user core. May 17 00:08:36.026285 systemd[1]: Started session-3.scope - Session 3 of User core. May 17 00:08:36.708026 sshd[1773]: pam_unix(sshd:session): session closed for user core May 17 00:08:36.713057 systemd[1]: sshd@2-188.245.126.139:22-139.178.68.195:56682.service: Deactivated successfully. May 17 00:08:36.716526 systemd[1]: session-3.scope: Deactivated successfully. May 17 00:08:36.717453 systemd-logind[1461]: Session 3 logged out. Waiting for processes to exit. May 17 00:08:36.719963 systemd-logind[1461]: Removed session 3. May 17 00:08:36.891251 systemd[1]: Started sshd@4-188.245.126.139:22-139.178.68.195:56694.service - OpenSSH per-connection server daemon (139.178.68.195:56694). 
May 17 00:08:37.879731 sshd[1784]: Accepted publickey for core from 139.178.68.195 port 56694 ssh2: RSA SHA256:3DH0lUPdHwyQJn3I0ENA7R+xFRfXGfTtJFJ5l1PYReI May 17 00:08:37.881817 sshd[1784]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:08:37.886794 systemd-logind[1461]: New session 4 of user core. May 17 00:08:37.894782 systemd[1]: Started session-4.scope - Session 4 of User core. May 17 00:08:38.572820 sshd[1784]: pam_unix(sshd:session): session closed for user core May 17 00:08:38.576542 systemd[1]: sshd@4-188.245.126.139:22-139.178.68.195:56694.service: Deactivated successfully. May 17 00:08:38.578848 systemd[1]: session-4.scope: Deactivated successfully. May 17 00:08:38.581578 systemd-logind[1461]: Session 4 logged out. Waiting for processes to exit. May 17 00:08:38.583078 systemd-logind[1461]: Removed session 4. May 17 00:08:38.747182 systemd[1]: Started sshd@5-188.245.126.139:22-139.178.68.195:56706.service - OpenSSH per-connection server daemon (139.178.68.195:56706). May 17 00:08:39.721246 sshd[1791]: Accepted publickey for core from 139.178.68.195 port 56706 ssh2: RSA SHA256:3DH0lUPdHwyQJn3I0ENA7R+xFRfXGfTtJFJ5l1PYReI May 17 00:08:39.721945 sshd[1791]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:08:39.727784 systemd-logind[1461]: New session 5 of user core. May 17 00:08:39.737100 systemd[1]: Started session-5.scope - Session 5 of User core. May 17 00:08:40.252726 sudo[1794]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 17 00:08:40.253209 sudo[1794]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 17 00:08:40.269199 sudo[1794]: pam_unix(sudo:session): session closed for user root May 17 00:08:40.429309 sshd[1791]: pam_unix(sshd:session): session closed for user core May 17 00:08:40.435258 systemd[1]: sshd@5-188.245.126.139:22-139.178.68.195:56706.service: Deactivated successfully. May 17 00:08:40.437366 systemd[1]: session-5.scope: Deactivated successfully. May 17 00:08:40.438140 systemd-logind[1461]: Session 5 logged out. Waiting for processes to exit. May 17 00:08:40.439645 systemd-logind[1461]: Removed session 5. May 17 00:08:40.613345 systemd[1]: Started sshd@6-188.245.126.139:22-139.178.68.195:56720.service - OpenSSH per-connection server daemon (139.178.68.195:56720). May 17 00:08:41.602509 sshd[1799]: Accepted publickey for core from 139.178.68.195 port 56720 ssh2: RSA SHA256:3DH0lUPdHwyQJn3I0ENA7R+xFRfXGfTtJFJ5l1PYReI May 17 00:08:41.605005 sshd[1799]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:08:41.614430 systemd-logind[1461]: New session 6 of user core. May 17 00:08:41.620116 systemd[1]: Started session-6.scope - Session 6 of User core. May 17 00:08:42.111736 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10. May 17 00:08:42.119171 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
May 17 00:08:42.132405 sudo[1806]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 17 00:08:42.133174 sudo[1806]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 17 00:08:42.139528 sudo[1806]: pam_unix(sudo:session): session closed for user root May 17 00:08:42.145465 sudo[1805]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules May 17 00:08:42.146123 sudo[1805]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 17 00:08:42.163252 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... May 17 00:08:42.167309 auditctl[1809]: No rules May 17 00:08:42.169094 systemd[1]: audit-rules.service: Deactivated successfully. May 17 00:08:42.169396 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. May 17 00:08:42.177841 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... May 17 00:08:42.226791 augenrules[1827]: No rules May 17 00:08:42.239029 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. May 17 00:08:42.242409 sudo[1805]: pam_unix(sudo:session): session closed for user root May 17 00:08:42.251821 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:08:42.264103 (kubelet)[1837]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 00:08:42.303685 kubelet[1837]: E0517 00:08:42.303612 1837 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:08:42.306982 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:08:42.307316 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:08:42.406728 sshd[1799]: pam_unix(sshd:session): session closed for user core May 17 00:08:42.412627 systemd-logind[1461]: Session 6 logged out. Waiting for processes to exit. May 17 00:08:42.413627 systemd[1]: sshd@6-188.245.126.139:22-139.178.68.195:56720.service: Deactivated successfully. May 17 00:08:42.415598 systemd[1]: session-6.scope: Deactivated successfully. May 17 00:08:42.416685 systemd-logind[1461]: Removed session 6. May 17 00:08:42.595260 systemd[1]: Started sshd@7-188.245.126.139:22-139.178.68.195:56728.service - OpenSSH per-connection server daemon (139.178.68.195:56728). May 17 00:08:43.588680 sshd[1847]: Accepted publickey for core from 139.178.68.195 port 56728 ssh2: RSA SHA256:3DH0lUPdHwyQJn3I0ENA7R+xFRfXGfTtJFJ5l1PYReI May 17 00:08:43.591595 sshd[1847]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:08:43.598159 systemd-logind[1461]: New session 7 of user core. May 17 00:08:43.605132 systemd[1]: Started session-7.scope - Session 7 of User core. May 17 00:08:44.119924 sudo[1850]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 17 00:08:44.120207 sudo[1850]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 17 00:08:44.420322 systemd[1]: Starting docker.service - Docker Application Container Engine... 
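The kubelet exit above is the expected failure mode before the node is bootstrapped: run.go:72 reports that /var/lib/kubelet/config.yaml does not exist yet, so systemd keeps restarting the unit (the restart counter is at 10 here). A sketch for inspecting that state on the node, using only the path and unit name the log itself mentions:

    # The config file the error names; absent until node bootstrap writes it.
    ls -l /var/lib/kubelet/config.yaml 2>/dev/null || echo "config.yaml not provisioned yet"
    # Restart counter and last exit status for the unit:
    systemctl show kubelet -p NRestarts -p ExecMainStatus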
May 17 00:08:44.420542 (dockerd)[1865]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 17 00:08:44.674070 dockerd[1865]: time="2025-05-17T00:08:44.673936947Z" level=info msg="Starting up" May 17 00:08:44.769024 dockerd[1865]: time="2025-05-17T00:08:44.768926714Z" level=info msg="Loading containers: start." May 17 00:08:44.874810 kernel: Initializing XFRM netlink socket May 17 00:08:44.960211 systemd-networkd[1384]: docker0: Link UP May 17 00:08:44.977669 dockerd[1865]: time="2025-05-17T00:08:44.977530942Z" level=info msg="Loading containers: done." May 17 00:08:44.996700 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3167354240-merged.mount: Deactivated successfully. May 17 00:08:44.999707 dockerd[1865]: time="2025-05-17T00:08:44.999634804Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 17 00:08:44.999859 dockerd[1865]: time="2025-05-17T00:08:44.999816253Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 May 17 00:08:44.999977 dockerd[1865]: time="2025-05-17T00:08:44.999939459Z" level=info msg="Daemon has completed initialization" May 17 00:08:45.038317 dockerd[1865]: time="2025-05-17T00:08:45.038164259Z" level=info msg="API listen on /run/docker.sock" May 17 00:08:45.038673 systemd[1]: Started docker.service - Docker Application Container Engine. May 17 00:08:46.056373 containerd[1482]: time="2025-05-17T00:08:46.056327483Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.9\"" May 17 00:08:46.727404 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3849191349.mount: Deactivated successfully. 
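dockerd comes up on the overlay2 storage driver and warns that native diff is disabled because the kernel enables CONFIG_OVERLAY_FS_REDIRECT_DIR. A hedged way to verify both facts on a running host (the /proc/config.gz path assumes the kernel exposes its build config there, which not every kernel does):

    # Storage driver in use, per the daemon itself:
    docker info --format '{{.Driver}}'
    # Kernel option behind the "Not using native diff" warning, if exposed:
    zcat /proc/config.gz 2>/dev/null | grep CONFIG_OVERLAY_FS_REDIRECT_DIR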
May 17 00:08:47.505689 containerd[1482]: time="2025-05-17T00:08:47.505631462Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:08:47.507250 containerd[1482]: time="2025-05-17T00:08:47.507207374Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.9: active requests=0, bytes read=25652066" May 17 00:08:47.508015 containerd[1482]: time="2025-05-17T00:08:47.507594271Z" level=info msg="ImageCreate event name:\"sha256:90d52158b7646075e7e560c1bd670904ba3f4f4c8c199106bf96ee0944663d61\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:08:47.512251 containerd[1482]: time="2025-05-17T00:08:47.512195719Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5b68f0df22013422dc8fb9ddfcff513eb6fc92f9dbf8aae41555c895efef5a20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:08:47.512990 containerd[1482]: time="2025-05-17T00:08:47.512941513Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.9\" with image id \"sha256:90d52158b7646075e7e560c1bd670904ba3f4f4c8c199106bf96ee0944663d61\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:5b68f0df22013422dc8fb9ddfcff513eb6fc92f9dbf8aae41555c895efef5a20\", size \"25648774\" in 1.456559348s" May 17 00:08:47.512990 containerd[1482]: time="2025-05-17T00:08:47.512983675Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.9\" returns image reference \"sha256:90d52158b7646075e7e560c1bd670904ba3f4f4c8c199106bf96ee0944663d61\"" May 17 00:08:47.515503 containerd[1482]: time="2025-05-17T00:08:47.515397584Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.9\"" May 17 00:08:48.756889 containerd[1482]: time="2025-05-17T00:08:48.756837985Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:08:48.758239 containerd[1482]: time="2025-05-17T00:08:48.758132163Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.9: active requests=0, bytes read=22459548" May 17 00:08:48.760788 containerd[1482]: time="2025-05-17T00:08:48.759141687Z" level=info msg="ImageCreate event name:\"sha256:2d03fe540daca1d9520c403342787715eab3b05fb6773ea41153572716c82dba\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:08:48.763067 containerd[1482]: time="2025-05-17T00:08:48.763020980Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:be9e7987d323b38a12e28436cff6d6ec6fc31ffdd3ea11eaa9d74852e9d31248\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:08:48.764330 containerd[1482]: time="2025-05-17T00:08:48.764294836Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.9\" with image id \"sha256:2d03fe540daca1d9520c403342787715eab3b05fb6773ea41153572716c82dba\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:be9e7987d323b38a12e28436cff6d6ec6fc31ffdd3ea11eaa9d74852e9d31248\", size \"23995294\" in 1.248649481s" May 17 00:08:48.764429 containerd[1482]: time="2025-05-17T00:08:48.764412362Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.9\" returns image reference \"sha256:2d03fe540daca1d9520c403342787715eab3b05fb6773ea41153572716c82dba\"" May 17 00:08:48.764963 
containerd[1482]: time="2025-05-17T00:08:48.764914424Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.9\"" May 17 00:08:49.406334 systemd[1]: Started sshd@8-188.245.126.139:22-104.234.115.41:41744.service - OpenSSH per-connection server daemon (104.234.115.41:41744). May 17 00:08:50.226757 containerd[1482]: time="2025-05-17T00:08:50.226690105Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:08:50.228785 containerd[1482]: time="2025-05-17T00:08:50.228729232Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.9: active requests=0, bytes read=17125299" May 17 00:08:50.229213 containerd[1482]: time="2025-05-17T00:08:50.229157091Z" level=info msg="ImageCreate event name:\"sha256:b333fec06af219faaf48f1784baa0b7274945b2e5be5bd2fca2681f7d1baff5f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:08:50.232696 containerd[1482]: time="2025-05-17T00:08:50.232649121Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:eb358c7346bb17ab2c639c3ff8ab76a147dec7ae609f5c0c2800233e42253ed1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:08:50.234615 containerd[1482]: time="2025-05-17T00:08:50.234095863Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.9\" with image id \"sha256:b333fec06af219faaf48f1784baa0b7274945b2e5be5bd2fca2681f7d1baff5f\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:eb358c7346bb17ab2c639c3ff8ab76a147dec7ae609f5c0c2800233e42253ed1\", size \"18661063\" in 1.469050793s" May 17 00:08:50.234615 containerd[1482]: time="2025-05-17T00:08:50.234137265Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.9\" returns image reference \"sha256:b333fec06af219faaf48f1784baa0b7274945b2e5be5bd2fca2681f7d1baff5f\"" May 17 00:08:50.235017 containerd[1482]: time="2025-05-17T00:08:50.234981021Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.9\"" May 17 00:08:51.224724 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3334303545.mount: Deactivated successfully. 
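The PullImage/ImageCreate pairs above are containerd CRI operations driven by the node's bootstrap tooling. Equivalent pulls can be reproduced by hand with crictl, assuming it is installed and pointed at the containerd socket (neither is stated in the log):

    # Manual counterparts of the pulls recorded above.
    crictl pull registry.k8s.io/kube-scheduler:v1.31.9
    crictl pull registry.k8s.io/kube-proxy:v1.31.9
    # List what landed in the CRI image store:
    crictl images | grep registry.k8s.io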
May 17 00:08:51.922952 containerd[1482]: time="2025-05-17T00:08:51.921986269Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:08:51.922952 containerd[1482]: time="2025-05-17T00:08:51.922902588Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.9: active requests=0, bytes read=26871401" May 17 00:08:51.924154 containerd[1482]: time="2025-05-17T00:08:51.924088718Z" level=info msg="ImageCreate event name:\"sha256:cbfba5e6542fe387b24d9e73bf5a054a6b07b95af1392268fd82b6f449ef1c27\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:08:51.929500 containerd[1482]: time="2025-05-17T00:08:51.929450385Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:fdf026cf2434537e499e9c739d189ca8fc57101d929ac5ccd8e24f979a9738c1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:08:51.930874 containerd[1482]: time="2025-05-17T00:08:51.930834123Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.9\" with image id \"sha256:cbfba5e6542fe387b24d9e73bf5a054a6b07b95af1392268fd82b6f449ef1c27\", repo tag \"registry.k8s.io/kube-proxy:v1.31.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:fdf026cf2434537e499e9c739d189ca8fc57101d929ac5ccd8e24f979a9738c1\", size \"26870394\" in 1.695744898s" May 17 00:08:51.930874 containerd[1482]: time="2025-05-17T00:08:51.930874845Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.9\" returns image reference \"sha256:cbfba5e6542fe387b24d9e73bf5a054a6b07b95af1392268fd82b6f449ef1c27\"" May 17 00:08:51.931583 containerd[1482]: time="2025-05-17T00:08:51.931556714Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" May 17 00:08:52.361879 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11. May 17 00:08:52.372428 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:08:52.517970 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:08:52.536536 (kubelet)[2084]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 00:08:52.545673 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2121445676.mount: Deactivated successfully. May 17 00:08:52.599781 kubelet[2084]: E0517 00:08:52.599718 2084 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:08:52.602351 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:08:52.602544 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
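The kubelet fails again on the same missing file while the image pulls continue in parallel. For orientation only, a minimal KubeletConfiguration of the kind that normally ends up at that path; the exact contents are an assumption, since the real file is rendered by the provisioning flow and never shown in the log:

    # Illustrative minimal /var/lib/kubelet/config.yaml (field values are assumptions).
    cat <<'EOF' >/var/lib/kubelet/config.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd                      # matches CgroupDriver:"systemd" seen later in this log
    staticPodPath: /etc/kubernetes/manifests   # matches the "Adding static pod path" line later
    EOF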
May 17 00:08:53.285499 containerd[1482]: time="2025-05-17T00:08:53.285339775Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:08:53.287461 containerd[1482]: time="2025-05-17T00:08:53.286893558Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951714" May 17 00:08:53.289542 containerd[1482]: time="2025-05-17T00:08:53.288577667Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:08:53.293134 containerd[1482]: time="2025-05-17T00:08:53.293065892Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:08:53.295065 containerd[1482]: time="2025-05-17T00:08:53.294941969Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.363355934s" May 17 00:08:53.295065 containerd[1482]: time="2025-05-17T00:08:53.294997291Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" May 17 00:08:53.295742 containerd[1482]: time="2025-05-17T00:08:53.295580435Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 17 00:08:53.756756 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2426102985.mount: Deactivated successfully. 
May 17 00:08:53.768348 containerd[1482]: time="2025-05-17T00:08:53.768176223Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:08:53.770395 containerd[1482]: time="2025-05-17T00:08:53.770282989Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268723" May 17 00:08:53.771556 containerd[1482]: time="2025-05-17T00:08:53.771497479Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:08:53.777307 containerd[1482]: time="2025-05-17T00:08:53.777232234Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:08:53.778566 containerd[1482]: time="2025-05-17T00:08:53.778014026Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 482.39347ms" May 17 00:08:53.778566 containerd[1482]: time="2025-05-17T00:08:53.778058588Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" May 17 00:08:53.779260 containerd[1482]: time="2025-05-17T00:08:53.778979546Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" May 17 00:08:54.414132 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount202614873.mount: Deactivated successfully. May 17 00:08:57.433487 containerd[1482]: time="2025-05-17T00:08:57.432230095Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:08:57.433487 containerd[1482]: time="2025-05-17T00:08:57.433447863Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406533" May 17 00:08:57.434107 containerd[1482]: time="2025-05-17T00:08:57.434079127Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:08:57.437420 containerd[1482]: time="2025-05-17T00:08:57.437383536Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:08:57.438692 containerd[1482]: time="2025-05-17T00:08:57.438646625Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 3.659630358s" May 17 00:08:57.438692 containerd[1482]: time="2025-05-17T00:08:57.438689667Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" May 17 00:09:02.612155 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12. 
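The pull lines report both bytes read and wall time, so effective throughput falls out directly: the etcd pull above moved 66,535,646 bytes in ~3.66 s, about 18 MB/s. A one-liner for the same arithmetic, with the figures taken from the log entry:

    # Pull rate implied by the etcd entry above (bytes / seconds / 1e6):
    awk 'BEGIN { printf "%.1f MB/s\n", 66535646 / 3.6596 / 1e6 }'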
May 17 00:09:02.624989 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:09:02.749134 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:09:02.755129 (kubelet)[2229]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 00:09:02.798790 kubelet[2229]: E0517 00:09:02.798395 2229 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:09:02.803309 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:09:02.803627 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:09:02.836406 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:09:02.847142 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:09:02.883511 systemd[1]: Reloading requested from client PID 2243 ('systemctl') (unit session-7.scope)... May 17 00:09:02.883527 systemd[1]: Reloading... May 17 00:09:03.009797 zram_generator::config[2283]: No configuration found. May 17 00:09:03.119576 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:09:03.191274 systemd[1]: Reloading finished in 307 ms. May 17 00:09:03.254393 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 17 00:09:03.254504 systemd[1]: kubelet.service: Failed with result 'signal'. May 17 00:09:03.254980 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:09:03.263348 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:09:03.416001 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:09:03.428410 (kubelet)[2333]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 17 00:09:03.471259 kubelet[2333]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 00:09:03.471600 kubelet[2333]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 17 00:09:03.471650 kubelet[2333]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 00:09:03.471808 kubelet[2333]: I0517 00:09:03.471754 2333 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 17 00:09:04.048981 sshd[2062]: Connection closed by 104.234.115.41 port 41744 [preauth] May 17 00:09:04.055014 systemd[1]: sshd@8-188.245.126.139:22-104.234.115.41:41744.service: Deactivated successfully. 
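During the reload, systemd rewrites docker.socket's legacy /var/run/docker.sock listener to /run/docker.sock and asks for the unit file to be updated. The permanent fix it is hinting at is a drop-in that clears and resets ListenStream; unit name and path are taken from the warning itself, while the drop-in filename is an assumption:

    # Hypothetical drop-in resolving the /var/run deprecation warning.
    mkdir -p /etc/systemd/system/docker.socket.d
    cat <<'EOF' >/etc/systemd/system/docker.socket.d/10-listen-path.conf
    [Socket]
    ListenStream=
    ListenStream=/run/docker.sock
    EOF
    systemctl daemon-reload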
May 17 00:09:04.383410 kubelet[2333]: I0517 00:09:04.383346 2333 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" May 17 00:09:04.383410 kubelet[2333]: I0517 00:09:04.383389 2333 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 17 00:09:04.383721 kubelet[2333]: I0517 00:09:04.383690 2333 server.go:934] "Client rotation is on, will bootstrap in background" May 17 00:09:04.409339 kubelet[2333]: E0517 00:09:04.409179 2333 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://188.245.126.139:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 188.245.126.139:6443: connect: connection refused" logger="UnhandledError" May 17 00:09:04.411678 kubelet[2333]: I0517 00:09:04.411633 2333 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 17 00:09:04.423036 kubelet[2333]: E0517 00:09:04.422978 2333 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 17 00:09:04.423036 kubelet[2333]: I0517 00:09:04.423028 2333 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 17 00:09:04.428091 kubelet[2333]: I0517 00:09:04.428025 2333 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 17 00:09:04.428412 kubelet[2333]: I0517 00:09:04.428362 2333 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 17 00:09:04.428614 kubelet[2333]: I0517 00:09:04.428562 2333 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 17 00:09:04.428858 kubelet[2333]: I0517 00:09:04.428602 2333 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ci-4081-3-3-n-e61ddff57a","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 17 00:09:04.428984 kubelet[2333]: I0517 00:09:04.428904 2333 topology_manager.go:138] "Creating topology manager with none policy" May 17 00:09:04.428984 kubelet[2333]: I0517 00:09:04.428917 2333 container_manager_linux.go:300] "Creating device plugin manager" May 17 00:09:04.429255 kubelet[2333]: I0517 00:09:04.429220 2333 state_mem.go:36] "Initialized new in-memory state store" May 17 00:09:04.434310 kubelet[2333]: I0517 00:09:04.434247 2333 kubelet.go:408] "Attempting to sync node with API server" May 17 00:09:04.434310 kubelet[2333]: I0517 00:09:04.434302 2333 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 17 00:09:04.434412 kubelet[2333]: I0517 00:09:04.434340 2333 kubelet.go:314] "Adding apiserver pod source" May 17 00:09:04.434443 kubelet[2333]: I0517 00:09:04.434422 2333 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 17 00:09:04.439228 kubelet[2333]: W0517 00:09:04.438612 2333 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://188.245.126.139:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-3-n-e61ddff57a&limit=500&resourceVersion=0": dial tcp 188.245.126.139:6443: connect: connection refused May 17 00:09:04.439228 kubelet[2333]: E0517 00:09:04.438675 2333 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://188.245.126.139:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-3-n-e61ddff57a&limit=500&resourceVersion=0\": dial tcp 188.245.126.139:6443: connect: connection refused" logger="UnhandledError" May 17 00:09:04.440161 kubelet[2333]: W0517 00:09:04.440111 2333 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://188.245.126.139:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 188.245.126.139:6443: connect: connection refused May 17 00:09:04.440303 
kubelet[2333]: E0517 00:09:04.440283 2333 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://188.245.126.139:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 188.245.126.139:6443: connect: connection refused" logger="UnhandledError" May 17 00:09:04.440709 kubelet[2333]: I0517 00:09:04.440684 2333 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 17 00:09:04.441578 kubelet[2333]: I0517 00:09:04.441550 2333 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 17 00:09:04.441909 kubelet[2333]: W0517 00:09:04.441895 2333 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 17 00:09:04.444836 kubelet[2333]: I0517 00:09:04.444617 2333 server.go:1274] "Started kubelet" May 17 00:09:04.445138 kubelet[2333]: I0517 00:09:04.445093 2333 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 17 00:09:04.446348 kubelet[2333]: I0517 00:09:04.446128 2333 server.go:449] "Adding debug handlers to kubelet server" May 17 00:09:04.448731 kubelet[2333]: I0517 00:09:04.448639 2333 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 17 00:09:04.449259 kubelet[2333]: I0517 00:09:04.449146 2333 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 17 00:09:04.451095 kubelet[2333]: I0517 00:09:04.449725 2333 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 17 00:09:04.451095 kubelet[2333]: I0517 00:09:04.449820 2333 volume_manager.go:289] "Starting Kubelet Volume Manager" May 17 00:09:04.451095 kubelet[2333]: I0517 00:09:04.450398 2333 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 17 00:09:04.455078 kubelet[2333]: I0517 00:09:04.455041 2333 desired_state_of_world_populator.go:147] "Desired state populator starts to run" May 17 00:09:04.455208 kubelet[2333]: I0517 00:09:04.455130 2333 reconciler.go:26] "Reconciler: start to sync state" May 17 00:09:04.458877 kubelet[2333]: E0517 00:09:04.457952 2333 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-3-n-e61ddff57a\" not found" May 17 00:09:04.459613 kubelet[2333]: W0517 00:09:04.459544 2333 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://188.245.126.139:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 188.245.126.139:6443: connect: connection refused May 17 00:09:04.459690 kubelet[2333]: E0517 00:09:04.459616 2333 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://188.245.126.139:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 188.245.126.139:6443: connect: connection refused" logger="UnhandledError" May 17 00:09:04.462577 kubelet[2333]: I0517 00:09:04.462535 2333 factory.go:221] Registration of the systemd container factory successfully May 17 00:09:04.462679 kubelet[2333]: I0517 00:09:04.462661 2333 factory.go:219] Registration of the crio container factory failed: Get 
"http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 17 00:09:04.463917 kubelet[2333]: E0517 00:09:04.459818 2333 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://188.245.126.139:6443/api/v1/namespaces/default/events\": dial tcp 188.245.126.139:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-3-n-e61ddff57a.184027eab5bbfe42 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-3-n-e61ddff57a,UID:ci-4081-3-3-n-e61ddff57a,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-3-n-e61ddff57a,},FirstTimestamp:2025-05-17 00:09:04.444579394 +0000 UTC m=+1.009501709,LastTimestamp:2025-05-17 00:09:04.444579394 +0000 UTC m=+1.009501709,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-3-n-e61ddff57a,}" May 17 00:09:04.464971 kubelet[2333]: I0517 00:09:04.464941 2333 factory.go:221] Registration of the containerd container factory successfully May 17 00:09:04.465942 kubelet[2333]: E0517 00:09:04.465661 2333 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://188.245.126.139:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-3-n-e61ddff57a?timeout=10s\": dial tcp 188.245.126.139:6443: connect: connection refused" interval="200ms" May 17 00:09:04.477533 kubelet[2333]: I0517 00:09:04.477446 2333 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 17 00:09:04.479478 kubelet[2333]: I0517 00:09:04.478906 2333 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 17 00:09:04.479478 kubelet[2333]: I0517 00:09:04.478959 2333 status_manager.go:217] "Starting to sync pod status with apiserver" May 17 00:09:04.479478 kubelet[2333]: I0517 00:09:04.478995 2333 kubelet.go:2321] "Starting kubelet main sync loop" May 17 00:09:04.479478 kubelet[2333]: E0517 00:09:04.479080 2333 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 17 00:09:04.489118 kubelet[2333]: W0517 00:09:04.489031 2333 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://188.245.126.139:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 188.245.126.139:6443: connect: connection refused May 17 00:09:04.489118 kubelet[2333]: E0517 00:09:04.489109 2333 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://188.245.126.139:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 188.245.126.139:6443: connect: connection refused" logger="UnhandledError" May 17 00:09:04.496221 kubelet[2333]: I0517 00:09:04.496134 2333 cpu_manager.go:214] "Starting CPU manager" policy="none" May 17 00:09:04.496421 kubelet[2333]: I0517 00:09:04.496401 2333 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 17 00:09:04.496537 kubelet[2333]: I0517 00:09:04.496523 2333 state_mem.go:36] "Initialized new in-memory state store" May 17 00:09:04.499236 kubelet[2333]: I0517 00:09:04.499203 2333 policy_none.go:49] "None policy: Start" May 17 00:09:04.500502 kubelet[2333]: I0517 00:09:04.500476 2333 memory_manager.go:170] "Starting memorymanager" policy="None" May 17 00:09:04.500641 kubelet[2333]: I0517 00:09:04.500536 2333 state_mem.go:35] "Initializing new in-memory state store" May 17 00:09:04.508011 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 17 00:09:04.522665 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 17 00:09:04.526259 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 17 00:09:04.537406 kubelet[2333]: I0517 00:09:04.537340 2333 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 17 00:09:04.537924 kubelet[2333]: I0517 00:09:04.537746 2333 eviction_manager.go:189] "Eviction manager: starting control loop" May 17 00:09:04.537924 kubelet[2333]: I0517 00:09:04.537800 2333 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 17 00:09:04.540341 kubelet[2333]: I0517 00:09:04.539914 2333 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 17 00:09:04.543988 kubelet[2333]: E0517 00:09:04.543948 2333 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-3-n-e61ddff57a\" not found" May 17 00:09:04.592271 systemd[1]: Created slice kubepods-burstable-pod82638968ed4b293c6103958541c34e92.slice - libcontainer container kubepods-burstable-pod82638968ed4b293c6103958541c34e92.slice. May 17 00:09:04.606884 systemd[1]: Created slice kubepods-burstable-pod17bbdbff10760d89bba8c55fa3866c4f.slice - libcontainer container kubepods-burstable-pod17bbdbff10760d89bba8c55fa3866c4f.slice. 
May 17 00:09:04.619519 systemd[1]: Created slice kubepods-burstable-pod1784941641dbc07410c4436487bccf30.slice - libcontainer container kubepods-burstable-pod1784941641dbc07410c4436487bccf30.slice. May 17 00:09:04.640734 kubelet[2333]: I0517 00:09:04.640471 2333 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-3-n-e61ddff57a" May 17 00:09:04.641430 kubelet[2333]: E0517 00:09:04.641371 2333 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://188.245.126.139:6443/api/v1/nodes\": dial tcp 188.245.126.139:6443: connect: connection refused" node="ci-4081-3-3-n-e61ddff57a" May 17 00:09:04.656128 kubelet[2333]: I0517 00:09:04.656080 2333 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/82638968ed4b293c6103958541c34e92-ca-certs\") pod \"kube-apiserver-ci-4081-3-3-n-e61ddff57a\" (UID: \"82638968ed4b293c6103958541c34e92\") " pod="kube-system/kube-apiserver-ci-4081-3-3-n-e61ddff57a" May 17 00:09:04.667005 kubelet[2333]: E0517 00:09:04.666862 2333 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://188.245.126.139:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-3-n-e61ddff57a?timeout=10s\": dial tcp 188.245.126.139:6443: connect: connection refused" interval="400ms" May 17 00:09:04.756712 kubelet[2333]: I0517 00:09:04.756643 2333 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/82638968ed4b293c6103958541c34e92-k8s-certs\") pod \"kube-apiserver-ci-4081-3-3-n-e61ddff57a\" (UID: \"82638968ed4b293c6103958541c34e92\") " pod="kube-system/kube-apiserver-ci-4081-3-3-n-e61ddff57a" May 17 00:09:04.756712 kubelet[2333]: I0517 00:09:04.756705 2333 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/82638968ed4b293c6103958541c34e92-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-3-n-e61ddff57a\" (UID: \"82638968ed4b293c6103958541c34e92\") " pod="kube-system/kube-apiserver-ci-4081-3-3-n-e61ddff57a" May 17 00:09:04.756986 kubelet[2333]: I0517 00:09:04.756738 2333 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/17bbdbff10760d89bba8c55fa3866c4f-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-3-n-e61ddff57a\" (UID: \"17bbdbff10760d89bba8c55fa3866c4f\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-e61ddff57a" May 17 00:09:04.756986 kubelet[2333]: I0517 00:09:04.756800 2333 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/17bbdbff10760d89bba8c55fa3866c4f-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-3-n-e61ddff57a\" (UID: \"17bbdbff10760d89bba8c55fa3866c4f\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-e61ddff57a" May 17 00:09:04.756986 kubelet[2333]: I0517 00:09:04.756831 2333 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/17bbdbff10760d89bba8c55fa3866c4f-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-3-n-e61ddff57a\" (UID: \"17bbdbff10760d89bba8c55fa3866c4f\") " 
pod="kube-system/kube-controller-manager-ci-4081-3-3-n-e61ddff57a" May 17 00:09:04.756986 kubelet[2333]: I0517 00:09:04.756859 2333 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/17bbdbff10760d89bba8c55fa3866c4f-ca-certs\") pod \"kube-controller-manager-ci-4081-3-3-n-e61ddff57a\" (UID: \"17bbdbff10760d89bba8c55fa3866c4f\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-e61ddff57a" May 17 00:09:04.756986 kubelet[2333]: I0517 00:09:04.756887 2333 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/17bbdbff10760d89bba8c55fa3866c4f-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-3-n-e61ddff57a\" (UID: \"17bbdbff10760d89bba8c55fa3866c4f\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-e61ddff57a" May 17 00:09:04.757153 kubelet[2333]: I0517 00:09:04.756915 2333 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1784941641dbc07410c4436487bccf30-kubeconfig\") pod \"kube-scheduler-ci-4081-3-3-n-e61ddff57a\" (UID: \"1784941641dbc07410c4436487bccf30\") " pod="kube-system/kube-scheduler-ci-4081-3-3-n-e61ddff57a" May 17 00:09:04.844117 kubelet[2333]: I0517 00:09:04.844058 2333 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-3-n-e61ddff57a" May 17 00:09:04.844713 kubelet[2333]: E0517 00:09:04.844635 2333 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://188.245.126.139:6443/api/v1/nodes\": dial tcp 188.245.126.139:6443: connect: connection refused" node="ci-4081-3-3-n-e61ddff57a" May 17 00:09:04.903555 containerd[1482]: time="2025-05-17T00:09:04.903068078Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-3-n-e61ddff57a,Uid:82638968ed4b293c6103958541c34e92,Namespace:kube-system,Attempt:0,}" May 17 00:09:04.918584 containerd[1482]: time="2025-05-17T00:09:04.918098306Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-3-n-e61ddff57a,Uid:17bbdbff10760d89bba8c55fa3866c4f,Namespace:kube-system,Attempt:0,}" May 17 00:09:04.924883 containerd[1482]: time="2025-05-17T00:09:04.924819511Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-3-n-e61ddff57a,Uid:1784941641dbc07410c4436487bccf30,Namespace:kube-system,Attempt:0,}" May 17 00:09:05.068386 kubelet[2333]: E0517 00:09:05.068332 2333 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://188.245.126.139:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-3-n-e61ddff57a?timeout=10s\": dial tcp 188.245.126.139:6443: connect: connection refused" interval="800ms" May 17 00:09:05.248165 kubelet[2333]: I0517 00:09:05.247730 2333 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-3-n-e61ddff57a" May 17 00:09:05.248329 kubelet[2333]: E0517 00:09:05.248189 2333 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://188.245.126.139:6443/api/v1/nodes\": dial tcp 188.245.126.139:6443: connect: connection refused" node="ci-4081-3-3-n-e61ddff57a" May 17 00:09:05.356653 kubelet[2333]: W0517 00:09:05.356598 2333 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get 
"https://188.245.126.139:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 188.245.126.139:6443: connect: connection refused May 17 00:09:05.356653 kubelet[2333]: E0517 00:09:05.356676 2333 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://188.245.126.139:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 188.245.126.139:6443: connect: connection refused" logger="UnhandledError" May 17 00:09:05.451147 kubelet[2333]: W0517 00:09:05.450956 2333 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://188.245.126.139:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-3-n-e61ddff57a&limit=500&resourceVersion=0": dial tcp 188.245.126.139:6443: connect: connection refused May 17 00:09:05.451147 kubelet[2333]: E0517 00:09:05.451061 2333 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://188.245.126.139:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-3-n-e61ddff57a&limit=500&resourceVersion=0\": dial tcp 188.245.126.139:6443: connect: connection refused" logger="UnhandledError" May 17 00:09:05.499635 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount249624969.mount: Deactivated successfully. May 17 00:09:05.508734 containerd[1482]: time="2025-05-17T00:09:05.507946606Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 00:09:05.509512 containerd[1482]: time="2025-05-17T00:09:05.509468213Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193" May 17 00:09:05.512523 containerd[1482]: time="2025-05-17T00:09:05.512450386Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 00:09:05.514663 containerd[1482]: time="2025-05-17T00:09:05.514530211Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 00:09:05.515874 containerd[1482]: time="2025-05-17T00:09:05.515823371Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 17 00:09:05.517499 containerd[1482]: time="2025-05-17T00:09:05.517327498Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 00:09:05.518713 containerd[1482]: time="2025-05-17T00:09:05.518648539Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 00:09:05.520800 containerd[1482]: time="2025-05-17T00:09:05.519596208Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 601.386018ms" May 17 00:09:05.520800 containerd[1482]: time="2025-05-17T00:09:05.520046662Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 17 00:09:05.526300 containerd[1482]: time="2025-05-17T00:09:05.525922325Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 600.976129ms" May 17 00:09:05.529250 containerd[1482]: time="2025-05-17T00:09:05.529203947Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 626.015385ms" May 17 00:09:05.642533 containerd[1482]: time="2025-05-17T00:09:05.642412228Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:09:05.642533 containerd[1482]: time="2025-05-17T00:09:05.642467990Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:09:05.642533 containerd[1482]: time="2025-05-17T00:09:05.642493030Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:09:05.642892 containerd[1482]: time="2025-05-17T00:09:05.642589313Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:09:05.645101 containerd[1482]: time="2025-05-17T00:09:05.644775661Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:09:05.645101 containerd[1482]: time="2025-05-17T00:09:05.644904385Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:09:05.645101 containerd[1482]: time="2025-05-17T00:09:05.644923506Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:09:05.645101 containerd[1482]: time="2025-05-17T00:09:05.645017389Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:09:05.653914 containerd[1482]: time="2025-05-17T00:09:05.653599456Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:09:05.654844 containerd[1482]: time="2025-05-17T00:09:05.654783173Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:09:05.655144 containerd[1482]: time="2025-05-17T00:09:05.654965138Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:09:05.657205 containerd[1482]: time="2025-05-17T00:09:05.656569988Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:09:05.668045 systemd[1]: Started cri-containerd-c1f09a6b1468910c29c41a2ba2731c5222ac5386fff8f7c93db4335a39487c60.scope - libcontainer container c1f09a6b1468910c29c41a2ba2731c5222ac5386fff8f7c93db4335a39487c60. May 17 00:09:05.694092 systemd[1]: Started cri-containerd-28f7da874c647bd4dbbdf9388053f2af900f882482cad813a241ecdc370b8bcc.scope - libcontainer container 28f7da874c647bd4dbbdf9388053f2af900f882482cad813a241ecdc370b8bcc. May 17 00:09:05.703635 systemd[1]: Started cri-containerd-2143ff419bb4849afd5c914d6e65bcb9278e5e72bd0f41d836dd65c47bc23173.scope - libcontainer container 2143ff419bb4849afd5c914d6e65bcb9278e5e72bd0f41d836dd65c47bc23173. May 17 00:09:05.752748 containerd[1482]: time="2025-05-17T00:09:05.752220563Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-3-n-e61ddff57a,Uid:17bbdbff10760d89bba8c55fa3866c4f,Namespace:kube-system,Attempt:0,} returns sandbox id \"28f7da874c647bd4dbbdf9388053f2af900f882482cad813a241ecdc370b8bcc\"" May 17 00:09:05.757223 kubelet[2333]: W0517 00:09:05.756795 2333 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://188.245.126.139:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 188.245.126.139:6443: connect: connection refused May 17 00:09:05.757223 kubelet[2333]: E0517 00:09:05.757172 2333 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://188.245.126.139:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 188.245.126.139:6443: connect: connection refused" logger="UnhandledError" May 17 00:09:05.763821 containerd[1482]: time="2025-05-17T00:09:05.763389710Z" level=info msg="CreateContainer within sandbox \"28f7da874c647bd4dbbdf9388053f2af900f882482cad813a241ecdc370b8bcc\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 17 00:09:05.765947 containerd[1482]: time="2025-05-17T00:09:05.765649100Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-3-n-e61ddff57a,Uid:1784941641dbc07410c4436487bccf30,Namespace:kube-system,Attempt:0,} returns sandbox id \"c1f09a6b1468910c29c41a2ba2731c5222ac5386fff8f7c93db4335a39487c60\"" May 17 00:09:05.770676 containerd[1482]: time="2025-05-17T00:09:05.770572894Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-3-n-e61ddff57a,Uid:82638968ed4b293c6103958541c34e92,Namespace:kube-system,Attempt:0,} returns sandbox id \"2143ff419bb4849afd5c914d6e65bcb9278e5e72bd0f41d836dd65c47bc23173\"" May 17 00:09:05.772127 containerd[1482]: time="2025-05-17T00:09:05.771963697Z" level=info msg="CreateContainer within sandbox \"c1f09a6b1468910c29c41a2ba2731c5222ac5386fff8f7c93db4335a39487c60\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 17 00:09:05.775246 containerd[1482]: time="2025-05-17T00:09:05.775055633Z" level=info msg="CreateContainer within sandbox \"2143ff419bb4849afd5c914d6e65bcb9278e5e72bd0f41d836dd65c47bc23173\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 17 00:09:05.788325 containerd[1482]: time="2025-05-17T00:09:05.788265644Z" level=info 
msg="CreateContainer within sandbox \"28f7da874c647bd4dbbdf9388053f2af900f882482cad813a241ecdc370b8bcc\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"30d9e051209ed3fc4d70818d3b1aa26d81f59527d073e80890b6fa9afe398140\"" May 17 00:09:05.789260 containerd[1482]: time="2025-05-17T00:09:05.789224354Z" level=info msg="StartContainer for \"30d9e051209ed3fc4d70818d3b1aa26d81f59527d073e80890b6fa9afe398140\"" May 17 00:09:05.797036 containerd[1482]: time="2025-05-17T00:09:05.796977795Z" level=info msg="CreateContainer within sandbox \"2143ff419bb4849afd5c914d6e65bcb9278e5e72bd0f41d836dd65c47bc23173\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"bc46e3a34892e61850bcb99db0c056585834e1119c73aa534199d69b7bd8404c\"" May 17 00:09:05.798055 containerd[1482]: time="2025-05-17T00:09:05.797930584Z" level=info msg="CreateContainer within sandbox \"c1f09a6b1468910c29c41a2ba2731c5222ac5386fff8f7c93db4335a39487c60\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"c195f1da29771002b3c0e423468773757537820b96cd46d4e8097503eb7380fd\"" May 17 00:09:05.798812 containerd[1482]: time="2025-05-17T00:09:05.798427320Z" level=info msg="StartContainer for \"bc46e3a34892e61850bcb99db0c056585834e1119c73aa534199d69b7bd8404c\"" May 17 00:09:05.798812 containerd[1482]: time="2025-05-17T00:09:05.798652527Z" level=info msg="StartContainer for \"c195f1da29771002b3c0e423468773757537820b96cd46d4e8097503eb7380fd\"" May 17 00:09:05.804120 kubelet[2333]: W0517 00:09:05.804046 2333 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://188.245.126.139:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 188.245.126.139:6443: connect: connection refused May 17 00:09:05.804782 kubelet[2333]: E0517 00:09:05.804322 2333 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://188.245.126.139:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 188.245.126.139:6443: connect: connection refused" logger="UnhandledError" May 17 00:09:05.832093 systemd[1]: Started cri-containerd-30d9e051209ed3fc4d70818d3b1aa26d81f59527d073e80890b6fa9afe398140.scope - libcontainer container 30d9e051209ed3fc4d70818d3b1aa26d81f59527d073e80890b6fa9afe398140. May 17 00:09:05.851633 systemd[1]: Started cri-containerd-bc46e3a34892e61850bcb99db0c056585834e1119c73aa534199d69b7bd8404c.scope - libcontainer container bc46e3a34892e61850bcb99db0c056585834e1119c73aa534199d69b7bd8404c. May 17 00:09:05.863039 systemd[1]: Started cri-containerd-c195f1da29771002b3c0e423468773757537820b96cd46d4e8097503eb7380fd.scope - libcontainer container c195f1da29771002b3c0e423468773757537820b96cd46d4e8097503eb7380fd. 
May 17 00:09:05.871253 kubelet[2333]: E0517 00:09:05.869926 2333 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://188.245.126.139:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-3-n-e61ddff57a?timeout=10s\": dial tcp 188.245.126.139:6443: connect: connection refused" interval="1.6s" May 17 00:09:05.901785 kubelet[2333]: E0517 00:09:05.900279 2333 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://188.245.126.139:6443/api/v1/namespaces/default/events\": dial tcp 188.245.126.139:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-3-n-e61ddff57a.184027eab5bbfe42 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-3-n-e61ddff57a,UID:ci-4081-3-3-n-e61ddff57a,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-3-n-e61ddff57a,},FirstTimestamp:2025-05-17 00:09:04.444579394 +0000 UTC m=+1.009501709,LastTimestamp:2025-05-17 00:09:04.444579394 +0000 UTC m=+1.009501709,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-3-n-e61ddff57a,}" May 17 00:09:05.904976 containerd[1482]: time="2025-05-17T00:09:05.904611022Z" level=info msg="StartContainer for \"30d9e051209ed3fc4d70818d3b1aa26d81f59527d073e80890b6fa9afe398140\" returns successfully" May 17 00:09:05.932555 containerd[1482]: time="2025-05-17T00:09:05.932435887Z" level=info msg="StartContainer for \"bc46e3a34892e61850bcb99db0c056585834e1119c73aa534199d69b7bd8404c\" returns successfully" May 17 00:09:05.932555 containerd[1482]: time="2025-05-17T00:09:05.932442407Z" level=info msg="StartContainer for \"c195f1da29771002b3c0e423468773757537820b96cd46d4e8097503eb7380fd\" returns successfully" May 17 00:09:06.051124 kubelet[2333]: I0517 00:09:06.050958 2333 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-3-n-e61ddff57a" May 17 00:09:06.051422 kubelet[2333]: E0517 00:09:06.051379 2333 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://188.245.126.139:6443/api/v1/nodes\": dial tcp 188.245.126.139:6443: connect: connection refused" node="ci-4081-3-3-n-e61ddff57a" May 17 00:09:07.653815 kubelet[2333]: I0517 00:09:07.653434 2333 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-3-n-e61ddff57a" May 17 00:09:08.152490 kubelet[2333]: E0517 00:09:08.152440 2333 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-3-n-e61ddff57a\" not found" node="ci-4081-3-3-n-e61ddff57a" May 17 00:09:08.252789 kubelet[2333]: I0517 00:09:08.251600 2333 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081-3-3-n-e61ddff57a" May 17 00:09:08.442157 kubelet[2333]: I0517 00:09:08.441815 2333 apiserver.go:52] "Watching apiserver" May 17 00:09:08.455671 kubelet[2333]: I0517 00:09:08.455598 2333 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" May 17 00:09:10.382706 systemd[1]: Reloading requested from client PID 2607 ('systemctl') (unit session-7.scope)... May 17 00:09:10.382727 systemd[1]: Reloading... May 17 00:09:10.507792 zram_generator::config[2648]: No configuration found. 
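The registration sequence above is worth flagging: "Attempting to register node" fails at 00:09:06 with the same connection-refused error, and the node is only registered successfully at 00:09:08 once the apiserver answers. A hedged client-go sketch of verifying that registration from outside the kubelet follows; the node name is copied from the log, while the kubeconfig path is an assumption for illustration:

```go
// Sketch (not the kubelet's own code): confirm the node object that the
// registration entries above eventually created. Assumes a readable
// kubeconfig; the path below is illustrative.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.TODO(),
		"ci-4081-3-3-n-e61ddff57a", metav1.GetOptions{}) // node name from the log
	if err != nil {
		fmt.Println("node not registered yet:", err)
		return
	}
	fmt.Println("registered:", node.Name)
}
```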
May 17 00:09:10.628172 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:09:10.716471 systemd[1]: Reloading finished in 333 ms. May 17 00:09:10.760623 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:09:10.775266 systemd[1]: kubelet.service: Deactivated successfully. May 17 00:09:10.775608 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:09:10.775677 systemd[1]: kubelet.service: Consumed 1.473s CPU time, 127.3M memory peak, 0B memory swap peak. May 17 00:09:10.781150 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:09:10.947045 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:09:10.947442 (kubelet)[2692]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 17 00:09:11.002881 kubelet[2692]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 00:09:11.002881 kubelet[2692]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 17 00:09:11.002881 kubelet[2692]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 00:09:11.002881 kubelet[2692]: I0517 00:09:11.002533 2692 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 17 00:09:11.012816 kubelet[2692]: I0517 00:09:11.012333 2692 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" May 17 00:09:11.012816 kubelet[2692]: I0517 00:09:11.012375 2692 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 17 00:09:11.012816 kubelet[2692]: I0517 00:09:11.012746 2692 server.go:934] "Client rotation is on, will bootstrap in background" May 17 00:09:11.019626 kubelet[2692]: I0517 00:09:11.019043 2692 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 17 00:09:11.022580 kubelet[2692]: I0517 00:09:11.022370 2692 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 17 00:09:11.029382 kubelet[2692]: E0517 00:09:11.029008 2692 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 17 00:09:11.029382 kubelet[2692]: I0517 00:09:11.029049 2692 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 17 00:09:11.031373 kubelet[2692]: I0517 00:09:11.031316 2692 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 17 00:09:11.031696 kubelet[2692]: I0517 00:09:11.031517 2692 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 17 00:09:11.031696 kubelet[2692]: I0517 00:09:11.031616 2692 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 17 00:09:11.034224 kubelet[2692]: I0517 00:09:11.031648 2692 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-3-n-e61ddff57a","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 17 00:09:11.034224 kubelet[2692]: I0517 00:09:11.031876 2692 topology_manager.go:138] "Creating topology manager with none policy" May 17 00:09:11.034224 kubelet[2692]: I0517 00:09:11.031887 2692 container_manager_linux.go:300] "Creating device plugin manager" May 17 00:09:11.034224 kubelet[2692]: I0517 00:09:11.031925 2692 state_mem.go:36] "Initialized new in-memory state store" May 17 00:09:11.034224 kubelet[2692]: I0517 00:09:11.032029 2692 kubelet.go:408] "Attempting to sync node with API server" May 17 00:09:11.034508 kubelet[2692]: I0517 00:09:11.032042 2692 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 17 00:09:11.034508 kubelet[2692]: I0517 00:09:11.032061 2692 kubelet.go:314] "Adding apiserver pod source" May 17 00:09:11.034508 kubelet[2692]: I0517 00:09:11.032075 2692 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 17 00:09:11.034508 kubelet[2692]: I0517 00:09:11.034206 2692 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 17 00:09:11.035847 kubelet[2692]: I0517 00:09:11.034739 2692 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 17 00:09:11.035847 kubelet[2692]: I0517 00:09:11.035301 2692 server.go:1274] "Started kubelet" May 17 00:09:11.038555 kubelet[2692]: I0517 00:09:11.038520 2692 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 17 00:09:11.051166 
kubelet[2692]: I0517 00:09:11.051120 2692 volume_manager.go:289] "Starting Kubelet Volume Manager" May 17 00:09:11.051390 kubelet[2692]: E0517 00:09:11.051359 2692 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-3-n-e61ddff57a\" not found" May 17 00:09:11.052325 kubelet[2692]: E0517 00:09:11.051627 2692 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 17 00:09:11.052325 kubelet[2692]: I0517 00:09:11.051682 2692 desired_state_of_world_populator.go:147] "Desired state populator starts to run" May 17 00:09:11.052325 kubelet[2692]: I0517 00:09:11.051904 2692 reconciler.go:26] "Reconciler: start to sync state" May 17 00:09:11.058730 kubelet[2692]: I0517 00:09:11.053956 2692 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 17 00:09:11.058730 kubelet[2692]: I0517 00:09:11.055647 2692 server.go:449] "Adding debug handlers to kubelet server" May 17 00:09:11.060803 kubelet[2692]: I0517 00:09:11.060158 2692 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 17 00:09:11.062786 kubelet[2692]: I0517 00:09:11.061561 2692 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 17 00:09:11.070845 kubelet[2692]: I0517 00:09:11.068500 2692 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 17 00:09:11.071677 kubelet[2692]: I0517 00:09:11.069836 2692 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 17 00:09:11.096812 kubelet[2692]: I0517 00:09:11.096053 2692 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 17 00:09:11.096812 kubelet[2692]: I0517 00:09:11.096087 2692 status_manager.go:217] "Starting to sync pod status with apiserver" May 17 00:09:11.096812 kubelet[2692]: I0517 00:09:11.096104 2692 kubelet.go:2321] "Starting kubelet main sync loop" May 17 00:09:11.096812 kubelet[2692]: E0517 00:09:11.096160 2692 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 17 00:09:11.115608 kubelet[2692]: I0517 00:09:11.115400 2692 factory.go:221] Registration of the containerd container factory successfully May 17 00:09:11.115608 kubelet[2692]: I0517 00:09:11.115424 2692 factory.go:221] Registration of the systemd container factory successfully May 17 00:09:11.115608 kubelet[2692]: I0517 00:09:11.115558 2692 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 17 00:09:11.192325 kubelet[2692]: I0517 00:09:11.192257 2692 cpu_manager.go:214] "Starting CPU manager" policy="none" May 17 00:09:11.192325 kubelet[2692]: I0517 00:09:11.192300 2692 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 17 00:09:11.192325 kubelet[2692]: I0517 00:09:11.192327 2692 state_mem.go:36] "Initialized new in-memory state store" May 17 00:09:11.192789 kubelet[2692]: I0517 00:09:11.192592 2692 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 17 00:09:11.192789 kubelet[2692]: I0517 00:09:11.192605 2692 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 17 00:09:11.192789 kubelet[2692]: I0517 00:09:11.192627 2692 policy_none.go:49] "None policy: Start" May 17 00:09:11.194903 kubelet[2692]: I0517 00:09:11.194541 2692 memory_manager.go:170] "Starting memorymanager" policy="None" May 17 00:09:11.194903 kubelet[2692]: I0517 00:09:11.194586 2692 state_mem.go:35] "Initializing new in-memory state store" May 17 00:09:11.194903 kubelet[2692]: I0517 00:09:11.194799 2692 state_mem.go:75] "Updated machine memory state" May 17 00:09:11.196315 kubelet[2692]: E0517 00:09:11.196248 2692 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 17 00:09:11.204871 kubelet[2692]: I0517 00:09:11.204596 2692 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 17 00:09:11.204871 kubelet[2692]: I0517 00:09:11.204837 2692 eviction_manager.go:189] "Eviction manager: starting control loop" May 17 00:09:11.204871 kubelet[2692]: I0517 00:09:11.204850 2692 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 17 00:09:11.205896 kubelet[2692]: I0517 00:09:11.205543 2692 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 17 00:09:11.320440 kubelet[2692]: I0517 00:09:11.320196 2692 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-3-n-e61ddff57a" May 17 00:09:11.331634 kubelet[2692]: I0517 00:09:11.331403 2692 kubelet_node_status.go:111] "Node was previously registered" node="ci-4081-3-3-n-e61ddff57a" May 17 00:09:11.331634 kubelet[2692]: I0517 00:09:11.331532 2692 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081-3-3-n-e61ddff57a" May 17 00:09:11.411720 kubelet[2692]: E0517 00:09:11.411602 2692 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4081-3-3-n-e61ddff57a\" already 
exists" pod="kube-system/kube-controller-manager-ci-4081-3-3-n-e61ddff57a" May 17 00:09:11.454431 kubelet[2692]: I0517 00:09:11.454377 2692 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/82638968ed4b293c6103958541c34e92-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-3-n-e61ddff57a\" (UID: \"82638968ed4b293c6103958541c34e92\") " pod="kube-system/kube-apiserver-ci-4081-3-3-n-e61ddff57a" May 17 00:09:11.454431 kubelet[2692]: I0517 00:09:11.454433 2692 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/17bbdbff10760d89bba8c55fa3866c4f-ca-certs\") pod \"kube-controller-manager-ci-4081-3-3-n-e61ddff57a\" (UID: \"17bbdbff10760d89bba8c55fa3866c4f\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-e61ddff57a" May 17 00:09:11.454628 kubelet[2692]: I0517 00:09:11.454459 2692 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/17bbdbff10760d89bba8c55fa3866c4f-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-3-n-e61ddff57a\" (UID: \"17bbdbff10760d89bba8c55fa3866c4f\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-e61ddff57a" May 17 00:09:11.454628 kubelet[2692]: I0517 00:09:11.454491 2692 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1784941641dbc07410c4436487bccf30-kubeconfig\") pod \"kube-scheduler-ci-4081-3-3-n-e61ddff57a\" (UID: \"1784941641dbc07410c4436487bccf30\") " pod="kube-system/kube-scheduler-ci-4081-3-3-n-e61ddff57a" May 17 00:09:11.454628 kubelet[2692]: I0517 00:09:11.454513 2692 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/82638968ed4b293c6103958541c34e92-k8s-certs\") pod \"kube-apiserver-ci-4081-3-3-n-e61ddff57a\" (UID: \"82638968ed4b293c6103958541c34e92\") " pod="kube-system/kube-apiserver-ci-4081-3-3-n-e61ddff57a" May 17 00:09:11.455428 kubelet[2692]: I0517 00:09:11.455395 2692 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/17bbdbff10760d89bba8c55fa3866c4f-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-3-n-e61ddff57a\" (UID: \"17bbdbff10760d89bba8c55fa3866c4f\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-e61ddff57a" May 17 00:09:11.455560 kubelet[2692]: I0517 00:09:11.455438 2692 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/17bbdbff10760d89bba8c55fa3866c4f-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-3-n-e61ddff57a\" (UID: \"17bbdbff10760d89bba8c55fa3866c4f\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-e61ddff57a" May 17 00:09:11.455560 kubelet[2692]: I0517 00:09:11.455488 2692 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/17bbdbff10760d89bba8c55fa3866c4f-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-3-n-e61ddff57a\" (UID: \"17bbdbff10760d89bba8c55fa3866c4f\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-e61ddff57a" May 17 00:09:11.455560 
kubelet[2692]: I0517 00:09:11.455517 2692 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/82638968ed4b293c6103958541c34e92-ca-certs\") pod \"kube-apiserver-ci-4081-3-3-n-e61ddff57a\" (UID: \"82638968ed4b293c6103958541c34e92\") " pod="kube-system/kube-apiserver-ci-4081-3-3-n-e61ddff57a" May 17 00:09:12.034863 kubelet[2692]: I0517 00:09:12.033971 2692 apiserver.go:52] "Watching apiserver" May 17 00:09:12.053080 kubelet[2692]: I0517 00:09:12.053038 2692 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" May 17 00:09:12.165831 kubelet[2692]: E0517 00:09:12.164222 2692 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081-3-3-n-e61ddff57a\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-3-n-e61ddff57a" May 17 00:09:12.191811 kubelet[2692]: I0517 00:09:12.190589 2692 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-3-n-e61ddff57a" podStartSLOduration=1.190549885 podStartE2EDuration="1.190549885s" podCreationTimestamp="2025-05-17 00:09:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:09:12.186557672 +0000 UTC m=+1.232757075" watchObservedRunningTime="2025-05-17 00:09:12.190549885 +0000 UTC m=+1.236749248" May 17 00:09:12.201714 kubelet[2692]: I0517 00:09:12.201516 2692 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-3-n-e61ddff57a" podStartSLOduration=3.201456651 podStartE2EDuration="3.201456651s" podCreationTimestamp="2025-05-17 00:09:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:09:12.200807775 +0000 UTC m=+1.247007138" watchObservedRunningTime="2025-05-17 00:09:12.201456651 +0000 UTC m=+1.247656014" May 17 00:09:12.214597 kubelet[2692]: I0517 00:09:12.214200 2692 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-3-n-e61ddff57a" podStartSLOduration=1.214174045 podStartE2EDuration="1.214174045s" podCreationTimestamp="2025-05-17 00:09:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:09:12.212927053 +0000 UTC m=+1.259126456" watchObservedRunningTime="2025-05-17 00:09:12.214174045 +0000 UTC m=+1.260373528" May 17 00:09:15.721350 kubelet[2692]: I0517 00:09:15.721179 2692 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 17 00:09:15.721730 containerd[1482]: time="2025-05-17T00:09:15.721603866Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 17 00:09:15.722072 kubelet[2692]: I0517 00:09:15.721939 2692 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 17 00:09:16.633788 systemd[1]: Created slice kubepods-besteffort-pod73ecafb4_890e_499e_88e4_bd44c1b42752.slice - libcontainer container kubepods-besteffort-pod73ecafb4_890e_499e_88e4_bd44c1b42752.slice. 
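Shortly before the kube-proxy pod is created above, the kubelet pushes the runtime config with CIDR="192.168.0.0/24" and flips the node's Pod CIDR from empty to that range. A small standard-library sketch (ours, not kubelet code) makes the allocation concrete:

```go
// Parse the newPodCIDR value reported in the log and show its size.
package main

import (
	"fmt"
	"net"
)

func main() {
	_, ipnet, err := net.ParseCIDR("192.168.0.0/24") // newPodCIDR from the log
	if err != nil {
		panic(err)
	}
	ones, bits := ipnet.Mask.Size()
	fmt.Printf("pod range %s: %d addresses\n", ipnet, 1<<(bits-ones)) // 256
}
```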
May 17 00:09:16.688939 kubelet[2692]: I0517 00:09:16.688835 2692 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/73ecafb4-890e-499e-88e4-bd44c1b42752-kube-proxy\") pod \"kube-proxy-9g6vs\" (UID: \"73ecafb4-890e-499e-88e4-bd44c1b42752\") " pod="kube-system/kube-proxy-9g6vs" May 17 00:09:16.688939 kubelet[2692]: I0517 00:09:16.688920 2692 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/73ecafb4-890e-499e-88e4-bd44c1b42752-xtables-lock\") pod \"kube-proxy-9g6vs\" (UID: \"73ecafb4-890e-499e-88e4-bd44c1b42752\") " pod="kube-system/kube-proxy-9g6vs" May 17 00:09:16.689169 kubelet[2692]: I0517 00:09:16.688972 2692 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/73ecafb4-890e-499e-88e4-bd44c1b42752-lib-modules\") pod \"kube-proxy-9g6vs\" (UID: \"73ecafb4-890e-499e-88e4-bd44c1b42752\") " pod="kube-system/kube-proxy-9g6vs" May 17 00:09:16.689169 kubelet[2692]: I0517 00:09:16.689011 2692 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dm7r\" (UniqueName: \"kubernetes.io/projected/73ecafb4-890e-499e-88e4-bd44c1b42752-kube-api-access-2dm7r\") pod \"kube-proxy-9g6vs\" (UID: \"73ecafb4-890e-499e-88e4-bd44c1b42752\") " pod="kube-system/kube-proxy-9g6vs" May 17 00:09:16.865981 systemd[1]: Created slice kubepods-besteffort-pod729e93f5_88f7_42a2_b081_df4210270944.slice - libcontainer container kubepods-besteffort-pod729e93f5_88f7_42a2_b081_df4210270944.slice. May 17 00:09:16.890814 kubelet[2692]: I0517 00:09:16.890572 2692 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/729e93f5-88f7-42a2-b081-df4210270944-var-lib-calico\") pod \"tigera-operator-7c5755cdcb-6mhqk\" (UID: \"729e93f5-88f7-42a2-b081-df4210270944\") " pod="tigera-operator/tigera-operator-7c5755cdcb-6mhqk" May 17 00:09:16.890814 kubelet[2692]: I0517 00:09:16.890660 2692 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqphm\" (UniqueName: \"kubernetes.io/projected/729e93f5-88f7-42a2-b081-df4210270944-kube-api-access-wqphm\") pod \"tigera-operator-7c5755cdcb-6mhqk\" (UID: \"729e93f5-88f7-42a2-b081-df4210270944\") " pod="tigera-operator/tigera-operator-7c5755cdcb-6mhqk" May 17 00:09:16.946200 containerd[1482]: time="2025-05-17T00:09:16.944410732Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9g6vs,Uid:73ecafb4-890e-499e-88e4-bd44c1b42752,Namespace:kube-system,Attempt:0,}" May 17 00:09:16.970893 containerd[1482]: time="2025-05-17T00:09:16.970634900Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:09:16.970893 containerd[1482]: time="2025-05-17T00:09:16.970717380Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:09:16.970893 containerd[1482]: time="2025-05-17T00:09:16.970739500Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:09:16.972092 containerd[1482]: time="2025-05-17T00:09:16.971989176Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:09:17.009153 systemd[1]: Started cri-containerd-5fedc0ecd9c971f3e69171e9a57aeb8c1ef6f6c134bcaafb5596f51c990c0afb.scope - libcontainer container 5fedc0ecd9c971f3e69171e9a57aeb8c1ef6f6c134bcaafb5596f51c990c0afb. May 17 00:09:17.040799 containerd[1482]: time="2025-05-17T00:09:17.040730905Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9g6vs,Uid:73ecafb4-890e-499e-88e4-bd44c1b42752,Namespace:kube-system,Attempt:0,} returns sandbox id \"5fedc0ecd9c971f3e69171e9a57aeb8c1ef6f6c134bcaafb5596f51c990c0afb\"" May 17 00:09:17.047175 containerd[1482]: time="2025-05-17T00:09:17.047110254Z" level=info msg="CreateContainer within sandbox \"5fedc0ecd9c971f3e69171e9a57aeb8c1ef6f6c134bcaafb5596f51c990c0afb\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 17 00:09:17.061082 containerd[1482]: time="2025-05-17T00:09:17.061034149Z" level=info msg="CreateContainer within sandbox \"5fedc0ecd9c971f3e69171e9a57aeb8c1ef6f6c134bcaafb5596f51c990c0afb\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0035033b0f5a8b3a0509dde9ab1a68e7be89f244a58af32a3432dad556267a9a\"" May 17 00:09:17.062140 containerd[1482]: time="2025-05-17T00:09:17.062095787Z" level=info msg="StartContainer for \"0035033b0f5a8b3a0509dde9ab1a68e7be89f244a58af32a3432dad556267a9a\"" May 17 00:09:17.097988 systemd[1]: Started cri-containerd-0035033b0f5a8b3a0509dde9ab1a68e7be89f244a58af32a3432dad556267a9a.scope - libcontainer container 0035033b0f5a8b3a0509dde9ab1a68e7be89f244a58af32a3432dad556267a9a. May 17 00:09:17.133585 containerd[1482]: time="2025-05-17T00:09:17.133512138Z" level=info msg="StartContainer for \"0035033b0f5a8b3a0509dde9ab1a68e7be89f244a58af32a3432dad556267a9a\" returns successfully" May 17 00:09:17.171636 containerd[1482]: time="2025-05-17T00:09:17.170928630Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7c5755cdcb-6mhqk,Uid:729e93f5-88f7-42a2-b081-df4210270944,Namespace:tigera-operator,Attempt:0,}" May 17 00:09:17.203476 containerd[1482]: time="2025-05-17T00:09:17.203315491Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:09:17.203476 containerd[1482]: time="2025-05-17T00:09:17.203396291Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:09:17.203792 containerd[1482]: time="2025-05-17T00:09:17.203407731Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:09:17.203792 containerd[1482]: time="2025-05-17T00:09:17.203568611Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:09:17.227951 systemd[1]: Started cri-containerd-7b073911728d0a300ba7dc9982b85c3bf71dd6def0499491e191182dd757ea99.scope - libcontainer container 7b073911728d0a300ba7dc9982b85c3bf71dd6def0499491e191182dd757ea99. 
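Each shim launch above is preceded by the same four containerd logfmt entries, from "loading plugin \"io.containerd.event.v1.publisher\"..." through "...ttrpc.v1.pause". When sifting these by hand, a small extractor for the time/level/msg fields helps; this helper is an assumption of ours for reading the log, not containerd tooling:

```go
// Pull time, level and msg out of the logfmt-style containerd lines above.
// Handles the escaped quotes (\") that appear inside msg values.
package main

import (
	"fmt"
	"regexp"
)

var entry = regexp.MustCompile(`time="([^"]+)"\s+level=(\w+)\s+msg="((?:[^"\\]|\\.)*)"`)

func main() {
	line := `time="2025-05-17T00:09:17.203568611Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..."`
	if m := entry.FindStringSubmatch(line); m != nil {
		fmt.Printf("time=%s level=%s msg=%s\n", m[1], m[2], m[3])
	}
}
```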
May 17 00:09:17.280715 containerd[1482]: time="2025-05-17T00:09:17.280510792Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7c5755cdcb-6mhqk,Uid:729e93f5-88f7-42a2-b081-df4210270944,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"7b073911728d0a300ba7dc9982b85c3bf71dd6def0499491e191182dd757ea99\"" May 17 00:09:17.283439 containerd[1482]: time="2025-05-17T00:09:17.283219387Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.0\"" May 17 00:09:17.316174 kubelet[2692]: I0517 00:09:17.315674 2692 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-9g6vs" podStartSLOduration=1.315653008 podStartE2EDuration="1.315653008s" podCreationTimestamp="2025-05-17 00:09:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:09:17.185011284 +0000 UTC m=+6.231210647" watchObservedRunningTime="2025-05-17 00:09:17.315653008 +0000 UTC m=+6.361852371" May 17 00:09:19.087133 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4155521719.mount: Deactivated successfully. May 17 00:09:19.532556 containerd[1482]: time="2025-05-17T00:09:19.532496919Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:09:19.533858 containerd[1482]: time="2025-05-17T00:09:19.533616639Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.0: active requests=0, bytes read=22143480" May 17 00:09:19.534807 containerd[1482]: time="2025-05-17T00:09:19.534630519Z" level=info msg="ImageCreate event name:\"sha256:171854d50ba608218142ad5d32c7dd12ce55d536f02872e56e7c04c1f0a96a6b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:09:19.539128 containerd[1482]: time="2025-05-17T00:09:19.539083679Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:e0a34b265aebce1a2db906d8dad99190706e8bf3910cae626b9c2eb6bbb21775\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:09:19.543310 containerd[1482]: time="2025-05-17T00:09:19.542951439Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.0\" with image id \"sha256:171854d50ba608218142ad5d32c7dd12ce55d536f02872e56e7c04c1f0a96a6b\", repo tag \"quay.io/tigera/operator:v1.38.0\", repo digest \"quay.io/tigera/operator@sha256:e0a34b265aebce1a2db906d8dad99190706e8bf3910cae626b9c2eb6bbb21775\", size \"22139475\" in 2.259577652s" May 17 00:09:19.543310 containerd[1482]: time="2025-05-17T00:09:19.542998679Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.0\" returns image reference \"sha256:171854d50ba608218142ad5d32c7dd12ce55d536f02872e56e7c04c1f0a96a6b\"" May 17 00:09:19.549388 containerd[1482]: time="2025-05-17T00:09:19.548715999Z" level=info msg="CreateContainer within sandbox \"7b073911728d0a300ba7dc9982b85c3bf71dd6def0499491e191182dd757ea99\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" May 17 00:09:19.566807 containerd[1482]: time="2025-05-17T00:09:19.566692478Z" level=info msg="CreateContainer within sandbox \"7b073911728d0a300ba7dc9982b85c3bf71dd6def0499491e191182dd757ea99\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"5ea77333faf87acabffc0fc4f9acff8d8ab7e7689df0ac299455dbd5401212cf\"" May 17 00:09:19.569494 containerd[1482]: time="2025-05-17T00:09:19.569169438Z" level=info msg="StartContainer for \"5ea77333faf87acabffc0fc4f9acff8d8ab7e7689df0ac299455dbd5401212cf\"" May 
17 00:09:19.601057 systemd[1]: Started cri-containerd-5ea77333faf87acabffc0fc4f9acff8d8ab7e7689df0ac299455dbd5401212cf.scope - libcontainer container 5ea77333faf87acabffc0fc4f9acff8d8ab7e7689df0ac299455dbd5401212cf. May 17 00:09:19.630605 containerd[1482]: time="2025-05-17T00:09:19.630386275Z" level=info msg="StartContainer for \"5ea77333faf87acabffc0fc4f9acff8d8ab7e7689df0ac299455dbd5401212cf\" returns successfully" May 17 00:09:22.557954 kubelet[2692]: I0517 00:09:22.557720 2692 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7c5755cdcb-6mhqk" podStartSLOduration=4.295077787 podStartE2EDuration="6.557702158s" podCreationTimestamp="2025-05-17 00:09:16 +0000 UTC" firstStartedPulling="2025-05-17 00:09:17.282501348 +0000 UTC m=+6.328700711" lastFinishedPulling="2025-05-17 00:09:19.545125719 +0000 UTC m=+8.591325082" observedRunningTime="2025-05-17 00:09:20.191139935 +0000 UTC m=+9.237339338" watchObservedRunningTime="2025-05-17 00:09:22.557702158 +0000 UTC m=+11.603901521" May 17 00:09:26.001475 sudo[1850]: pam_unix(sudo:session): session closed for user root May 17 00:09:26.164143 sshd[1847]: pam_unix(sshd:session): session closed for user core May 17 00:09:26.168909 systemd[1]: sshd@7-188.245.126.139:22-139.178.68.195:56728.service: Deactivated successfully. May 17 00:09:26.175033 systemd[1]: session-7.scope: Deactivated successfully. May 17 00:09:26.175490 systemd[1]: session-7.scope: Consumed 7.094s CPU time, 149.8M memory peak, 0B memory swap peak. May 17 00:09:26.180161 systemd-logind[1461]: Session 7 logged out. Waiting for processes to exit. May 17 00:09:26.181880 systemd-logind[1461]: Removed session 7. May 17 00:09:34.302721 systemd[1]: Created slice kubepods-besteffort-pod52ecb16a_d081_4af4_a6c5_f736a996b8f9.slice - libcontainer container kubepods-besteffort-pod52ecb16a_d081_4af4_a6c5_f736a996b8f9.slice. May 17 00:09:34.399550 kubelet[2692]: I0517 00:09:34.398740 2692 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/52ecb16a-d081-4af4-a6c5-f736a996b8f9-tigera-ca-bundle\") pod \"calico-typha-7cc9d6d8d-wqzrx\" (UID: \"52ecb16a-d081-4af4-a6c5-f736a996b8f9\") " pod="calico-system/calico-typha-7cc9d6d8d-wqzrx" May 17 00:09:34.399550 kubelet[2692]: I0517 00:09:34.398814 2692 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/52ecb16a-d081-4af4-a6c5-f736a996b8f9-typha-certs\") pod \"calico-typha-7cc9d6d8d-wqzrx\" (UID: \"52ecb16a-d081-4af4-a6c5-f736a996b8f9\") " pod="calico-system/calico-typha-7cc9d6d8d-wqzrx" May 17 00:09:34.399550 kubelet[2692]: I0517 00:09:34.398837 2692 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jp5lv\" (UniqueName: \"kubernetes.io/projected/52ecb16a-d081-4af4-a6c5-f736a996b8f9-kube-api-access-jp5lv\") pod \"calico-typha-7cc9d6d8d-wqzrx\" (UID: \"52ecb16a-d081-4af4-a6c5-f736a996b8f9\") " pod="calico-system/calico-typha-7cc9d6d8d-wqzrx" May 17 00:09:34.531749 systemd[1]: Created slice kubepods-besteffort-pod3f9baa28_81bf_4803_89f2_f7d2376ef919.slice - libcontainer container kubepods-besteffort-pod3f9baa28_81bf_4803_89f2_f7d2376ef919.slice. 
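The tigera-operator startup-latency entry above (00:09:22) shows how the kubelet separates the two durations it reports: podStartE2EDuration="6.557702158s" covers pod creation to running, while podStartSLOduration=4.295077787 excludes the image-pull window between firstStartedPulling and lastFinishedPulling. The arithmetic checks out against the log exactly; a sketch, with the timestamps copied from the entry (monotonic m=+ suffixes dropped):

```go
// SLO duration = end-to-end duration minus the image-pull window,
// using the figures from the pod_startup_latency_tracker entry above.
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}
	firstPull := parse("2025-05-17 00:09:17.282501348 +0000 UTC") // firstStartedPulling
	lastPull := parse("2025-05-17 00:09:19.545125719 +0000 UTC")  // lastFinishedPulling
	e2e := 6557702158 * time.Nanosecond                           // podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull)
	fmt.Println(slo) // 4.295077787s, matching podStartSLOduration
}
```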
May 17 00:09:34.607938 containerd[1482]: time="2025-05-17T00:09:34.607814286Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7cc9d6d8d-wqzrx,Uid:52ecb16a-d081-4af4-a6c5-f736a996b8f9,Namespace:calico-system,Attempt:0,}" May 17 00:09:34.661134 containerd[1482]: time="2025-05-17T00:09:34.659903698Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:09:34.661134 containerd[1482]: time="2025-05-17T00:09:34.659982898Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:09:34.661134 containerd[1482]: time="2025-05-17T00:09:34.660002299Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:09:34.661134 containerd[1482]: time="2025-05-17T00:09:34.660086019Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:09:34.702124 systemd[1]: Started cri-containerd-a587793d957c86bb5efc490fe545d2220b2390e757a71bd37f828652f1118543.scope - libcontainer container a587793d957c86bb5efc490fe545d2220b2390e757a71bd37f828652f1118543. May 17 00:09:34.706896 kubelet[2692]: I0517 00:09:34.705394 2692 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/3f9baa28-81bf-4803-89f2-f7d2376ef919-cni-log-dir\") pod \"calico-node-pqvnr\" (UID: \"3f9baa28-81bf-4803-89f2-f7d2376ef919\") " pod="calico-system/calico-node-pqvnr" May 17 00:09:34.706896 kubelet[2692]: I0517 00:09:34.705687 2692 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3f9baa28-81bf-4803-89f2-f7d2376ef919-tigera-ca-bundle\") pod \"calico-node-pqvnr\" (UID: \"3f9baa28-81bf-4803-89f2-f7d2376ef919\") " pod="calico-system/calico-node-pqvnr" May 17 00:09:34.706896 kubelet[2692]: I0517 00:09:34.705711 2692 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/3f9baa28-81bf-4803-89f2-f7d2376ef919-policysync\") pod \"calico-node-pqvnr\" (UID: \"3f9baa28-81bf-4803-89f2-f7d2376ef919\") " pod="calico-system/calico-node-pqvnr" May 17 00:09:34.706896 kubelet[2692]: I0517 00:09:34.705729 2692 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/3f9baa28-81bf-4803-89f2-f7d2376ef919-cni-net-dir\") pod \"calico-node-pqvnr\" (UID: \"3f9baa28-81bf-4803-89f2-f7d2376ef919\") " pod="calico-system/calico-node-pqvnr" May 17 00:09:34.706896 kubelet[2692]: I0517 00:09:34.705750 2692 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/3f9baa28-81bf-4803-89f2-f7d2376ef919-flexvol-driver-host\") pod \"calico-node-pqvnr\" (UID: \"3f9baa28-81bf-4803-89f2-f7d2376ef919\") " pod="calico-system/calico-node-pqvnr" May 17 00:09:34.707160 kubelet[2692]: I0517 00:09:34.705781 2692 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/3f9baa28-81bf-4803-89f2-f7d2376ef919-var-lib-calico\") pod \"calico-node-pqvnr\" (UID: 
\"3f9baa28-81bf-4803-89f2-f7d2376ef919\") " pod="calico-system/calico-node-pqvnr" May 17 00:09:34.707160 kubelet[2692]: I0517 00:09:34.705799 2692 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/3f9baa28-81bf-4803-89f2-f7d2376ef919-var-run-calico\") pod \"calico-node-pqvnr\" (UID: \"3f9baa28-81bf-4803-89f2-f7d2376ef919\") " pod="calico-system/calico-node-pqvnr" May 17 00:09:34.707160 kubelet[2692]: I0517 00:09:34.705815 2692 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3f9baa28-81bf-4803-89f2-f7d2376ef919-xtables-lock\") pod \"calico-node-pqvnr\" (UID: \"3f9baa28-81bf-4803-89f2-f7d2376ef919\") " pod="calico-system/calico-node-pqvnr" May 17 00:09:34.707160 kubelet[2692]: I0517 00:09:34.706417 2692 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/3f9baa28-81bf-4803-89f2-f7d2376ef919-cni-bin-dir\") pod \"calico-node-pqvnr\" (UID: \"3f9baa28-81bf-4803-89f2-f7d2376ef919\") " pod="calico-system/calico-node-pqvnr" May 17 00:09:34.707160 kubelet[2692]: I0517 00:09:34.706474 2692 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/3f9baa28-81bf-4803-89f2-f7d2376ef919-node-certs\") pod \"calico-node-pqvnr\" (UID: \"3f9baa28-81bf-4803-89f2-f7d2376ef919\") " pod="calico-system/calico-node-pqvnr" May 17 00:09:34.707374 kubelet[2692]: I0517 00:09:34.706526 2692 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3f9baa28-81bf-4803-89f2-f7d2376ef919-lib-modules\") pod \"calico-node-pqvnr\" (UID: \"3f9baa28-81bf-4803-89f2-f7d2376ef919\") " pod="calico-system/calico-node-pqvnr" May 17 00:09:34.707374 kubelet[2692]: I0517 00:09:34.706548 2692 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4p6d\" (UniqueName: \"kubernetes.io/projected/3f9baa28-81bf-4803-89f2-f7d2376ef919-kube-api-access-m4p6d\") pod \"calico-node-pqvnr\" (UID: \"3f9baa28-81bf-4803-89f2-f7d2376ef919\") " pod="calico-system/calico-node-pqvnr" May 17 00:09:34.774736 containerd[1482]: time="2025-05-17T00:09:34.774324864Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7cc9d6d8d-wqzrx,Uid:52ecb16a-d081-4af4-a6c5-f736a996b8f9,Namespace:calico-system,Attempt:0,} returns sandbox id \"a587793d957c86bb5efc490fe545d2220b2390e757a71bd37f828652f1118543\"" May 17 00:09:34.779297 containerd[1482]: time="2025-05-17T00:09:34.779251274Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.0\"" May 17 00:09:34.810219 kubelet[2692]: E0517 00:09:34.810175 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:09:34.810219 kubelet[2692]: W0517 00:09:34.810208 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:09:34.810545 kubelet[2692]: E0517 00:09:34.810240 2692 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:09:34.815006 kubelet[2692]: E0517 00:09:34.814295 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:09:34.815006 kubelet[2692]: W0517 00:09:34.814332 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:09:34.815006 kubelet[2692]: E0517 00:09:34.814357 2692 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:09:34.815575 kubelet[2692]: E0517 00:09:34.815424 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:09:34.815637 kubelet[2692]: W0517 00:09:34.815588 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:09:34.815637 kubelet[2692]: E0517 00:09:34.815616 2692 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:09:34.816972 kubelet[2692]: E0517 00:09:34.816859 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:09:34.816972 kubelet[2692]: W0517 00:09:34.816889 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:09:34.816972 kubelet[2692]: E0517 00:09:34.816911 2692 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:09:34.821166 kubelet[2692]: E0517 00:09:34.819205 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:09:34.821166 kubelet[2692]: W0517 00:09:34.819234 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:09:34.821166 kubelet[2692]: E0517 00:09:34.819257 2692 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:09:34.821166 kubelet[2692]: E0517 00:09:34.819662 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hhzhx" podUID="35348305-3030-4a37-b275-4a2201a5f384" May 17 00:09:34.821845 kubelet[2692]: E0517 00:09:34.821689 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:09:34.821978 kubelet[2692]: W0517 00:09:34.821959 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:09:34.822054 kubelet[2692]: E0517 00:09:34.822042 2692 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:09:34.852975 kubelet[2692]: E0517 00:09:34.852934 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:09:34.852975 kubelet[2692]: W0517 00:09:34.852964 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:09:34.852975 kubelet[2692]: E0517 00:09:34.852989 2692 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:09:34.906512 kubelet[2692]: E0517 00:09:34.906427 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:09:34.906512 kubelet[2692]: W0517 00:09:34.906499 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:09:34.906694 kubelet[2692]: E0517 00:09:34.906533 2692 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:09:34.906976 kubelet[2692]: E0517 00:09:34.906747 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:09:34.906976 kubelet[2692]: W0517 00:09:34.906773 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:09:34.906976 kubelet[2692]: E0517 00:09:34.906797 2692 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:09:34.907102 kubelet[2692]: E0517 00:09:34.907022 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:09:34.907102 kubelet[2692]: W0517 00:09:34.907033 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:09:34.907102 kubelet[2692]: E0517 00:09:34.907043 2692 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:09:34.907433 kubelet[2692]: E0517 00:09:34.907387 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:09:34.907433 kubelet[2692]: W0517 00:09:34.907409 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:09:34.907433 kubelet[2692]: E0517 00:09:34.907422 2692 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:09:34.909010 kubelet[2692]: E0517 00:09:34.908979 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:09:34.909010 kubelet[2692]: W0517 00:09:34.909000 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:09:34.909010 kubelet[2692]: E0517 00:09:34.909012 2692 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:09:34.909811 kubelet[2692]: E0517 00:09:34.909178 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:09:34.909811 kubelet[2692]: W0517 00:09:34.909191 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:09:34.909811 kubelet[2692]: E0517 00:09:34.909200 2692 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:09:34.910039 kubelet[2692]: E0517 00:09:34.910008 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:09:34.910039 kubelet[2692]: W0517 00:09:34.910026 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:09:34.910039 kubelet[2692]: E0517 00:09:34.910038 2692 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
May 17 00:09:34.910245 kubelet[2692]: E0517 00:09:34.910226 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 17 00:09:34.910245 kubelet[2692]: W0517 00:09:34.910240 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 17 00:09:34.910347 kubelet[2692]: E0517 00:09:34.910249 2692 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[the same E/W/E driver-call, FlexVolume, and plugin-probe triplet repeats essentially verbatim roughly fifty more times between 00:09:34.910425 and 00:09:35.042860; only the interleaved non-duplicate records are retained below]
May 17 00:09:34.913318 kubelet[2692]: I0517 00:09:34.913261 2692 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/35348305-3030-4a37-b275-4a2201a5f384-varrun\") pod \"csi-node-driver-hhzhx\" (UID: \"35348305-3030-4a37-b275-4a2201a5f384\") " pod="calico-system/csi-node-driver-hhzhx"
May 17 00:09:34.914964 kubelet[2692]: I0517 00:09:34.914911 2692 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rclgz\" (UniqueName: \"kubernetes.io/projected/35348305-3030-4a37-b275-4a2201a5f384-kube-api-access-rclgz\") pod \"csi-node-driver-hhzhx\" (UID: \"35348305-3030-4a37-b275-4a2201a5f384\") " pod="calico-system/csi-node-driver-hhzhx"
May 17 00:09:34.915366 kubelet[2692]: I0517 00:09:34.915310 2692 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/35348305-3030-4a37-b275-4a2201a5f384-registration-dir\") pod \"csi-node-driver-hhzhx\" (UID: \"35348305-3030-4a37-b275-4a2201a5f384\") " pod="calico-system/csi-node-driver-hhzhx"
May 17 00:09:34.916138 kubelet[2692]: I0517 00:09:34.916012 2692 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/35348305-3030-4a37-b275-4a2201a5f384-socket-dir\") pod \"csi-node-driver-hhzhx\" (UID: \"35348305-3030-4a37-b275-4a2201a5f384\") " pod="calico-system/csi-node-driver-hhzhx"
May 17 00:09:34.918231 kubelet[2692]: I0517 00:09:34.918207 2692 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/35348305-3030-4a37-b275-4a2201a5f384-kubelet-dir\") pod \"csi-node-driver-hhzhx\" (UID: \"35348305-3030-4a37-b275-4a2201a5f384\") " pod="calico-system/csi-node-driver-hhzhx"
May 17 00:09:35.140839 containerd[1482]: time="2025-05-17T00:09:35.138394169Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-pqvnr,Uid:3f9baa28-81bf-4803-89f2-f7d2376ef919,Namespace:calico-system,Attempt:0,}"
May 17 00:09:35.197160 containerd[1482]: time="2025-05-17T00:09:35.195260618Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 17 00:09:35.197160 containerd[1482]: time="2025-05-17T00:09:35.195325819Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 17 00:09:35.197160 containerd[1482]: time="2025-05-17T00:09:35.195341459Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:09:35.197160 containerd[1482]: time="2025-05-17T00:09:35.195429300Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:09:35.238092 systemd[1]: Started cri-containerd-70864ca4a6179868fe969caa30a5a5f3889043c8a4458ef7078a093436f1fcbb.scope - libcontainer container 70864ca4a6179868fe969caa30a5a5f3889043c8a4458ef7078a093436f1fcbb.
May 17 00:09:35.281050 containerd[1482]: time="2025-05-17T00:09:35.280680614Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-pqvnr,Uid:3f9baa28-81bf-4803-89f2-f7d2376ef919,Namespace:calico-system,Attempt:0,} returns sandbox id \"70864ca4a6179868fe969caa30a5a5f3889043c8a4458ef7078a093436f1fcbb\""
May 17 00:09:36.096701 kubelet[2692]: E0517 00:09:36.096616 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hhzhx" podUID="35348305-3030-4a37-b275-4a2201a5f384"
May 17 00:09:36.231242 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2053831072.mount: Deactivated successfully.
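The E/W/E triplet flooding the log above is kubelet's FlexVolume dynamic-plugin prober: it walks /opt/libexec/kubernetes/kubelet-plugins/volume/exec/, finds the nodeagent~uds directory, and invokes the uds driver with init before Calico has installed that binary. The exec fails, so the captured output is empty, and decoding empty bytes as JSON fails with the second message. A minimal Go sketch reproducing both error strings from standard-library behavior alone (this is not kubelet's source; the uds name is taken from the log):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	// Searching $PATH for a binary that does not exist reproduces the
	// "driver call failed" message seen in the log.
	if _, err := exec.LookPath("uds"); err != nil {
		fmt.Println(err) // exec: "uds": executable file not found in $PATH
	}

	// With no driver to run, the captured output is empty; decoding it
	// reproduces the "Failed to unmarshal output" message.
	var status struct {
		Status string `json:"status"`
	}
	if err := json.Unmarshal([]byte(""), &status); err != nil {
		fmt.Println(err) // unexpected end of JSON input
	}
}

Both messages disappear as soon as an executable exists at the probed path and prints a JSON status document.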
May 17 00:09:37.399444 containerd[1482]: time="2025-05-17T00:09:37.398972710Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:09:37.401317 containerd[1482]: time="2025-05-17T00:09:37.400997454Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.0: active requests=0, bytes read=33020269"
May 17 00:09:37.403244 containerd[1482]: time="2025-05-17T00:09:37.402948117Z" level=info msg="ImageCreate event name:\"sha256:05ca98cdd7b8267a0dc5550048c0a195c8d42f85d92f090a669493485d8a6beb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:09:37.406137 containerd[1482]: time="2025-05-17T00:09:37.406087594Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d282f6c773c4631b9dc8379eb093c54ca34c7728d55d6509cb45da5e1f5baf8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:09:37.407166 containerd[1482]: time="2025-05-17T00:09:37.407128966Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.0\" with image id \"sha256:05ca98cdd7b8267a0dc5550048c0a195c8d42f85d92f090a669493485d8a6beb\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d282f6c773c4631b9dc8379eb093c54ca34c7728d55d6509cb45da5e1f5baf8f\", size \"33020123\" in 2.627834171s"
May 17 00:09:37.407166 containerd[1482]: time="2025-05-17T00:09:37.407163486Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.0\" returns image reference \"sha256:05ca98cdd7b8267a0dc5550048c0a195c8d42f85d92f090a669493485d8a6beb\""
May 17 00:09:37.408597 containerd[1482]: time="2025-05-17T00:09:37.408566503Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\""
May 17 00:09:37.431495 containerd[1482]: time="2025-05-17T00:09:37.430371878Z" level=info msg="CreateContainer within sandbox \"a587793d957c86bb5efc490fe545d2220b2390e757a71bd37f828652f1118543\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
May 17 00:09:37.453574 containerd[1482]: time="2025-05-17T00:09:37.453419149Z" level=info msg="CreateContainer within sandbox \"a587793d957c86bb5efc490fe545d2220b2390e757a71bd37f828652f1118543\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"35beede3af26f15b45cdc4a5ff1876aac2e74389865a78e9f52cda38a0b9380c\""
May 17 00:09:37.455115 containerd[1482]: time="2025-05-17T00:09:37.455085768Z" level=info msg="StartContainer for \"35beede3af26f15b45cdc4a5ff1876aac2e74389865a78e9f52cda38a0b9380c\""
May 17 00:09:37.499111 systemd[1]: Started cri-containerd-35beede3af26f15b45cdc4a5ff1876aac2e74389865a78e9f52cda38a0b9380c.scope - libcontainer container 35beede3af26f15b45cdc4a5ff1876aac2e74389865a78e9f52cda38a0b9380c.
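For a rough sense of registry bandwidth in this boot: the typha pull above reports 33,020,269 bytes read in 2.627834171 s, i.e. 33,020,269 / 2.628 ≈ 12.6 MB/s (just under 12 MiB/s) from ghcr.io, while the much smaller pod2daemon-flexvol pull logged below (4,264,304 bytes in 1.344097353 s ≈ 3.2 MB/s) is likely dominated by per-request latency rather than throughput.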
May 17 00:09:37.551979 containerd[1482]: time="2025-05-17T00:09:37.551214335Z" level=info msg="StartContainer for \"35beede3af26f15b45cdc4a5ff1876aac2e74389865a78e9f52cda38a0b9380c\" returns successfully" May 17 00:09:38.097326 kubelet[2692]: E0517 00:09:38.097221 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hhzhx" podUID="35348305-3030-4a37-b275-4a2201a5f384" May 17 00:09:38.240026 kubelet[2692]: E0517 00:09:38.239409 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:09:38.240026 kubelet[2692]: W0517 00:09:38.239439 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:09:38.240026 kubelet[2692]: E0517 00:09:38.239518 2692 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:09:38.240026 kubelet[2692]: E0517 00:09:38.239861 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:09:38.240026 kubelet[2692]: W0517 00:09:38.239877 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:09:38.240026 kubelet[2692]: E0517 00:09:38.239891 2692 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:09:38.240899 kubelet[2692]: E0517 00:09:38.240142 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:09:38.240899 kubelet[2692]: W0517 00:09:38.240153 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:09:38.240899 kubelet[2692]: E0517 00:09:38.240183 2692 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:09:38.240899 kubelet[2692]: E0517 00:09:38.240557 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:09:38.240899 kubelet[2692]: W0517 00:09:38.240574 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:09:38.240899 kubelet[2692]: E0517 00:09:38.240597 2692 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:09:38.240899 kubelet[2692]: E0517 00:09:38.240887 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:09:38.240899 kubelet[2692]: W0517 00:09:38.240898 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:09:38.241095 kubelet[2692]: E0517 00:09:38.240912 2692 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:09:38.241485 kubelet[2692]: E0517 00:09:38.241200 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:09:38.241485 kubelet[2692]: W0517 00:09:38.241221 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:09:38.241485 kubelet[2692]: E0517 00:09:38.241251 2692 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:09:38.241485 kubelet[2692]: E0517 00:09:38.241480 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:09:38.241608 kubelet[2692]: W0517 00:09:38.241492 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:09:38.241608 kubelet[2692]: E0517 00:09:38.241508 2692 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:09:38.241899 kubelet[2692]: E0517 00:09:38.241671 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:09:38.241899 kubelet[2692]: W0517 00:09:38.241688 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:09:38.241899 kubelet[2692]: E0517 00:09:38.241702 2692 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:09:38.241989 kubelet[2692]: E0517 00:09:38.241920 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:09:38.241989 kubelet[2692]: W0517 00:09:38.241930 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:09:38.241989 kubelet[2692]: E0517 00:09:38.241942 2692 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:09:38.242365 kubelet[2692]: E0517 00:09:38.242164 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:09:38.242365 kubelet[2692]: W0517 00:09:38.242181 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:09:38.242365 kubelet[2692]: E0517 00:09:38.242193 2692 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:09:38.242492 kubelet[2692]: E0517 00:09:38.242397 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:09:38.242492 kubelet[2692]: W0517 00:09:38.242407 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:09:38.242492 kubelet[2692]: E0517 00:09:38.242417 2692 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:09:38.243831 kubelet[2692]: E0517 00:09:38.242842 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:09:38.243831 kubelet[2692]: W0517 00:09:38.242864 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:09:38.243831 kubelet[2692]: E0517 00:09:38.242878 2692 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:09:38.243831 kubelet[2692]: E0517 00:09:38.243077 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:09:38.243831 kubelet[2692]: W0517 00:09:38.243086 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:09:38.243831 kubelet[2692]: E0517 00:09:38.243095 2692 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:09:38.243831 kubelet[2692]: E0517 00:09:38.243240 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:09:38.243831 kubelet[2692]: W0517 00:09:38.243247 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:09:38.243831 kubelet[2692]: E0517 00:09:38.243255 2692 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:09:38.243831 kubelet[2692]: E0517 00:09:38.243372 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:09:38.244446 kubelet[2692]: W0517 00:09:38.243378 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:09:38.244446 kubelet[2692]: E0517 00:09:38.243386 2692 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:09:38.250817 kubelet[2692]: E0517 00:09:38.250784 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:09:38.250817 kubelet[2692]: W0517 00:09:38.250810 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:09:38.251038 kubelet[2692]: E0517 00:09:38.250831 2692 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:09:38.251218 kubelet[2692]: E0517 00:09:38.251202 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:09:38.251250 kubelet[2692]: W0517 00:09:38.251220 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:09:38.251333 kubelet[2692]: E0517 00:09:38.251321 2692 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:09:38.251600 kubelet[2692]: E0517 00:09:38.251587 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:09:38.251600 kubelet[2692]: W0517 00:09:38.251599 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:09:38.251682 kubelet[2692]: E0517 00:09:38.251623 2692 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:09:38.251913 kubelet[2692]: E0517 00:09:38.251884 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:09:38.251913 kubelet[2692]: W0517 00:09:38.251897 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:09:38.252032 kubelet[2692]: E0517 00:09:38.251921 2692 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:09:38.252222 kubelet[2692]: E0517 00:09:38.252210 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:09:38.252286 kubelet[2692]: W0517 00:09:38.252223 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:09:38.252329 kubelet[2692]: E0517 00:09:38.252311 2692 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:09:38.252437 kubelet[2692]: E0517 00:09:38.252426 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:09:38.252437 kubelet[2692]: W0517 00:09:38.252436 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:09:38.252656 kubelet[2692]: E0517 00:09:38.252540 2692 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:09:38.252656 kubelet[2692]: E0517 00:09:38.252645 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:09:38.252656 kubelet[2692]: W0517 00:09:38.252654 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:09:38.252755 kubelet[2692]: E0517 00:09:38.252730 2692 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:09:38.252858 kubelet[2692]: E0517 00:09:38.252845 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:09:38.252858 kubelet[2692]: W0517 00:09:38.252856 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:09:38.252931 kubelet[2692]: E0517 00:09:38.252869 2692 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:09:38.253072 kubelet[2692]: E0517 00:09:38.253043 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:09:38.253072 kubelet[2692]: W0517 00:09:38.253055 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:09:38.253126 kubelet[2692]: E0517 00:09:38.253073 2692 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:09:38.253320 kubelet[2692]: E0517 00:09:38.253305 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:09:38.253320 kubelet[2692]: W0517 00:09:38.253319 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:09:38.253392 kubelet[2692]: E0517 00:09:38.253331 2692 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:09:38.253626 kubelet[2692]: E0517 00:09:38.253611 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:09:38.253668 kubelet[2692]: W0517 00:09:38.253626 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:09:38.253668 kubelet[2692]: E0517 00:09:38.253641 2692 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:09:38.253855 kubelet[2692]: E0517 00:09:38.253844 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:09:38.253884 kubelet[2692]: W0517 00:09:38.253855 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:09:38.253884 kubelet[2692]: E0517 00:09:38.253874 2692 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:09:38.257485 kubelet[2692]: E0517 00:09:38.257437 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:09:38.257587 kubelet[2692]: W0517 00:09:38.257524 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:09:38.257587 kubelet[2692]: E0517 00:09:38.257549 2692 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:09:38.258227 kubelet[2692]: E0517 00:09:38.258203 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:09:38.258227 kubelet[2692]: W0517 00:09:38.258224 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:09:38.258428 kubelet[2692]: E0517 00:09:38.258243 2692 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:09:38.258603 kubelet[2692]: E0517 00:09:38.258591 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:09:38.258603 kubelet[2692]: W0517 00:09:38.258603 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:09:38.258740 kubelet[2692]: E0517 00:09:38.258679 2692 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:09:38.258840 kubelet[2692]: E0517 00:09:38.258831 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:09:38.258877 kubelet[2692]: W0517 00:09:38.258840 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:09:38.258877 kubelet[2692]: E0517 00:09:38.258852 2692 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:09:38.259049 kubelet[2692]: E0517 00:09:38.259035 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:09:38.259049 kubelet[2692]: W0517 00:09:38.259049 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:09:38.259114 kubelet[2692]: E0517 00:09:38.259059 2692 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:09:38.259355 kubelet[2692]: E0517 00:09:38.259343 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:09:38.259355 kubelet[2692]: W0517 00:09:38.259354 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:09:38.259412 kubelet[2692]: E0517 00:09:38.259363 2692 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:09:38.420368 systemd[1]: run-containerd-runc-k8s.io-35beede3af26f15b45cdc4a5ff1876aac2e74389865a78e9f52cda38a0b9380c-runc.ArC5wy.mount: Deactivated successfully. 
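The burst of near-identical kubelet errors above is the FlexVolume prober: on each filesystem event under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/, the kubelet execs every driver binary found there with the argument "init" and unmarshals its stdout as a JSON status object. The nodeagent~uds/uds binary does not exist yet (the flexvol-driver container that installs it only starts below), so the exec fails, stdout is empty, and unmarshalling an empty string produces exactly "unexpected end of JSON input". Below is a minimal sketch of the handshake the kubelet expects, standing in for the missing uds binary; the driverStatus struct here only loosely mirrors the kubelet's internal DriverStatus type and is not the real driver.

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// driverStatus loosely mirrors the JSON object the kubelet unmarshals
// from a FlexVolume driver call.
type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	if len(os.Args) < 2 {
		// No call given; an empty reply is exactly what the kubelet saw above.
		os.Exit(1)
	}
	switch os.Args[1] {
	case "init":
		// The minimal valid init reply: success, and no attach/detach support.
		out, _ := json.Marshal(driverStatus{
			Status:       "Success",
			Capabilities: map[string]bool{"attach": false},
		})
		fmt.Println(string(out))
	default:
		out, _ := json.Marshal(driverStatus{Status: "Not supported"})
		fmt.Println(string(out))
	}
}

Once the flexvol-driver container below copies the real binary into place, the same probe succeeds and this log spam stops.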
May 17 00:09:38.748119 containerd[1482]: time="2025-05-17T00:09:38.747986398Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:09:38.749179 containerd[1482]: time="2025-05-17T00:09:38.749124932Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0: active requests=0, bytes read=4264304" May 17 00:09:38.750786 containerd[1482]: time="2025-05-17T00:09:38.749984983Z" level=info msg="ImageCreate event name:\"sha256:080eaf4c238c85534b61055c31b109c96ce3d20075391e58988541a442c7c701\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:09:38.752216 containerd[1482]: time="2025-05-17T00:09:38.751882486Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:ce76dd87f11d3fd0054c35ad2e0e9f833748d007f77a9bfe859d0ddcb66fcb2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:09:38.752738 containerd[1482]: time="2025-05-17T00:09:38.752697456Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" with image id \"sha256:080eaf4c238c85534b61055c31b109c96ce3d20075391e58988541a442c7c701\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:ce76dd87f11d3fd0054c35ad2e0e9f833748d007f77a9bfe859d0ddcb66fcb2c\", size \"5633505\" in 1.344097353s" May 17 00:09:38.752738 containerd[1482]: time="2025-05-17T00:09:38.752735536Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" returns image reference \"sha256:080eaf4c238c85534b61055c31b109c96ce3d20075391e58988541a442c7c701\"" May 17 00:09:38.755335 containerd[1482]: time="2025-05-17T00:09:38.755289047Z" level=info msg="CreateContainer within sandbox \"70864ca4a6179868fe969caa30a5a5f3889043c8a4458ef7078a093436f1fcbb\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 17 00:09:38.772725 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount444236650.mount: Deactivated successfully. May 17 00:09:38.781182 containerd[1482]: time="2025-05-17T00:09:38.781124843Z" level=info msg="CreateContainer within sandbox \"70864ca4a6179868fe969caa30a5a5f3889043c8a4458ef7078a093436f1fcbb\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"8ed017d1b05e3adbb8d0edf8bf2beccef6f3452b393eb2d55c298c0aa3ea7ea1\"" May 17 00:09:38.787143 containerd[1482]: time="2025-05-17T00:09:38.782234496Z" level=info msg="StartContainer for \"8ed017d1b05e3adbb8d0edf8bf2beccef6f3452b393eb2d55c298c0aa3ea7ea1\"" May 17 00:09:38.827207 systemd[1]: Started cri-containerd-8ed017d1b05e3adbb8d0edf8bf2beccef6f3452b393eb2d55c298c0aa3ea7ea1.scope - libcontainer container 8ed017d1b05e3adbb8d0edf8bf2beccef6f3452b393eb2d55c298c0aa3ea7ea1. May 17 00:09:38.856165 containerd[1482]: time="2025-05-17T00:09:38.856075117Z" level=info msg="StartContainer for \"8ed017d1b05e3adbb8d0edf8bf2beccef6f3452b393eb2d55c298c0aa3ea7ea1\" returns successfully" May 17 00:09:38.872109 systemd[1]: cri-containerd-8ed017d1b05e3adbb8d0edf8bf2beccef6f3452b393eb2d55c298c0aa3ea7ea1.scope: Deactivated successfully. 
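For scale, the pull above moved roughly 4.3 MB in 1.344s and resolved the tag to a digest-addressed image reference. The following is a standalone sketch of the same pull through containerd's Go client; it is illustrative only, since the kubelet drives this path through the CRI API rather than this client, and it assumes the default socket path.

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI-managed images live in the "k8s.io" namespace, as the
	// runc mount units in the log above show.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	start := time.Now()
	img, err := client.Pull(ctx, "ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0",
		containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("pulled %s in %s\n", img.Name(), time.Since(start))
}

Run against this node's containerd, it would report the image immediately, since the CRI pull above already populated the k8s.io namespace.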
May 17 00:09:39.006278 containerd[1482]: time="2025-05-17T00:09:39.006069509Z" level=info msg="shim disconnected" id=8ed017d1b05e3adbb8d0edf8bf2beccef6f3452b393eb2d55c298c0aa3ea7ea1 namespace=k8s.io May 17 00:09:39.006278 containerd[1482]: time="2025-05-17T00:09:39.006158550Z" level=warning msg="cleaning up after shim disconnected" id=8ed017d1b05e3adbb8d0edf8bf2beccef6f3452b393eb2d55c298c0aa3ea7ea1 namespace=k8s.io May 17 00:09:39.006278 containerd[1482]: time="2025-05-17T00:09:39.006168710Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 17 00:09:39.241174 kubelet[2692]: I0517 00:09:39.239931 2692 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 00:09:39.242958 containerd[1482]: time="2025-05-17T00:09:39.242913347Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.0\"" May 17 00:09:39.266529 kubelet[2692]: I0517 00:09:39.264683 2692 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7cc9d6d8d-wqzrx" podStartSLOduration=2.634720588 podStartE2EDuration="5.264663422s" podCreationTimestamp="2025-05-17 00:09:34 +0000 UTC" firstStartedPulling="2025-05-17 00:09:34.778486467 +0000 UTC m=+23.824685870" lastFinishedPulling="2025-05-17 00:09:37.408429341 +0000 UTC m=+26.454628704" observedRunningTime="2025-05-17 00:09:38.258137543 +0000 UTC m=+27.304336866" watchObservedRunningTime="2025-05-17 00:09:39.264663422 +0000 UTC m=+28.310862785" May 17 00:09:39.419429 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8ed017d1b05e3adbb8d0edf8bf2beccef6f3452b393eb2d55c298c0aa3ea7ea1-rootfs.mount: Deactivated successfully. May 17 00:09:40.097461 kubelet[2692]: E0517 00:09:40.097054 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hhzhx" podUID="35348305-3030-4a37-b275-4a2201a5f384" May 17 00:09:42.096833 kubelet[2692]: E0517 00:09:42.096781 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hhzhx" podUID="35348305-3030-4a37-b275-4a2201a5f384" May 17 00:09:42.787729 containerd[1482]: time="2025-05-17T00:09:42.787681771Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:09:42.789036 containerd[1482]: time="2025-05-17T00:09:42.788400821Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.0: active requests=0, bytes read=65748976" May 17 00:09:42.789270 containerd[1482]: time="2025-05-17T00:09:42.789241473Z" level=info msg="ImageCreate event name:\"sha256:0a1b3d5412de2974bc057a3463a132f935c307bc06d5b990ad54031e1f5a351d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:09:42.791461 containerd[1482]: time="2025-05-17T00:09:42.791415343Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:3dd06656abdc03fbd51782d5f6fe4d70e6825a1c0c5bce2a165bbd2ff9e0f7df\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:09:42.792442 containerd[1482]: time="2025-05-17T00:09:42.792404397Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.0\" with image id 
\"sha256:0a1b3d5412de2974bc057a3463a132f935c307bc06d5b990ad54031e1f5a351d\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:3dd06656abdc03fbd51782d5f6fe4d70e6825a1c0c5bce2a165bbd2ff9e0f7df\", size \"67118217\" in 3.549444129s" May 17 00:09:42.792442 containerd[1482]: time="2025-05-17T00:09:42.792439478Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.0\" returns image reference \"sha256:0a1b3d5412de2974bc057a3463a132f935c307bc06d5b990ad54031e1f5a351d\"" May 17 00:09:42.795950 containerd[1482]: time="2025-05-17T00:09:42.795910806Z" level=info msg="CreateContainer within sandbox \"70864ca4a6179868fe969caa30a5a5f3889043c8a4458ef7078a093436f1fcbb\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 17 00:09:42.815434 containerd[1482]: time="2025-05-17T00:09:42.815383478Z" level=info msg="CreateContainer within sandbox \"70864ca4a6179868fe969caa30a5a5f3889043c8a4458ef7078a093436f1fcbb\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"9778e40a9fa7ca1954ee32d17c4f1d15d1b3cc2a9fa41193d0a92ce86b4d2efd\"" May 17 00:09:42.817153 containerd[1482]: time="2025-05-17T00:09:42.816180609Z" level=info msg="StartContainer for \"9778e40a9fa7ca1954ee32d17c4f1d15d1b3cc2a9fa41193d0a92ce86b4d2efd\"" May 17 00:09:42.857094 systemd[1]: Started cri-containerd-9778e40a9fa7ca1954ee32d17c4f1d15d1b3cc2a9fa41193d0a92ce86b4d2efd.scope - libcontainer container 9778e40a9fa7ca1954ee32d17c4f1d15d1b3cc2a9fa41193d0a92ce86b4d2efd. May 17 00:09:42.894922 containerd[1482]: time="2025-05-17T00:09:42.894110697Z" level=info msg="StartContainer for \"9778e40a9fa7ca1954ee32d17c4f1d15d1b3cc2a9fa41193d0a92ce86b4d2efd\" returns successfully" May 17 00:09:43.411618 containerd[1482]: time="2025-05-17T00:09:43.411498927Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 17 00:09:43.415039 systemd[1]: cri-containerd-9778e40a9fa7ca1954ee32d17c4f1d15d1b3cc2a9fa41193d0a92ce86b4d2efd.scope: Deactivated successfully. May 17 00:09:43.441441 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9778e40a9fa7ca1954ee32d17c4f1d15d1b3cc2a9fa41193d0a92ce86b4d2efd-rootfs.mount: Deactivated successfully. May 17 00:09:43.497919 kubelet[2692]: I0517 00:09:43.496777 2692 kubelet_node_status.go:488] "Fast updating node status as it just became ready" May 17 00:09:43.541889 containerd[1482]: time="2025-05-17T00:09:43.541796799Z" level=info msg="shim disconnected" id=9778e40a9fa7ca1954ee32d17c4f1d15d1b3cc2a9fa41193d0a92ce86b4d2efd namespace=k8s.io May 17 00:09:43.541889 containerd[1482]: time="2025-05-17T00:09:43.541878200Z" level=warning msg="cleaning up after shim disconnected" id=9778e40a9fa7ca1954ee32d17c4f1d15d1b3cc2a9fa41193d0a92ce86b4d2efd namespace=k8s.io May 17 00:09:43.541889 containerd[1482]: time="2025-05-17T00:09:43.541887120Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 17 00:09:43.552922 systemd[1]: Created slice kubepods-burstable-podfd2d0fe1_ef42_47d9_a89f_b98fdf94081f.slice - libcontainer container kubepods-burstable-podfd2d0fe1_ef42_47d9_a89f_b98fdf94081f.slice. May 17 00:09:43.578268 systemd[1]: Created slice kubepods-burstable-pod65842dcb_5f73_429f_9e8d_0092e98ecc3e.slice - libcontainer container kubepods-burstable-pod65842dcb_5f73_429f_9e8d_0092e98ecc3e.slice. 
May 17 00:09:43.589640 containerd[1482]: time="2025-05-17T00:09:43.589128639Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:09:43Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io May 17 00:09:43.602354 systemd[1]: Created slice kubepods-besteffort-podb9fba4a9_b127_4dfe_8a91_d1118962d299.slice - libcontainer container kubepods-besteffort-podb9fba4a9_b127_4dfe_8a91_d1118962d299.slice. May 17 00:09:43.608930 systemd[1]: Created slice kubepods-besteffort-pod5f0c7770_d646_424e_9f31_ff80f202f1a8.slice - libcontainer container kubepods-besteffort-pod5f0c7770_d646_424e_9f31_ff80f202f1a8.slice. May 17 00:09:43.622667 systemd[1]: Created slice kubepods-besteffort-pod96303885_efee_4645_a557_34a808ca80dd.slice - libcontainer container kubepods-besteffort-pod96303885_efee_4645_a557_34a808ca80dd.slice. May 17 00:09:43.632324 systemd[1]: Created slice kubepods-besteffort-podc2a53ade_ef79_46ee_b2b9_636e3cc942be.slice - libcontainer container kubepods-besteffort-podc2a53ade_ef79_46ee_b2b9_636e3cc942be.slice. May 17 00:09:43.642978 systemd[1]: Created slice kubepods-besteffort-pod93f8cf49_d477_4a58_9b7d_7f4a90b44572.slice - libcontainer container kubepods-besteffort-pod93f8cf49_d477_4a58_9b7d_7f4a90b44572.slice. May 17 00:09:43.687904 kubelet[2692]: I0517 00:09:43.686676 2692 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4md4\" (UniqueName: \"kubernetes.io/projected/5f0c7770-d646-424e-9f31-ff80f202f1a8-kube-api-access-b4md4\") pod \"calico-apiserver-75d5dfcff5-cq6gh\" (UID: \"5f0c7770-d646-424e-9f31-ff80f202f1a8\") " pod="calico-apiserver/calico-apiserver-75d5dfcff5-cq6gh" May 17 00:09:43.687904 kubelet[2692]: I0517 00:09:43.686736 2692 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/93f8cf49-d477-4a58-9b7d-7f4a90b44572-whisker-backend-key-pair\") pod \"whisker-7bc8db98b9-gxkgt\" (UID: \"93f8cf49-d477-4a58-9b7d-7f4a90b44572\") " pod="calico-system/whisker-7bc8db98b9-gxkgt" May 17 00:09:43.687904 kubelet[2692]: I0517 00:09:43.686786 2692 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/65842dcb-5f73-429f-9e8d-0092e98ecc3e-config-volume\") pod \"coredns-7c65d6cfc9-cvkpr\" (UID: \"65842dcb-5f73-429f-9e8d-0092e98ecc3e\") " pod="kube-system/coredns-7c65d6cfc9-cvkpr" May 17 00:09:43.687904 kubelet[2692]: I0517 00:09:43.686848 2692 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8g4wd\" (UniqueName: \"kubernetes.io/projected/65842dcb-5f73-429f-9e8d-0092e98ecc3e-kube-api-access-8g4wd\") pod \"coredns-7c65d6cfc9-cvkpr\" (UID: \"65842dcb-5f73-429f-9e8d-0092e98ecc3e\") " pod="kube-system/coredns-7c65d6cfc9-cvkpr" May 17 00:09:43.687904 kubelet[2692]: I0517 00:09:43.686874 2692 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klg2x\" (UniqueName: \"kubernetes.io/projected/b9fba4a9-b127-4dfe-8a91-d1118962d299-kube-api-access-klg2x\") pod \"calico-kube-controllers-b7fc8b74f-tr4ng\" (UID: \"b9fba4a9-b127-4dfe-8a91-d1118962d299\") " pod="calico-system/calico-kube-controllers-b7fc8b74f-tr4ng" May 17 00:09:43.688137 kubelet[2692]: I0517 00:09:43.686900 2692 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fd2d0fe1-ef42-47d9-a89f-b98fdf94081f-config-volume\") pod \"coredns-7c65d6cfc9-r5wvd\" (UID: \"fd2d0fe1-ef42-47d9-a89f-b98fdf94081f\") " pod="kube-system/coredns-7c65d6cfc9-r5wvd" May 17 00:09:43.688137 kubelet[2692]: I0517 00:09:43.686929 2692 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2a53ade-ef79-46ee-b2b9-636e3cc942be-config\") pod \"goldmane-8f77d7b6c-522bq\" (UID: \"c2a53ade-ef79-46ee-b2b9-636e3cc942be\") " pod="calico-system/goldmane-8f77d7b6c-522bq" May 17 00:09:43.688137 kubelet[2692]: I0517 00:09:43.686951 2692 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/c2a53ade-ef79-46ee-b2b9-636e3cc942be-goldmane-key-pair\") pod \"goldmane-8f77d7b6c-522bq\" (UID: \"c2a53ade-ef79-46ee-b2b9-636e3cc942be\") " pod="calico-system/goldmane-8f77d7b6c-522bq" May 17 00:09:43.688137 kubelet[2692]: I0517 00:09:43.686974 2692 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b9fba4a9-b127-4dfe-8a91-d1118962d299-tigera-ca-bundle\") pod \"calico-kube-controllers-b7fc8b74f-tr4ng\" (UID: \"b9fba4a9-b127-4dfe-8a91-d1118962d299\") " pod="calico-system/calico-kube-controllers-b7fc8b74f-tr4ng" May 17 00:09:43.688137 kubelet[2692]: I0517 00:09:43.686998 2692 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/5f0c7770-d646-424e-9f31-ff80f202f1a8-calico-apiserver-certs\") pod \"calico-apiserver-75d5dfcff5-cq6gh\" (UID: \"5f0c7770-d646-424e-9f31-ff80f202f1a8\") " pod="calico-apiserver/calico-apiserver-75d5dfcff5-cq6gh" May 17 00:09:43.688260 kubelet[2692]: I0517 00:09:43.687023 2692 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpdsq\" (UniqueName: \"kubernetes.io/projected/fd2d0fe1-ef42-47d9-a89f-b98fdf94081f-kube-api-access-dpdsq\") pod \"coredns-7c65d6cfc9-r5wvd\" (UID: \"fd2d0fe1-ef42-47d9-a89f-b98fdf94081f\") " pod="kube-system/coredns-7c65d6cfc9-r5wvd" May 17 00:09:43.688260 kubelet[2692]: I0517 00:09:43.687063 2692 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c2a53ade-ef79-46ee-b2b9-636e3cc942be-goldmane-ca-bundle\") pod \"goldmane-8f77d7b6c-522bq\" (UID: \"c2a53ade-ef79-46ee-b2b9-636e3cc942be\") " pod="calico-system/goldmane-8f77d7b6c-522bq" May 17 00:09:43.688260 kubelet[2692]: I0517 00:09:43.687095 2692 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttgj2\" (UniqueName: \"kubernetes.io/projected/c2a53ade-ef79-46ee-b2b9-636e3cc942be-kube-api-access-ttgj2\") pod \"goldmane-8f77d7b6c-522bq\" (UID: \"c2a53ade-ef79-46ee-b2b9-636e3cc942be\") " pod="calico-system/goldmane-8f77d7b6c-522bq" May 17 00:09:43.688260 kubelet[2692]: I0517 00:09:43.687119 2692 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jp5l5\" (UniqueName: \"kubernetes.io/projected/93f8cf49-d477-4a58-9b7d-7f4a90b44572-kube-api-access-jp5l5\") pod \"whisker-7bc8db98b9-gxkgt\" (UID: 
\"93f8cf49-d477-4a58-9b7d-7f4a90b44572\") " pod="calico-system/whisker-7bc8db98b9-gxkgt" May 17 00:09:43.688260 kubelet[2692]: I0517 00:09:43.687160 2692 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/96303885-efee-4645-a557-34a808ca80dd-calico-apiserver-certs\") pod \"calico-apiserver-75d5dfcff5-dhnhf\" (UID: \"96303885-efee-4645-a557-34a808ca80dd\") " pod="calico-apiserver/calico-apiserver-75d5dfcff5-dhnhf" May 17 00:09:43.688383 kubelet[2692]: I0517 00:09:43.687186 2692 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6hb4\" (UniqueName: \"kubernetes.io/projected/96303885-efee-4645-a557-34a808ca80dd-kube-api-access-x6hb4\") pod \"calico-apiserver-75d5dfcff5-dhnhf\" (UID: \"96303885-efee-4645-a557-34a808ca80dd\") " pod="calico-apiserver/calico-apiserver-75d5dfcff5-dhnhf" May 17 00:09:43.688383 kubelet[2692]: I0517 00:09:43.687208 2692 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/93f8cf49-d477-4a58-9b7d-7f4a90b44572-whisker-ca-bundle\") pod \"whisker-7bc8db98b9-gxkgt\" (UID: \"93f8cf49-d477-4a58-9b7d-7f4a90b44572\") " pod="calico-system/whisker-7bc8db98b9-gxkgt" May 17 00:09:43.870283 containerd[1482]: time="2025-05-17T00:09:43.870119315Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-r5wvd,Uid:fd2d0fe1-ef42-47d9-a89f-b98fdf94081f,Namespace:kube-system,Attempt:0,}" May 17 00:09:43.886941 containerd[1482]: time="2025-05-17T00:09:43.886557032Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-cvkpr,Uid:65842dcb-5f73-429f-9e8d-0092e98ecc3e,Namespace:kube-system,Attempt:0,}" May 17 00:09:43.916101 containerd[1482]: time="2025-05-17T00:09:43.916060936Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-b7fc8b74f-tr4ng,Uid:b9fba4a9-b127-4dfe-8a91-d1118962d299,Namespace:calico-system,Attempt:0,}" May 17 00:09:43.918077 containerd[1482]: time="2025-05-17T00:09:43.917960083Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-75d5dfcff5-cq6gh,Uid:5f0c7770-d646-424e-9f31-ff80f202f1a8,Namespace:calico-apiserver,Attempt:0,}" May 17 00:09:43.929050 containerd[1482]: time="2025-05-17T00:09:43.928782878Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-75d5dfcff5-dhnhf,Uid:96303885-efee-4645-a557-34a808ca80dd,Namespace:calico-apiserver,Attempt:0,}" May 17 00:09:43.938963 containerd[1482]: time="2025-05-17T00:09:43.938851623Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-8f77d7b6c-522bq,Uid:c2a53ade-ef79-46ee-b2b9-636e3cc942be,Namespace:calico-system,Attempt:0,}" May 17 00:09:43.947495 containerd[1482]: time="2025-05-17T00:09:43.947359385Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7bc8db98b9-gxkgt,Uid:93f8cf49-d477-4a58-9b7d-7f4a90b44572,Namespace:calico-system,Attempt:0,}" May 17 00:09:43.982269 containerd[1482]: time="2025-05-17T00:09:43.981973602Z" level=error msg="Failed to destroy network for sandbox \"8cef55272004fa7f71d47a60184f91417c07e611093881b643bc5e0cdce08666\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:09:43.982742 containerd[1482]: 
time="2025-05-17T00:09:43.982710613Z" level=error msg="encountered an error cleaning up failed sandbox \"8cef55272004fa7f71d47a60184f91417c07e611093881b643bc5e0cdce08666\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:09:43.983263 containerd[1482]: time="2025-05-17T00:09:43.983232901Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-r5wvd,Uid:fd2d0fe1-ef42-47d9-a89f-b98fdf94081f,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8cef55272004fa7f71d47a60184f91417c07e611093881b643bc5e0cdce08666\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:09:43.984480 kubelet[2692]: E0517 00:09:43.984432 2692 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8cef55272004fa7f71d47a60184f91417c07e611093881b643bc5e0cdce08666\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:09:43.984589 kubelet[2692]: E0517 00:09:43.984526 2692 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8cef55272004fa7f71d47a60184f91417c07e611093881b643bc5e0cdce08666\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-r5wvd" May 17 00:09:43.984589 kubelet[2692]: E0517 00:09:43.984546 2692 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8cef55272004fa7f71d47a60184f91417c07e611093881b643bc5e0cdce08666\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-r5wvd" May 17 00:09:43.986365 kubelet[2692]: E0517 00:09:43.986285 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-r5wvd_kube-system(fd2d0fe1-ef42-47d9-a89f-b98fdf94081f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-r5wvd_kube-system(fd2d0fe1-ef42-47d9-a89f-b98fdf94081f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8cef55272004fa7f71d47a60184f91417c07e611093881b643bc5e0cdce08666\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-r5wvd" podUID="fd2d0fe1-ef42-47d9-a89f-b98fdf94081f" May 17 00:09:44.041779 containerd[1482]: time="2025-05-17T00:09:44.041534354Z" level=error msg="Failed to destroy network for sandbox \"5ddd748a415ca45be339a78cd8f1b01d3c170c3abe79f132299b15b7b570db0c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" May 17 00:09:44.042439 containerd[1482]: time="2025-05-17T00:09:44.041929760Z" level=error msg="encountered an error cleaning up failed sandbox \"5ddd748a415ca45be339a78cd8f1b01d3c170c3abe79f132299b15b7b570db0c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:09:44.042439 containerd[1482]: time="2025-05-17T00:09:44.041991521Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-cvkpr,Uid:65842dcb-5f73-429f-9e8d-0092e98ecc3e,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5ddd748a415ca45be339a78cd8f1b01d3c170c3abe79f132299b15b7b570db0c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:09:44.042555 kubelet[2692]: E0517 00:09:44.042311 2692 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5ddd748a415ca45be339a78cd8f1b01d3c170c3abe79f132299b15b7b570db0c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:09:44.042555 kubelet[2692]: E0517 00:09:44.042373 2692 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5ddd748a415ca45be339a78cd8f1b01d3c170c3abe79f132299b15b7b570db0c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-cvkpr" May 17 00:09:44.042555 kubelet[2692]: E0517 00:09:44.042394 2692 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5ddd748a415ca45be339a78cd8f1b01d3c170c3abe79f132299b15b7b570db0c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-cvkpr" May 17 00:09:44.042674 kubelet[2692]: E0517 00:09:44.042438 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-cvkpr_kube-system(65842dcb-5f73-429f-9e8d-0092e98ecc3e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-cvkpr_kube-system(65842dcb-5f73-429f-9e8d-0092e98ecc3e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5ddd748a415ca45be339a78cd8f1b01d3c170c3abe79f132299b15b7b570db0c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-cvkpr" podUID="65842dcb-5f73-429f-9e8d-0092e98ecc3e" May 17 00:09:44.105699 systemd[1]: Created slice kubepods-besteffort-pod35348305_3030_4a37_b275_4a2201a5f384.slice - libcontainer container kubepods-besteffort-pod35348305_3030_4a37_b275_4a2201a5f384.slice. 
May 17 00:09:44.109668 containerd[1482]: time="2025-05-17T00:09:44.109607719Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hhzhx,Uid:35348305-3030-4a37-b275-4a2201a5f384,Namespace:calico-system,Attempt:0,}" May 17 00:09:44.122283 containerd[1482]: time="2025-05-17T00:09:44.122126384Z" level=error msg="Failed to destroy network for sandbox \"e4f2591371d3edd791256b5ae2cc9735ba80f85716d9d7c0b9c83f58f663d830\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:09:44.123056 containerd[1482]: time="2025-05-17T00:09:44.123007797Z" level=error msg="encountered an error cleaning up failed sandbox \"e4f2591371d3edd791256b5ae2cc9735ba80f85716d9d7c0b9c83f58f663d830\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:09:44.123144 containerd[1482]: time="2025-05-17T00:09:44.123076278Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-b7fc8b74f-tr4ng,Uid:b9fba4a9-b127-4dfe-8a91-d1118962d299,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e4f2591371d3edd791256b5ae2cc9735ba80f85716d9d7c0b9c83f58f663d830\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:09:44.123582 kubelet[2692]: E0517 00:09:44.123380 2692 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4f2591371d3edd791256b5ae2cc9735ba80f85716d9d7c0b9c83f58f663d830\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:09:44.123582 kubelet[2692]: E0517 00:09:44.123445 2692 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4f2591371d3edd791256b5ae2cc9735ba80f85716d9d7c0b9c83f58f663d830\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-b7fc8b74f-tr4ng" May 17 00:09:44.123582 kubelet[2692]: E0517 00:09:44.123463 2692 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4f2591371d3edd791256b5ae2cc9735ba80f85716d9d7c0b9c83f58f663d830\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-b7fc8b74f-tr4ng" May 17 00:09:44.124532 kubelet[2692]: E0517 00:09:44.123504 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-b7fc8b74f-tr4ng_calico-system(b9fba4a9-b127-4dfe-8a91-d1118962d299)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-b7fc8b74f-tr4ng_calico-system(b9fba4a9-b127-4dfe-8a91-d1118962d299)\\\": rpc error: code = Unknown desc = failed to 
setup network for sandbox \\\"e4f2591371d3edd791256b5ae2cc9735ba80f85716d9d7c0b9c83f58f663d830\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-b7fc8b74f-tr4ng" podUID="b9fba4a9-b127-4dfe-8a91-d1118962d299" May 17 00:09:44.153820 containerd[1482]: time="2025-05-17T00:09:44.153703410Z" level=error msg="Failed to destroy network for sandbox \"03a13ee4d5219eab1d198a40172c1940ee455fea57f73e0208bcffd2c81141f1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:09:44.154354 containerd[1482]: time="2025-05-17T00:09:44.154311179Z" level=error msg="encountered an error cleaning up failed sandbox \"03a13ee4d5219eab1d198a40172c1940ee455fea57f73e0208bcffd2c81141f1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:09:44.154422 containerd[1482]: time="2025-05-17T00:09:44.154385460Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7bc8db98b9-gxkgt,Uid:93f8cf49-d477-4a58-9b7d-7f4a90b44572,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"03a13ee4d5219eab1d198a40172c1940ee455fea57f73e0208bcffd2c81141f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:09:44.154680 kubelet[2692]: E0517 00:09:44.154637 2692 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"03a13ee4d5219eab1d198a40172c1940ee455fea57f73e0208bcffd2c81141f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:09:44.154831 kubelet[2692]: E0517 00:09:44.154706 2692 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"03a13ee4d5219eab1d198a40172c1940ee455fea57f73e0208bcffd2c81141f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7bc8db98b9-gxkgt" May 17 00:09:44.154831 kubelet[2692]: E0517 00:09:44.154736 2692 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"03a13ee4d5219eab1d198a40172c1940ee455fea57f73e0208bcffd2c81141f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7bc8db98b9-gxkgt" May 17 00:09:44.155014 kubelet[2692]: E0517 00:09:44.154976 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7bc8db98b9-gxkgt_calico-system(93f8cf49-d477-4a58-9b7d-7f4a90b44572)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"whisker-7bc8db98b9-gxkgt_calico-system(93f8cf49-d477-4a58-9b7d-7f4a90b44572)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"03a13ee4d5219eab1d198a40172c1940ee455fea57f73e0208bcffd2c81141f1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7bc8db98b9-gxkgt" podUID="93f8cf49-d477-4a58-9b7d-7f4a90b44572" May 17 00:09:44.163752 containerd[1482]: time="2025-05-17T00:09:44.163696997Z" level=error msg="Failed to destroy network for sandbox \"2c5263c67206c435400709369a39654c12ff69c0752848125b4d962af31ac7e8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:09:44.166025 containerd[1482]: time="2025-05-17T00:09:44.165814749Z" level=error msg="encountered an error cleaning up failed sandbox \"2c5263c67206c435400709369a39654c12ff69c0752848125b4d962af31ac7e8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:09:44.166025 containerd[1482]: time="2025-05-17T00:09:44.165921390Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-75d5dfcff5-cq6gh,Uid:5f0c7770-d646-424e-9f31-ff80f202f1a8,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2c5263c67206c435400709369a39654c12ff69c0752848125b4d962af31ac7e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:09:44.166799 kubelet[2692]: E0517 00:09:44.166185 2692 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c5263c67206c435400709369a39654c12ff69c0752848125b4d962af31ac7e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:09:44.166799 kubelet[2692]: E0517 00:09:44.166272 2692 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c5263c67206c435400709369a39654c12ff69c0752848125b4d962af31ac7e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-75d5dfcff5-cq6gh" May 17 00:09:44.166799 kubelet[2692]: E0517 00:09:44.166321 2692 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c5263c67206c435400709369a39654c12ff69c0752848125b4d962af31ac7e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-75d5dfcff5-cq6gh" May 17 00:09:44.168035 kubelet[2692]: E0517 00:09:44.166396 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-apiserver-75d5dfcff5-cq6gh_calico-apiserver(5f0c7770-d646-424e-9f31-ff80f202f1a8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-75d5dfcff5-cq6gh_calico-apiserver(5f0c7770-d646-424e-9f31-ff80f202f1a8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2c5263c67206c435400709369a39654c12ff69c0752848125b4d962af31ac7e8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-75d5dfcff5-cq6gh" podUID="5f0c7770-d646-424e-9f31-ff80f202f1a8" May 17 00:09:44.174774 containerd[1482]: time="2025-05-17T00:09:44.174713840Z" level=error msg="Failed to destroy network for sandbox \"1470589ad3d85bed50809e5c1d62ce47ce50fd9f34811031c6e14f5e6f9ac753\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:09:44.176188 containerd[1482]: time="2025-05-17T00:09:44.176141541Z" level=error msg="encountered an error cleaning up failed sandbox \"1470589ad3d85bed50809e5c1d62ce47ce50fd9f34811031c6e14f5e6f9ac753\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:09:44.176402 containerd[1482]: time="2025-05-17T00:09:44.176376185Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-75d5dfcff5-dhnhf,Uid:96303885-efee-4645-a557-34a808ca80dd,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1470589ad3d85bed50809e5c1d62ce47ce50fd9f34811031c6e14f5e6f9ac753\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:09:44.176868 kubelet[2692]: E0517 00:09:44.176827 2692 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1470589ad3d85bed50809e5c1d62ce47ce50fd9f34811031c6e14f5e6f9ac753\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:09:44.176946 kubelet[2692]: E0517 00:09:44.176903 2692 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1470589ad3d85bed50809e5c1d62ce47ce50fd9f34811031c6e14f5e6f9ac753\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-75d5dfcff5-dhnhf" May 17 00:09:44.176946 kubelet[2692]: E0517 00:09:44.176924 2692 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1470589ad3d85bed50809e5c1d62ce47ce50fd9f34811031c6e14f5e6f9ac753\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-75d5dfcff5-dhnhf" May 17 00:09:44.177275 
kubelet[2692]: E0517 00:09:44.177233 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-75d5dfcff5-dhnhf_calico-apiserver(96303885-efee-4645-a557-34a808ca80dd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-75d5dfcff5-dhnhf_calico-apiserver(96303885-efee-4645-a557-34a808ca80dd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1470589ad3d85bed50809e5c1d62ce47ce50fd9f34811031c6e14f5e6f9ac753\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-75d5dfcff5-dhnhf" podUID="96303885-efee-4645-a557-34a808ca80dd" May 17 00:09:44.190356 containerd[1482]: time="2025-05-17T00:09:44.189810343Z" level=error msg="Failed to destroy network for sandbox \"5dca6998a34eaafa08777f49af3a784cbe29e9630b06701ab3ec0f1bc4dbc6af\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:09:44.191779 containerd[1482]: time="2025-05-17T00:09:44.191654570Z" level=error msg="encountered an error cleaning up failed sandbox \"5dca6998a34eaafa08777f49af3a784cbe29e9630b06701ab3ec0f1bc4dbc6af\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:09:44.192047 containerd[1482]: time="2025-05-17T00:09:44.191741971Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-8f77d7b6c-522bq,Uid:c2a53ade-ef79-46ee-b2b9-636e3cc942be,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5dca6998a34eaafa08777f49af3a784cbe29e9630b06701ab3ec0f1bc4dbc6af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:09:44.192108 kubelet[2692]: E0517 00:09:44.192031 2692 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5dca6998a34eaafa08777f49af3a784cbe29e9630b06701ab3ec0f1bc4dbc6af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:09:44.192108 kubelet[2692]: E0517 00:09:44.192090 2692 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5dca6998a34eaafa08777f49af3a784cbe29e9630b06701ab3ec0f1bc4dbc6af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-8f77d7b6c-522bq" May 17 00:09:44.193033 kubelet[2692]: E0517 00:09:44.192107 2692 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5dca6998a34eaafa08777f49af3a784cbe29e9630b06701ab3ec0f1bc4dbc6af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" pod="calico-system/goldmane-8f77d7b6c-522bq" May 17 00:09:44.193033 kubelet[2692]: E0517 00:09:44.192160 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-8f77d7b6c-522bq_calico-system(c2a53ade-ef79-46ee-b2b9-636e3cc942be)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-8f77d7b6c-522bq_calico-system(c2a53ade-ef79-46ee-b2b9-636e3cc942be)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5dca6998a34eaafa08777f49af3a784cbe29e9630b06701ab3ec0f1bc4dbc6af\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-8f77d7b6c-522bq" podUID="c2a53ade-ef79-46ee-b2b9-636e3cc942be" May 17 00:09:44.232244 containerd[1482]: time="2025-05-17T00:09:44.232178968Z" level=error msg="Failed to destroy network for sandbox \"b8d19c2f1a2287dae8c07d25aeda4f0e4d19383d3ae94d2775f9fcd14dbc2f3b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:09:44.233071 containerd[1482]: time="2025-05-17T00:09:44.232875779Z" level=error msg="encountered an error cleaning up failed sandbox \"b8d19c2f1a2287dae8c07d25aeda4f0e4d19383d3ae94d2775f9fcd14dbc2f3b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:09:44.233071 containerd[1482]: time="2025-05-17T00:09:44.232960580Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hhzhx,Uid:35348305-3030-4a37-b275-4a2201a5f384,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b8d19c2f1a2287dae8c07d25aeda4f0e4d19383d3ae94d2775f9fcd14dbc2f3b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:09:44.233324 kubelet[2692]: E0517 00:09:44.233253 2692 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8d19c2f1a2287dae8c07d25aeda4f0e4d19383d3ae94d2775f9fcd14dbc2f3b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:09:44.233384 kubelet[2692]: E0517 00:09:44.233337 2692 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8d19c2f1a2287dae8c07d25aeda4f0e4d19383d3ae94d2775f9fcd14dbc2f3b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hhzhx" May 17 00:09:44.233384 kubelet[2692]: E0517 00:09:44.233364 2692 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8d19c2f1a2287dae8c07d25aeda4f0e4d19383d3ae94d2775f9fcd14dbc2f3b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hhzhx" May 17 00:09:44.233533 kubelet[2692]: E0517 00:09:44.233468 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-hhzhx_calico-system(35348305-3030-4a37-b275-4a2201a5f384)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-hhzhx_calico-system(35348305-3030-4a37-b275-4a2201a5f384)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b8d19c2f1a2287dae8c07d25aeda4f0e4d19383d3ae94d2775f9fcd14dbc2f3b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-hhzhx" podUID="35348305-3030-4a37-b275-4a2201a5f384" May 17 00:09:44.258261 kubelet[2692]: I0517 00:09:44.258026 2692 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e4f2591371d3edd791256b5ae2cc9735ba80f85716d9d7c0b9c83f58f663d830" May 17 00:09:44.259605 kubelet[2692]: I0517 00:09:44.259139 2692 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5dca6998a34eaafa08777f49af3a784cbe29e9630b06701ab3ec0f1bc4dbc6af" May 17 00:09:44.260070 containerd[1482]: time="2025-05-17T00:09:44.260037059Z" level=info msg="StopPodSandbox for \"5dca6998a34eaafa08777f49af3a784cbe29e9630b06701ab3ec0f1bc4dbc6af\"" May 17 00:09:44.261101 containerd[1482]: time="2025-05-17T00:09:44.261001234Z" level=info msg="Ensure that sandbox 5dca6998a34eaafa08777f49af3a784cbe29e9630b06701ab3ec0f1bc4dbc6af in task-service has been cleanup successfully" May 17 00:09:44.261332 containerd[1482]: time="2025-05-17T00:09:44.260201382Z" level=info msg="StopPodSandbox for \"e4f2591371d3edd791256b5ae2cc9735ba80f85716d9d7c0b9c83f58f663d830\"" May 17 00:09:44.261880 containerd[1482]: time="2025-05-17T00:09:44.261854886Z" level=info msg="Ensure that sandbox e4f2591371d3edd791256b5ae2cc9735ba80f85716d9d7c0b9c83f58f663d830 in task-service has been cleanup successfully" May 17 00:09:44.264356 kubelet[2692]: I0517 00:09:44.263968 2692 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c5263c67206c435400709369a39654c12ff69c0752848125b4d962af31ac7e8" May 17 00:09:44.266417 containerd[1482]: time="2025-05-17T00:09:44.265383658Z" level=info msg="StopPodSandbox for \"2c5263c67206c435400709369a39654c12ff69c0752848125b4d962af31ac7e8\"" May 17 00:09:44.267724 containerd[1482]: time="2025-05-17T00:09:44.267556330Z" level=info msg="Ensure that sandbox 2c5263c67206c435400709369a39654c12ff69c0752848125b4d962af31ac7e8 in task-service has been cleanup successfully" May 17 00:09:44.273825 containerd[1482]: time="2025-05-17T00:09:44.273569699Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.0\"" May 17 00:09:44.276209 kubelet[2692]: I0517 00:09:44.276154 2692 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="03a13ee4d5219eab1d198a40172c1940ee455fea57f73e0208bcffd2c81141f1" May 17 00:09:44.277978 containerd[1482]: time="2025-05-17T00:09:44.276932309Z" level=info msg="StopPodSandbox for \"03a13ee4d5219eab1d198a40172c1940ee455fea57f73e0208bcffd2c81141f1\"" May 17 00:09:44.277978 containerd[1482]: time="2025-05-17T00:09:44.277194633Z" level=info msg="Ensure that sandbox 03a13ee4d5219eab1d198a40172c1940ee455fea57f73e0208bcffd2c81141f1 in task-service has been cleanup 
successfully" May 17 00:09:44.284135 kubelet[2692]: I0517 00:09:44.283984 2692 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b8d19c2f1a2287dae8c07d25aeda4f0e4d19383d3ae94d2775f9fcd14dbc2f3b" May 17 00:09:44.287285 containerd[1482]: time="2025-05-17T00:09:44.287242301Z" level=info msg="StopPodSandbox for \"b8d19c2f1a2287dae8c07d25aeda4f0e4d19383d3ae94d2775f9fcd14dbc2f3b\"" May 17 00:09:44.287943 containerd[1482]: time="2025-05-17T00:09:44.287906871Z" level=info msg="Ensure that sandbox b8d19c2f1a2287dae8c07d25aeda4f0e4d19383d3ae94d2775f9fcd14dbc2f3b in task-service has been cleanup successfully" May 17 00:09:44.308432 kubelet[2692]: I0517 00:09:44.308401 2692 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5ddd748a415ca45be339a78cd8f1b01d3c170c3abe79f132299b15b7b570db0c" May 17 00:09:44.315396 containerd[1482]: time="2025-05-17T00:09:44.315293555Z" level=info msg="StopPodSandbox for \"5ddd748a415ca45be339a78cd8f1b01d3c170c3abe79f132299b15b7b570db0c\"" May 17 00:09:44.315724 containerd[1482]: time="2025-05-17T00:09:44.315560759Z" level=info msg="Ensure that sandbox 5ddd748a415ca45be339a78cd8f1b01d3c170c3abe79f132299b15b7b570db0c in task-service has been cleanup successfully" May 17 00:09:44.339210 kubelet[2692]: I0517 00:09:44.338933 2692 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8cef55272004fa7f71d47a60184f91417c07e611093881b643bc5e0cdce08666" May 17 00:09:44.342413 containerd[1482]: time="2025-05-17T00:09:44.342225873Z" level=info msg="StopPodSandbox for \"8cef55272004fa7f71d47a60184f91417c07e611093881b643bc5e0cdce08666\"" May 17 00:09:44.342563 containerd[1482]: time="2025-05-17T00:09:44.342491157Z" level=info msg="Ensure that sandbox 8cef55272004fa7f71d47a60184f91417c07e611093881b643bc5e0cdce08666 in task-service has been cleanup successfully" May 17 00:09:44.355563 kubelet[2692]: I0517 00:09:44.354938 2692 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1470589ad3d85bed50809e5c1d62ce47ce50fd9f34811031c6e14f5e6f9ac753" May 17 00:09:44.356429 containerd[1482]: time="2025-05-17T00:09:44.356396282Z" level=info msg="StopPodSandbox for \"1470589ad3d85bed50809e5c1d62ce47ce50fd9f34811031c6e14f5e6f9ac753\"" May 17 00:09:44.356948 containerd[1482]: time="2025-05-17T00:09:44.356707926Z" level=info msg="Ensure that sandbox 1470589ad3d85bed50809e5c1d62ce47ce50fd9f34811031c6e14f5e6f9ac753 in task-service has been cleanup successfully" May 17 00:09:44.391682 containerd[1482]: time="2025-05-17T00:09:44.391625442Z" level=error msg="StopPodSandbox for \"2c5263c67206c435400709369a39654c12ff69c0752848125b4d962af31ac7e8\" failed" error="failed to destroy network for sandbox \"2c5263c67206c435400709369a39654c12ff69c0752848125b4d962af31ac7e8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:09:44.396181 kubelet[2692]: E0517 00:09:44.396085 2692 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2c5263c67206c435400709369a39654c12ff69c0752848125b4d962af31ac7e8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2c5263c67206c435400709369a39654c12ff69c0752848125b4d962af31ac7e8" May 17 00:09:44.397114 
kubelet[2692]: E0517 00:09:44.396614 2692 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2c5263c67206c435400709369a39654c12ff69c0752848125b4d962af31ac7e8"} May 17 00:09:44.397114 kubelet[2692]: E0517 00:09:44.396822 2692 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5f0c7770-d646-424e-9f31-ff80f202f1a8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2c5263c67206c435400709369a39654c12ff69c0752848125b4d962af31ac7e8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:09:44.397114 kubelet[2692]: E0517 00:09:44.396844 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5f0c7770-d646-424e-9f31-ff80f202f1a8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2c5263c67206c435400709369a39654c12ff69c0752848125b4d962af31ac7e8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-75d5dfcff5-cq6gh" podUID="5f0c7770-d646-424e-9f31-ff80f202f1a8" May 17 00:09:44.405857 containerd[1482]: time="2025-05-17T00:09:44.405508487Z" level=error msg="StopPodSandbox for \"e4f2591371d3edd791256b5ae2cc9735ba80f85716d9d7c0b9c83f58f663d830\" failed" error="failed to destroy network for sandbox \"e4f2591371d3edd791256b5ae2cc9735ba80f85716d9d7c0b9c83f58f663d830\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:09:44.406796 kubelet[2692]: E0517 00:09:44.406428 2692 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e4f2591371d3edd791256b5ae2cc9735ba80f85716d9d7c0b9c83f58f663d830\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e4f2591371d3edd791256b5ae2cc9735ba80f85716d9d7c0b9c83f58f663d830" May 17 00:09:44.406796 kubelet[2692]: E0517 00:09:44.406620 2692 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e4f2591371d3edd791256b5ae2cc9735ba80f85716d9d7c0b9c83f58f663d830"} May 17 00:09:44.407141 kubelet[2692]: E0517 00:09:44.407003 2692 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b9fba4a9-b127-4dfe-8a91-d1118962d299\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e4f2591371d3edd791256b5ae2cc9735ba80f85716d9d7c0b9c83f58f663d830\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:09:44.407466 kubelet[2692]: E0517 00:09:44.407033 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b9fba4a9-b127-4dfe-8a91-d1118962d299\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"e4f2591371d3edd791256b5ae2cc9735ba80f85716d9d7c0b9c83f58f663d830\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-b7fc8b74f-tr4ng" podUID="b9fba4a9-b127-4dfe-8a91-d1118962d299" May 17 00:09:44.422627 containerd[1482]: time="2025-05-17T00:09:44.422416776Z" level=error msg="StopPodSandbox for \"5dca6998a34eaafa08777f49af3a784cbe29e9630b06701ab3ec0f1bc4dbc6af\" failed" error="failed to destroy network for sandbox \"5dca6998a34eaafa08777f49af3a784cbe29e9630b06701ab3ec0f1bc4dbc6af\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:09:44.422901 kubelet[2692]: E0517 00:09:44.422693 2692 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5dca6998a34eaafa08777f49af3a784cbe29e9630b06701ab3ec0f1bc4dbc6af\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5dca6998a34eaafa08777f49af3a784cbe29e9630b06701ab3ec0f1bc4dbc6af" May 17 00:09:44.422901 kubelet[2692]: E0517 00:09:44.422742 2692 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5dca6998a34eaafa08777f49af3a784cbe29e9630b06701ab3ec0f1bc4dbc6af"} May 17 00:09:44.422901 kubelet[2692]: E0517 00:09:44.422797 2692 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c2a53ade-ef79-46ee-b2b9-636e3cc942be\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5dca6998a34eaafa08777f49af3a784cbe29e9630b06701ab3ec0f1bc4dbc6af\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:09:44.422901 kubelet[2692]: E0517 00:09:44.422819 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c2a53ade-ef79-46ee-b2b9-636e3cc942be\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5dca6998a34eaafa08777f49af3a784cbe29e9630b06701ab3ec0f1bc4dbc6af\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-8f77d7b6c-522bq" podUID="c2a53ade-ef79-46ee-b2b9-636e3cc942be" May 17 00:09:44.427813 containerd[1482]: time="2025-05-17T00:09:44.427474251Z" level=error msg="StopPodSandbox for \"03a13ee4d5219eab1d198a40172c1940ee455fea57f73e0208bcffd2c81141f1\" failed" error="failed to destroy network for sandbox \"03a13ee4d5219eab1d198a40172c1940ee455fea57f73e0208bcffd2c81141f1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:09:44.428316 kubelet[2692]: E0517 00:09:44.428058 2692 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"03a13ee4d5219eab1d198a40172c1940ee455fea57f73e0208bcffd2c81141f1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="03a13ee4d5219eab1d198a40172c1940ee455fea57f73e0208bcffd2c81141f1" May 17 00:09:44.428316 kubelet[2692]: E0517 00:09:44.428108 2692 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"03a13ee4d5219eab1d198a40172c1940ee455fea57f73e0208bcffd2c81141f1"} May 17 00:09:44.428316 kubelet[2692]: E0517 00:09:44.428249 2692 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"93f8cf49-d477-4a58-9b7d-7f4a90b44572\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"03a13ee4d5219eab1d198a40172c1940ee455fea57f73e0208bcffd2c81141f1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:09:44.428316 kubelet[2692]: E0517 00:09:44.428272 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"93f8cf49-d477-4a58-9b7d-7f4a90b44572\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"03a13ee4d5219eab1d198a40172c1940ee455fea57f73e0208bcffd2c81141f1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7bc8db98b9-gxkgt" podUID="93f8cf49-d477-4a58-9b7d-7f4a90b44572" May 17 00:09:44.452891 containerd[1482]: time="2025-05-17T00:09:44.452537901Z" level=error msg="StopPodSandbox for \"8cef55272004fa7f71d47a60184f91417c07e611093881b643bc5e0cdce08666\" failed" error="failed to destroy network for sandbox \"8cef55272004fa7f71d47a60184f91417c07e611093881b643bc5e0cdce08666\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:09:44.453208 kubelet[2692]: E0517 00:09:44.453038 2692 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8cef55272004fa7f71d47a60184f91417c07e611093881b643bc5e0cdce08666\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8cef55272004fa7f71d47a60184f91417c07e611093881b643bc5e0cdce08666" May 17 00:09:44.453208 kubelet[2692]: E0517 00:09:44.453104 2692 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8cef55272004fa7f71d47a60184f91417c07e611093881b643bc5e0cdce08666"} May 17 00:09:44.453208 kubelet[2692]: E0517 00:09:44.453145 2692 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fd2d0fe1-ef42-47d9-a89f-b98fdf94081f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8cef55272004fa7f71d47a60184f91417c07e611093881b643bc5e0cdce08666\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" May 17 00:09:44.453208 kubelet[2692]: E0517 00:09:44.453177 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fd2d0fe1-ef42-47d9-a89f-b98fdf94081f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8cef55272004fa7f71d47a60184f91417c07e611093881b643bc5e0cdce08666\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-r5wvd" podUID="fd2d0fe1-ef42-47d9-a89f-b98fdf94081f" May 17 00:09:44.456462 containerd[1482]: time="2025-05-17T00:09:44.456245996Z" level=error msg="StopPodSandbox for \"5ddd748a415ca45be339a78cd8f1b01d3c170c3abe79f132299b15b7b570db0c\" failed" error="failed to destroy network for sandbox \"5ddd748a415ca45be339a78cd8f1b01d3c170c3abe79f132299b15b7b570db0c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:09:44.456462 containerd[1482]: time="2025-05-17T00:09:44.456246036Z" level=error msg="StopPodSandbox for \"b8d19c2f1a2287dae8c07d25aeda4f0e4d19383d3ae94d2775f9fcd14dbc2f3b\" failed" error="failed to destroy network for sandbox \"b8d19c2f1a2287dae8c07d25aeda4f0e4d19383d3ae94d2775f9fcd14dbc2f3b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:09:44.456647 kubelet[2692]: E0517 00:09:44.456590 2692 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b8d19c2f1a2287dae8c07d25aeda4f0e4d19383d3ae94d2775f9fcd14dbc2f3b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b8d19c2f1a2287dae8c07d25aeda4f0e4d19383d3ae94d2775f9fcd14dbc2f3b" May 17 00:09:44.456928 kubelet[2692]: E0517 00:09:44.456740 2692 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5ddd748a415ca45be339a78cd8f1b01d3c170c3abe79f132299b15b7b570db0c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5ddd748a415ca45be339a78cd8f1b01d3c170c3abe79f132299b15b7b570db0c" May 17 00:09:44.456928 kubelet[2692]: E0517 00:09:44.456810 2692 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5ddd748a415ca45be339a78cd8f1b01d3c170c3abe79f132299b15b7b570db0c"} May 17 00:09:44.456928 kubelet[2692]: E0517 00:09:44.456825 2692 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b8d19c2f1a2287dae8c07d25aeda4f0e4d19383d3ae94d2775f9fcd14dbc2f3b"} May 17 00:09:44.456928 kubelet[2692]: E0517 00:09:44.456841 2692 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"65842dcb-5f73-429f-9e8d-0092e98ecc3e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5ddd748a415ca45be339a78cd8f1b01d3c170c3abe79f132299b15b7b570db0c\\\": plugin type=\\\"calico\\\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:09:44.456928 kubelet[2692]: E0517 00:09:44.456864 2692 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"35348305-3030-4a37-b275-4a2201a5f384\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b8d19c2f1a2287dae8c07d25aeda4f0e4d19383d3ae94d2775f9fcd14dbc2f3b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:09:44.457189 kubelet[2692]: E0517 00:09:44.456866 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"65842dcb-5f73-429f-9e8d-0092e98ecc3e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5ddd748a415ca45be339a78cd8f1b01d3c170c3abe79f132299b15b7b570db0c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-cvkpr" podUID="65842dcb-5f73-429f-9e8d-0092e98ecc3e" May 17 00:09:44.457189 kubelet[2692]: E0517 00:09:44.456886 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"35348305-3030-4a37-b275-4a2201a5f384\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b8d19c2f1a2287dae8c07d25aeda4f0e4d19383d3ae94d2775f9fcd14dbc2f3b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-hhzhx" podUID="35348305-3030-4a37-b275-4a2201a5f384" May 17 00:09:44.469899 containerd[1482]: time="2025-05-17T00:09:44.469834236Z" level=error msg="StopPodSandbox for \"1470589ad3d85bed50809e5c1d62ce47ce50fd9f34811031c6e14f5e6f9ac753\" failed" error="failed to destroy network for sandbox \"1470589ad3d85bed50809e5c1d62ce47ce50fd9f34811031c6e14f5e6f9ac753\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:09:44.470621 kubelet[2692]: E0517 00:09:44.470356 2692 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1470589ad3d85bed50809e5c1d62ce47ce50fd9f34811031c6e14f5e6f9ac753\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1470589ad3d85bed50809e5c1d62ce47ce50fd9f34811031c6e14f5e6f9ac753" May 17 00:09:44.470621 kubelet[2692]: E0517 00:09:44.470412 2692 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1470589ad3d85bed50809e5c1d62ce47ce50fd9f34811031c6e14f5e6f9ac753"} May 17 00:09:44.470621 kubelet[2692]: E0517 00:09:44.470452 2692 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"96303885-efee-4645-a557-34a808ca80dd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"1470589ad3d85bed50809e5c1d62ce47ce50fd9f34811031c6e14f5e6f9ac753\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:09:44.470621 kubelet[2692]: E0517 00:09:44.470479 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"96303885-efee-4645-a557-34a808ca80dd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1470589ad3d85bed50809e5c1d62ce47ce50fd9f34811031c6e14f5e6f9ac753\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-75d5dfcff5-dhnhf" podUID="96303885-efee-4645-a557-34a808ca80dd" May 17 00:09:46.175569 kubelet[2692]: I0517 00:09:46.175088 2692 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 00:09:51.241564 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3984241517.mount: Deactivated successfully. May 17 00:09:51.275811 containerd[1482]: time="2025-05-17T00:09:51.273956103Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:09:51.275811 containerd[1482]: time="2025-05-17T00:09:51.275457689Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.0: active requests=0, bytes read=150465379" May 17 00:09:51.275811 containerd[1482]: time="2025-05-17T00:09:51.275688533Z" level=info msg="ImageCreate event name:\"sha256:f7148fde8e28b27da58f84cac134cdc53b5df321cda13c660192f06839670732\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:09:51.280019 containerd[1482]: time="2025-05-17T00:09:51.279948486Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:7cb61ea47ca0a8e6d0526a42da4f1e399b37ccd13339d0776d272465cb7ee012\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:09:51.283276 containerd[1482]: time="2025-05-17T00:09:51.283219023Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.0\" with image id \"sha256:f7148fde8e28b27da58f84cac134cdc53b5df321cda13c660192f06839670732\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:7cb61ea47ca0a8e6d0526a42da4f1e399b37ccd13339d0776d272465cb7ee012\", size \"150465241\" in 7.009583602s" May 17 00:09:51.283472 containerd[1482]: time="2025-05-17T00:09:51.283453227Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.0\" returns image reference \"sha256:f7148fde8e28b27da58f84cac134cdc53b5df321cda13c660192f06839670732\"" May 17 00:09:51.316520 containerd[1482]: time="2025-05-17T00:09:51.316480275Z" level=info msg="CreateContainer within sandbox \"70864ca4a6179868fe969caa30a5a5f3889043c8a4458ef7078a093436f1fcbb\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 17 00:09:51.339380 containerd[1482]: time="2025-05-17T00:09:51.339333628Z" level=info msg="CreateContainer within sandbox \"70864ca4a6179868fe969caa30a5a5f3889043c8a4458ef7078a093436f1fcbb\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"bf0b7430691e09f5e3c20f95541c3cd5f64cc64772fae6ae78735c168d23caf6\"" May 17 00:09:51.345007 containerd[1482]: time="2025-05-17T00:09:51.344885563Z" level=info msg="StartContainer for 
\"bf0b7430691e09f5e3c20f95541c3cd5f64cc64772fae6ae78735c168d23caf6\"" May 17 00:09:51.385037 systemd[1]: Started cri-containerd-bf0b7430691e09f5e3c20f95541c3cd5f64cc64772fae6ae78735c168d23caf6.scope - libcontainer container bf0b7430691e09f5e3c20f95541c3cd5f64cc64772fae6ae78735c168d23caf6. May 17 00:09:51.423438 containerd[1482]: time="2025-05-17T00:09:51.423395553Z" level=info msg="StartContainer for \"bf0b7430691e09f5e3c20f95541c3cd5f64cc64772fae6ae78735c168d23caf6\" returns successfully" May 17 00:09:51.564453 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 17 00:09:51.565501 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. May 17 00:09:51.680035 containerd[1482]: time="2025-05-17T00:09:51.679987725Z" level=info msg="StopPodSandbox for \"03a13ee4d5219eab1d198a40172c1940ee455fea57f73e0208bcffd2c81141f1\"" May 17 00:09:51.875079 containerd[1482]: 2025-05-17 00:09:51.789 [INFO][3884] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="03a13ee4d5219eab1d198a40172c1940ee455fea57f73e0208bcffd2c81141f1" May 17 00:09:51.875079 containerd[1482]: 2025-05-17 00:09:51.790 [INFO][3884] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="03a13ee4d5219eab1d198a40172c1940ee455fea57f73e0208bcffd2c81141f1" iface="eth0" netns="/var/run/netns/cni-f95fc7ed-0cb7-2c82-8749-04308d033441" May 17 00:09:51.875079 containerd[1482]: 2025-05-17 00:09:51.790 [INFO][3884] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="03a13ee4d5219eab1d198a40172c1940ee455fea57f73e0208bcffd2c81141f1" iface="eth0" netns="/var/run/netns/cni-f95fc7ed-0cb7-2c82-8749-04308d033441" May 17 00:09:51.875079 containerd[1482]: 2025-05-17 00:09:51.790 [INFO][3884] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="03a13ee4d5219eab1d198a40172c1940ee455fea57f73e0208bcffd2c81141f1" iface="eth0" netns="/var/run/netns/cni-f95fc7ed-0cb7-2c82-8749-04308d033441" May 17 00:09:51.875079 containerd[1482]: 2025-05-17 00:09:51.790 [INFO][3884] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="03a13ee4d5219eab1d198a40172c1940ee455fea57f73e0208bcffd2c81141f1" May 17 00:09:51.875079 containerd[1482]: 2025-05-17 00:09:51.790 [INFO][3884] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="03a13ee4d5219eab1d198a40172c1940ee455fea57f73e0208bcffd2c81141f1" May 17 00:09:51.875079 containerd[1482]: 2025-05-17 00:09:51.845 [INFO][3897] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="03a13ee4d5219eab1d198a40172c1940ee455fea57f73e0208bcffd2c81141f1" HandleID="k8s-pod-network.03a13ee4d5219eab1d198a40172c1940ee455fea57f73e0208bcffd2c81141f1" Workload="ci--4081--3--3--n--e61ddff57a-k8s-whisker--7bc8db98b9--gxkgt-eth0" May 17 00:09:51.875079 containerd[1482]: 2025-05-17 00:09:51.846 [INFO][3897] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:09:51.875079 containerd[1482]: 2025-05-17 00:09:51.846 [INFO][3897] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:09:51.875079 containerd[1482]: 2025-05-17 00:09:51.860 [WARNING][3897] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="03a13ee4d5219eab1d198a40172c1940ee455fea57f73e0208bcffd2c81141f1" HandleID="k8s-pod-network.03a13ee4d5219eab1d198a40172c1940ee455fea57f73e0208bcffd2c81141f1" Workload="ci--4081--3--3--n--e61ddff57a-k8s-whisker--7bc8db98b9--gxkgt-eth0" May 17 00:09:51.875079 containerd[1482]: 2025-05-17 00:09:51.860 [INFO][3897] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="03a13ee4d5219eab1d198a40172c1940ee455fea57f73e0208bcffd2c81141f1" HandleID="k8s-pod-network.03a13ee4d5219eab1d198a40172c1940ee455fea57f73e0208bcffd2c81141f1" Workload="ci--4081--3--3--n--e61ddff57a-k8s-whisker--7bc8db98b9--gxkgt-eth0" May 17 00:09:51.875079 containerd[1482]: 2025-05-17 00:09:51.868 [INFO][3897] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:09:51.875079 containerd[1482]: 2025-05-17 00:09:51.872 [INFO][3884] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="03a13ee4d5219eab1d198a40172c1940ee455fea57f73e0208bcffd2c81141f1" May 17 00:09:51.877298 containerd[1482]: time="2025-05-17T00:09:51.875700210Z" level=info msg="TearDown network for sandbox \"03a13ee4d5219eab1d198a40172c1940ee455fea57f73e0208bcffd2c81141f1\" successfully" May 17 00:09:51.877298 containerd[1482]: time="2025-05-17T00:09:51.875735371Z" level=info msg="StopPodSandbox for \"03a13ee4d5219eab1d198a40172c1940ee455fea57f73e0208bcffd2c81141f1\" returns successfully" May 17 00:09:51.961038 kubelet[2692]: I0517 00:09:51.960122 2692 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jp5l5\" (UniqueName: \"kubernetes.io/projected/93f8cf49-d477-4a58-9b7d-7f4a90b44572-kube-api-access-jp5l5\") pod \"93f8cf49-d477-4a58-9b7d-7f4a90b44572\" (UID: \"93f8cf49-d477-4a58-9b7d-7f4a90b44572\") " May 17 00:09:51.961038 kubelet[2692]: I0517 00:09:51.960202 2692 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/93f8cf49-d477-4a58-9b7d-7f4a90b44572-whisker-ca-bundle\") pod \"93f8cf49-d477-4a58-9b7d-7f4a90b44572\" (UID: \"93f8cf49-d477-4a58-9b7d-7f4a90b44572\") " May 17 00:09:51.961038 kubelet[2692]: I0517 00:09:51.960241 2692 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/93f8cf49-d477-4a58-9b7d-7f4a90b44572-whisker-backend-key-pair\") pod \"93f8cf49-d477-4a58-9b7d-7f4a90b44572\" (UID: \"93f8cf49-d477-4a58-9b7d-7f4a90b44572\") " May 17 00:09:51.967800 kubelet[2692]: I0517 00:09:51.967323 2692 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93f8cf49-d477-4a58-9b7d-7f4a90b44572-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "93f8cf49-d477-4a58-9b7d-7f4a90b44572" (UID: "93f8cf49-d477-4a58-9b7d-7f4a90b44572"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 17 00:09:51.967800 kubelet[2692]: I0517 00:09:51.967722 2692 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93f8cf49-d477-4a58-9b7d-7f4a90b44572-kube-api-access-jp5l5" (OuterVolumeSpecName: "kube-api-access-jp5l5") pod "93f8cf49-d477-4a58-9b7d-7f4a90b44572" (UID: "93f8cf49-d477-4a58-9b7d-7f4a90b44572"). InnerVolumeSpecName "kube-api-access-jp5l5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" May 17 00:09:51.968452 kubelet[2692]: I0517 00:09:51.968423 2692 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93f8cf49-d477-4a58-9b7d-7f4a90b44572-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "93f8cf49-d477-4a58-9b7d-7f4a90b44572" (UID: "93f8cf49-d477-4a58-9b7d-7f4a90b44572"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" May 17 00:09:52.060784 kubelet[2692]: I0517 00:09:52.060706 2692 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/93f8cf49-d477-4a58-9b7d-7f4a90b44572-whisker-backend-key-pair\") on node \"ci-4081-3-3-n-e61ddff57a\" DevicePath \"\"" May 17 00:09:52.061005 kubelet[2692]: I0517 00:09:52.060809 2692 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jp5l5\" (UniqueName: \"kubernetes.io/projected/93f8cf49-d477-4a58-9b7d-7f4a90b44572-kube-api-access-jp5l5\") on node \"ci-4081-3-3-n-e61ddff57a\" DevicePath \"\"" May 17 00:09:52.061005 kubelet[2692]: I0517 00:09:52.060835 2692 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/93f8cf49-d477-4a58-9b7d-7f4a90b44572-whisker-ca-bundle\") on node \"ci-4081-3-3-n-e61ddff57a\" DevicePath \"\"" May 17 00:09:52.243611 systemd[1]: run-netns-cni\x2df95fc7ed\x2d0cb7\x2d2c82\x2d8749\x2d04308d033441.mount: Deactivated successfully. May 17 00:09:52.244014 systemd[1]: var-lib-kubelet-pods-93f8cf49\x2dd477\x2d4a58\x2d9b7d\x2d7f4a90b44572-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djp5l5.mount: Deactivated successfully. May 17 00:09:52.246479 systemd[1]: var-lib-kubelet-pods-93f8cf49\x2dd477\x2d4a58\x2d9b7d\x2d7f4a90b44572-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. May 17 00:09:52.400896 systemd[1]: Removed slice kubepods-besteffort-pod93f8cf49_d477_4a58_9b7d_7f4a90b44572.slice - libcontainer container kubepods-besteffort-pod93f8cf49_d477_4a58_9b7d_7f4a90b44572.slice. May 17 00:09:52.454324 kubelet[2692]: I0517 00:09:52.454244 2692 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-pqvnr" podStartSLOduration=2.441893653 podStartE2EDuration="18.454222456s" podCreationTimestamp="2025-05-17 00:09:34 +0000 UTC" firstStartedPulling="2025-05-17 00:09:35.284684257 +0000 UTC m=+24.330883620" lastFinishedPulling="2025-05-17 00:09:51.29700578 +0000 UTC m=+40.343212423" observedRunningTime="2025-05-17 00:09:52.42990139 +0000 UTC m=+41.476100833" watchObservedRunningTime="2025-05-17 00:09:52.454222456 +0000 UTC m=+41.500421819" May 17 00:09:52.478650 systemd[1]: run-containerd-runc-k8s.io-bf0b7430691e09f5e3c20f95541c3cd5f64cc64772fae6ae78735c168d23caf6-runc.5f8LOw.mount: Deactivated successfully. May 17 00:09:52.525519 systemd[1]: Created slice kubepods-besteffort-pod0d2cf33f_bbcd_48f5_a11a_d546875d4c3f.slice - libcontainer container kubepods-besteffort-pod0d2cf33f_bbcd_48f5_a11a_d546875d4c3f.slice. 
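
Note on the failure burst at 00:09:44 above: every StopPodSandbox error ends in the same root cause, `stat /var/lib/calico/nodename: no such file or directory` — the Calico CNI plugin cannot find the nodename file that the calico/node container writes once it is up (the error text itself suggests checking that calico/node is running and has /var/lib/calico mounted). A minimal sketch in Go that reproduces just that check outside the CNI plugin, with the path taken verbatim from the log text (run on the node itself; this is an illustration, not Calico's code):

package main

import (
	"fmt"
	"os"
)

func main() {
	// Path quoted in the StopPodSandbox errors above.
	const nodenameFile = "/var/lib/calico/nodename"
	data, err := os.ReadFile(nodenameFile)
	if err != nil {
		// This is the failure mode in the log: calico/node has not yet
		// written the file, so every sandbox teardown fails until it does.
		fmt.Fprintf(os.Stderr, "read %s: %v (is calico-node running and /var/lib/calico mounted?)\n", nodenameFile, err)
		os.Exit(1)
	}
	fmt.Printf("nodename written by calico/node: %q\n", string(data))
}

Consistent with that reading, once the calico-node container starts at 00:09:51 (StartContainer returns successfully above), the subsequent teardown at 00:09:51.875 completes cleanly.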
May 17 00:09:52.565252 kubelet[2692]: I0517 00:09:52.565201 2692 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0d2cf33f-bbcd-48f5-a11a-d546875d4c3f-whisker-backend-key-pair\") pod \"whisker-66d874b469-vd68m\" (UID: \"0d2cf33f-bbcd-48f5-a11a-d546875d4c3f\") " pod="calico-system/whisker-66d874b469-vd68m" May 17 00:09:52.565252 kubelet[2692]: I0517 00:09:52.565259 2692 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdj9s\" (UniqueName: \"kubernetes.io/projected/0d2cf33f-bbcd-48f5-a11a-d546875d4c3f-kube-api-access-xdj9s\") pod \"whisker-66d874b469-vd68m\" (UID: \"0d2cf33f-bbcd-48f5-a11a-d546875d4c3f\") " pod="calico-system/whisker-66d874b469-vd68m" May 17 00:09:52.565252 kubelet[2692]: I0517 00:09:52.565279 2692 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0d2cf33f-bbcd-48f5-a11a-d546875d4c3f-whisker-ca-bundle\") pod \"whisker-66d874b469-vd68m\" (UID: \"0d2cf33f-bbcd-48f5-a11a-d546875d4c3f\") " pod="calico-system/whisker-66d874b469-vd68m" May 17 00:09:52.833542 containerd[1482]: time="2025-05-17T00:09:52.832899963Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-66d874b469-vd68m,Uid:0d2cf33f-bbcd-48f5-a11a-d546875d4c3f,Namespace:calico-system,Attempt:0,}" May 17 00:09:52.986953 systemd-networkd[1384]: calidb98b3cafa6: Link UP May 17 00:09:52.987667 systemd-networkd[1384]: calidb98b3cafa6: Gained carrier May 17 00:09:53.004021 containerd[1482]: 2025-05-17 00:09:52.879 [INFO][3944] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 17 00:09:53.004021 containerd[1482]: 2025-05-17 00:09:52.898 [INFO][3944] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--n--e61ddff57a-k8s-whisker--66d874b469--vd68m-eth0 whisker-66d874b469- calico-system 0d2cf33f-bbcd-48f5-a11a-d546875d4c3f 881 0 2025-05-17 00:09:52 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:66d874b469 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081-3-3-n-e61ddff57a whisker-66d874b469-vd68m eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calidb98b3cafa6 [] [] }} ContainerID="60da86769fca948366d5cb06ddaab0f41825db06468e7e26879a5c0d72156b38" Namespace="calico-system" Pod="whisker-66d874b469-vd68m" WorkloadEndpoint="ci--4081--3--3--n--e61ddff57a-k8s-whisker--66d874b469--vd68m-" May 17 00:09:53.004021 containerd[1482]: 2025-05-17 00:09:52.899 [INFO][3944] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="60da86769fca948366d5cb06ddaab0f41825db06468e7e26879a5c0d72156b38" Namespace="calico-system" Pod="whisker-66d874b469-vd68m" WorkloadEndpoint="ci--4081--3--3--n--e61ddff57a-k8s-whisker--66d874b469--vd68m-eth0" May 17 00:09:53.004021 containerd[1482]: 2025-05-17 00:09:52.929 [INFO][3956] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="60da86769fca948366d5cb06ddaab0f41825db06468e7e26879a5c0d72156b38" HandleID="k8s-pod-network.60da86769fca948366d5cb06ddaab0f41825db06468e7e26879a5c0d72156b38" Workload="ci--4081--3--3--n--e61ddff57a-k8s-whisker--66d874b469--vd68m-eth0" May 17 00:09:53.004021 containerd[1482]: 2025-05-17 00:09:52.929 [INFO][3956] 
ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="60da86769fca948366d5cb06ddaab0f41825db06468e7e26879a5c0d72156b38" HandleID="k8s-pod-network.60da86769fca948366d5cb06ddaab0f41825db06468e7e26879a5c0d72156b38" Workload="ci--4081--3--3--n--e61ddff57a-k8s-whisker--66d874b469--vd68m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400022f070), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-3-n-e61ddff57a", "pod":"whisker-66d874b469-vd68m", "timestamp":"2025-05-17 00:09:52.929661216 +0000 UTC"}, Hostname:"ci-4081-3-3-n-e61ddff57a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:09:53.004021 containerd[1482]: 2025-05-17 00:09:52.929 [INFO][3956] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:09:53.004021 containerd[1482]: 2025-05-17 00:09:52.929 [INFO][3956] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:09:53.004021 containerd[1482]: 2025-05-17 00:09:52.929 [INFO][3956] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-n-e61ddff57a' May 17 00:09:53.004021 containerd[1482]: 2025-05-17 00:09:52.940 [INFO][3956] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.60da86769fca948366d5cb06ddaab0f41825db06468e7e26879a5c0d72156b38" host="ci-4081-3-3-n-e61ddff57a" May 17 00:09:53.004021 containerd[1482]: 2025-05-17 00:09:52.947 [INFO][3956] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-3-n-e61ddff57a" May 17 00:09:53.004021 containerd[1482]: 2025-05-17 00:09:52.952 [INFO][3956] ipam/ipam.go 511: Trying affinity for 192.168.22.0/26 host="ci-4081-3-3-n-e61ddff57a" May 17 00:09:53.004021 containerd[1482]: 2025-05-17 00:09:52.954 [INFO][3956] ipam/ipam.go 158: Attempting to load block cidr=192.168.22.0/26 host="ci-4081-3-3-n-e61ddff57a" May 17 00:09:53.004021 containerd[1482]: 2025-05-17 00:09:52.957 [INFO][3956] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.22.0/26 host="ci-4081-3-3-n-e61ddff57a" May 17 00:09:53.004021 containerd[1482]: 2025-05-17 00:09:52.958 [INFO][3956] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.22.0/26 handle="k8s-pod-network.60da86769fca948366d5cb06ddaab0f41825db06468e7e26879a5c0d72156b38" host="ci-4081-3-3-n-e61ddff57a" May 17 00:09:53.004021 containerd[1482]: 2025-05-17 00:09:52.960 [INFO][3956] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.60da86769fca948366d5cb06ddaab0f41825db06468e7e26879a5c0d72156b38 May 17 00:09:53.004021 containerd[1482]: 2025-05-17 00:09:52.966 [INFO][3956] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.22.0/26 handle="k8s-pod-network.60da86769fca948366d5cb06ddaab0f41825db06468e7e26879a5c0d72156b38" host="ci-4081-3-3-n-e61ddff57a" May 17 00:09:53.004021 containerd[1482]: 2025-05-17 00:09:52.975 [INFO][3956] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.22.1/26] block=192.168.22.0/26 handle="k8s-pod-network.60da86769fca948366d5cb06ddaab0f41825db06468e7e26879a5c0d72156b38" host="ci-4081-3-3-n-e61ddff57a" May 17 00:09:53.004021 containerd[1482]: 2025-05-17 00:09:52.975 [INFO][3956] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.22.1/26] handle="k8s-pod-network.60da86769fca948366d5cb06ddaab0f41825db06468e7e26879a5c0d72156b38" host="ci-4081-3-3-n-e61ddff57a" May 17 00:09:53.004021 containerd[1482]: 2025-05-17 
00:09:52.975 [INFO][3956] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:09:53.004021 containerd[1482]: 2025-05-17 00:09:52.975 [INFO][3956] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.22.1/26] IPv6=[] ContainerID="60da86769fca948366d5cb06ddaab0f41825db06468e7e26879a5c0d72156b38" HandleID="k8s-pod-network.60da86769fca948366d5cb06ddaab0f41825db06468e7e26879a5c0d72156b38" Workload="ci--4081--3--3--n--e61ddff57a-k8s-whisker--66d874b469--vd68m-eth0" May 17 00:09:53.004847 containerd[1482]: 2025-05-17 00:09:52.978 [INFO][3944] cni-plugin/k8s.go 418: Populated endpoint ContainerID="60da86769fca948366d5cb06ddaab0f41825db06468e7e26879a5c0d72156b38" Namespace="calico-system" Pod="whisker-66d874b469-vd68m" WorkloadEndpoint="ci--4081--3--3--n--e61ddff57a-k8s-whisker--66d874b469--vd68m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--e61ddff57a-k8s-whisker--66d874b469--vd68m-eth0", GenerateName:"whisker-66d874b469-", Namespace:"calico-system", SelfLink:"", UID:"0d2cf33f-bbcd-48f5-a11a-d546875d4c3f", ResourceVersion:"881", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 9, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"66d874b469", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-e61ddff57a", ContainerID:"", Pod:"whisker-66d874b469-vd68m", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.22.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calidb98b3cafa6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:09:53.004847 containerd[1482]: 2025-05-17 00:09:52.978 [INFO][3944] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.22.1/32] ContainerID="60da86769fca948366d5cb06ddaab0f41825db06468e7e26879a5c0d72156b38" Namespace="calico-system" Pod="whisker-66d874b469-vd68m" WorkloadEndpoint="ci--4081--3--3--n--e61ddff57a-k8s-whisker--66d874b469--vd68m-eth0" May 17 00:09:53.004847 containerd[1482]: 2025-05-17 00:09:52.978 [INFO][3944] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidb98b3cafa6 ContainerID="60da86769fca948366d5cb06ddaab0f41825db06468e7e26879a5c0d72156b38" Namespace="calico-system" Pod="whisker-66d874b469-vd68m" WorkloadEndpoint="ci--4081--3--3--n--e61ddff57a-k8s-whisker--66d874b469--vd68m-eth0" May 17 00:09:53.004847 containerd[1482]: 2025-05-17 00:09:52.988 [INFO][3944] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="60da86769fca948366d5cb06ddaab0f41825db06468e7e26879a5c0d72156b38" Namespace="calico-system" Pod="whisker-66d874b469-vd68m" WorkloadEndpoint="ci--4081--3--3--n--e61ddff57a-k8s-whisker--66d874b469--vd68m-eth0" May 17 00:09:53.004847 containerd[1482]: 2025-05-17 00:09:52.988 [INFO][3944] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="60da86769fca948366d5cb06ddaab0f41825db06468e7e26879a5c0d72156b38" Namespace="calico-system" Pod="whisker-66d874b469-vd68m" WorkloadEndpoint="ci--4081--3--3--n--e61ddff57a-k8s-whisker--66d874b469--vd68m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--e61ddff57a-k8s-whisker--66d874b469--vd68m-eth0", GenerateName:"whisker-66d874b469-", Namespace:"calico-system", SelfLink:"", UID:"0d2cf33f-bbcd-48f5-a11a-d546875d4c3f", ResourceVersion:"881", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 9, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"66d874b469", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-e61ddff57a", ContainerID:"60da86769fca948366d5cb06ddaab0f41825db06468e7e26879a5c0d72156b38", Pod:"whisker-66d874b469-vd68m", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.22.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calidb98b3cafa6", MAC:"3a:55:dc:1c:1e:4c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:09:53.004847 containerd[1482]: 2025-05-17 00:09:53.001 [INFO][3944] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="60da86769fca948366d5cb06ddaab0f41825db06468e7e26879a5c0d72156b38" Namespace="calico-system" Pod="whisker-66d874b469-vd68m" WorkloadEndpoint="ci--4081--3--3--n--e61ddff57a-k8s-whisker--66d874b469--vd68m-eth0" May 17 00:09:53.022996 containerd[1482]: time="2025-05-17T00:09:53.022356045Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:09:53.022996 containerd[1482]: time="2025-05-17T00:09:53.022423326Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:09:53.022996 containerd[1482]: time="2025-05-17T00:09:53.022465007Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:09:53.022996 containerd[1482]: time="2025-05-17T00:09:53.022570849Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:09:53.048089 systemd[1]: Started cri-containerd-60da86769fca948366d5cb06ddaab0f41825db06468e7e26879a5c0d72156b38.scope - libcontainer container 60da86769fca948366d5cb06ddaab0f41825db06468e7e26879a5c0d72156b38. 
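
The IPAM trace above walks the standard Calico assignment sequence: acquire the host-wide IPAM lock, confirm this host's affinity for the block 192.168.22.0/26, claim the first free address (192.168.22.1/26), write the block back, release the lock. A /26 matches Calico's default per-host block size of 64 addresses. A toy Go sketch of just the block arithmetic, using the CIDR from the trace (illustration only, not Calico's implementation):

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	block := netip.MustParsePrefix("192.168.22.0/26") // block from the IPAM trace
	total := 1 << (32 - block.Bits())                 // /26 -> 64 addresses
	first := block.Addr().Next()                      // skip the network address itself
	fmt.Printf("block %s holds %d addresses; first assignable is %s\n", block, total, first)
	// Prints: block 192.168.22.0/26 holds 64 addresses; first assignable is 192.168.22.1
}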
May 17 00:09:53.102365 kubelet[2692]: I0517 00:09:53.102317 2692 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93f8cf49-d477-4a58-9b7d-7f4a90b44572" path="/var/lib/kubelet/pods/93f8cf49-d477-4a58-9b7d-7f4a90b44572/volumes" May 17 00:09:53.131288 containerd[1482]: time="2025-05-17T00:09:53.129943160Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-66d874b469-vd68m,Uid:0d2cf33f-bbcd-48f5-a11a-d546875d4c3f,Namespace:calico-system,Attempt:0,} returns sandbox id \"60da86769fca948366d5cb06ddaab0f41825db06468e7e26879a5c0d72156b38\"" May 17 00:09:53.136237 containerd[1482]: time="2025-05-17T00:09:53.136127470Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 17 00:09:53.406734 containerd[1482]: time="2025-05-17T00:09:53.406544362Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:09:53.409973 containerd[1482]: time="2025-05-17T00:09:53.408412715Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:09:53.409973 containerd[1482]: time="2025-05-17T00:09:53.408521637Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 17 00:09:53.410156 kubelet[2692]: E0517 00:09:53.408688 2692 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:09:53.410156 kubelet[2692]: E0517 00:09:53.408746 2692 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:09:53.421124 kubelet[2692]: E0517 00:09:53.421043 2692 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:d2c11851cda5488fbb5d694e1e602685,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xdj9s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-66d874b469-vd68m_calico-system(0d2cf33f-bbcd-48f5-a11a-d546875d4c3f): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:09:53.429790 containerd[1482]: time="2025-05-17T00:09:53.429478330Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 17 00:09:53.513799 kernel: bpftool[4130]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set May 17 00:09:53.652203 containerd[1482]: time="2025-05-17T00:09:53.652108812Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:09:53.653565 containerd[1482]: time="2025-05-17T00:09:53.653479677Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:09:53.653565 containerd[1482]: time="2025-05-17T00:09:53.653530157Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 17 00:09:53.654463 kubelet[2692]: E0517 00:09:53.653822 2692 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:09:53.654463 kubelet[2692]: E0517 00:09:53.653873 2692 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:09:53.654594 kubelet[2692]: E0517 00:09:53.653979 2692 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xdj9s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-66d874b469-vd68m_calico-system(0d2cf33f-bbcd-48f5-a11a-d546875d4c3f): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:09:53.657819 kubelet[2692]: E0517 00:09:53.657140 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-66d874b469-vd68m" podUID="0d2cf33f-bbcd-48f5-a11a-d546875d4c3f" May 17 00:09:53.764628 systemd-networkd[1384]: vxlan.calico: Link UP May 17 00:09:53.764636 systemd-networkd[1384]: vxlan.calico: Gained carrier May 17 00:09:54.404579 kubelet[2692]: E0517 00:09:54.404411 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-66d874b469-vd68m" podUID="0d2cf33f-bbcd-48f5-a11a-d546875d4c3f" May 17 00:09:54.598963 systemd-networkd[1384]: calidb98b3cafa6: Gained IPv6LL May 17 00:09:55.098962 containerd[1482]: time="2025-05-17T00:09:55.098464573Z" level=info msg="StopPodSandbox for \"8cef55272004fa7f71d47a60184f91417c07e611093881b643bc5e0cdce08666\"" May 17 00:09:55.110941 systemd-networkd[1384]: vxlan.calico: Gained IPv6LL May 17 00:09:55.205649 containerd[1482]: 2025-05-17 00:09:55.162 [INFO][4234] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8cef55272004fa7f71d47a60184f91417c07e611093881b643bc5e0cdce08666" May 17 00:09:55.205649 containerd[1482]: 2025-05-17 00:09:55.162 [INFO][4234] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8cef55272004fa7f71d47a60184f91417c07e611093881b643bc5e0cdce08666" iface="eth0" netns="/var/run/netns/cni-09ecfc07-c937-d7b0-2f35-240f42825048" May 17 00:09:55.205649 containerd[1482]: 2025-05-17 00:09:55.163 [INFO][4234] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8cef55272004fa7f71d47a60184f91417c07e611093881b643bc5e0cdce08666" iface="eth0" netns="/var/run/netns/cni-09ecfc07-c937-d7b0-2f35-240f42825048" May 17 00:09:55.205649 containerd[1482]: 2025-05-17 00:09:55.163 [INFO][4234] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="8cef55272004fa7f71d47a60184f91417c07e611093881b643bc5e0cdce08666" iface="eth0" netns="/var/run/netns/cni-09ecfc07-c937-d7b0-2f35-240f42825048" May 17 00:09:55.205649 containerd[1482]: 2025-05-17 00:09:55.163 [INFO][4234] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8cef55272004fa7f71d47a60184f91417c07e611093881b643bc5e0cdce08666" May 17 00:09:55.205649 containerd[1482]: 2025-05-17 00:09:55.163 [INFO][4234] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8cef55272004fa7f71d47a60184f91417c07e611093881b643bc5e0cdce08666" May 17 00:09:55.205649 containerd[1482]: 2025-05-17 00:09:55.186 [INFO][4242] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8cef55272004fa7f71d47a60184f91417c07e611093881b643bc5e0cdce08666" HandleID="k8s-pod-network.8cef55272004fa7f71d47a60184f91417c07e611093881b643bc5e0cdce08666" Workload="ci--4081--3--3--n--e61ddff57a-k8s-coredns--7c65d6cfc9--r5wvd-eth0" May 17 00:09:55.205649 containerd[1482]: 2025-05-17 00:09:55.186 [INFO][4242] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:09:55.205649 containerd[1482]: 2025-05-17 00:09:55.187 [INFO][4242] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:09:55.205649 containerd[1482]: 2025-05-17 00:09:55.198 [WARNING][4242] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="8cef55272004fa7f71d47a60184f91417c07e611093881b643bc5e0cdce08666" HandleID="k8s-pod-network.8cef55272004fa7f71d47a60184f91417c07e611093881b643bc5e0cdce08666" Workload="ci--4081--3--3--n--e61ddff57a-k8s-coredns--7c65d6cfc9--r5wvd-eth0" May 17 00:09:55.205649 containerd[1482]: 2025-05-17 00:09:55.198 [INFO][4242] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8cef55272004fa7f71d47a60184f91417c07e611093881b643bc5e0cdce08666" HandleID="k8s-pod-network.8cef55272004fa7f71d47a60184f91417c07e611093881b643bc5e0cdce08666" Workload="ci--4081--3--3--n--e61ddff57a-k8s-coredns--7c65d6cfc9--r5wvd-eth0" May 17 00:09:55.205649 containerd[1482]: 2025-05-17 00:09:55.201 [INFO][4242] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:09:55.205649 containerd[1482]: 2025-05-17 00:09:55.203 [INFO][4234] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8cef55272004fa7f71d47a60184f91417c07e611093881b643bc5e0cdce08666" May 17 00:09:55.206623 containerd[1482]: time="2025-05-17T00:09:55.206439036Z" level=info msg="TearDown network for sandbox \"8cef55272004fa7f71d47a60184f91417c07e611093881b643bc5e0cdce08666\" successfully" May 17 00:09:55.206623 containerd[1482]: time="2025-05-17T00:09:55.206479676Z" level=info msg="StopPodSandbox for \"8cef55272004fa7f71d47a60184f91417c07e611093881b643bc5e0cdce08666\" returns successfully" May 17 00:09:55.208492 containerd[1482]: time="2025-05-17T00:09:55.208393752Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-r5wvd,Uid:fd2d0fe1-ef42-47d9-a89f-b98fdf94081f,Namespace:kube-system,Attempt:1,}" May 17 00:09:55.210633 systemd[1]: run-netns-cni\x2d09ecfc07\x2dc937\x2dd7b0\x2d2f35\x2d240f42825048.mount: Deactivated successfully. 
May 17 00:09:55.364112 systemd-networkd[1384]: calic2abcc195e9: Link UP May 17 00:09:55.364502 systemd-networkd[1384]: calic2abcc195e9: Gained carrier May 17 00:09:55.399801 containerd[1482]: 2025-05-17 00:09:55.270 [INFO][4249] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--n--e61ddff57a-k8s-coredns--7c65d6cfc9--r5wvd-eth0 coredns-7c65d6cfc9- kube-system fd2d0fe1-ef42-47d9-a89f-b98fdf94081f 908 0 2025-05-17 00:09:16 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-3-n-e61ddff57a coredns-7c65d6cfc9-r5wvd eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic2abcc195e9 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="b564ddcdd62285a6973e20c258e57c8edb7887d2d96372cb8d816605437a36c8" Namespace="kube-system" Pod="coredns-7c65d6cfc9-r5wvd" WorkloadEndpoint="ci--4081--3--3--n--e61ddff57a-k8s-coredns--7c65d6cfc9--r5wvd-" May 17 00:09:55.399801 containerd[1482]: 2025-05-17 00:09:55.271 [INFO][4249] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b564ddcdd62285a6973e20c258e57c8edb7887d2d96372cb8d816605437a36c8" Namespace="kube-system" Pod="coredns-7c65d6cfc9-r5wvd" WorkloadEndpoint="ci--4081--3--3--n--e61ddff57a-k8s-coredns--7c65d6cfc9--r5wvd-eth0" May 17 00:09:55.399801 containerd[1482]: 2025-05-17 00:09:55.303 [INFO][4262] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b564ddcdd62285a6973e20c258e57c8edb7887d2d96372cb8d816605437a36c8" HandleID="k8s-pod-network.b564ddcdd62285a6973e20c258e57c8edb7887d2d96372cb8d816605437a36c8" Workload="ci--4081--3--3--n--e61ddff57a-k8s-coredns--7c65d6cfc9--r5wvd-eth0" May 17 00:09:55.399801 containerd[1482]: 2025-05-17 00:09:55.304 [INFO][4262] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b564ddcdd62285a6973e20c258e57c8edb7887d2d96372cb8d816605437a36c8" HandleID="k8s-pod-network.b564ddcdd62285a6973e20c258e57c8edb7887d2d96372cb8d816605437a36c8" Workload="ci--4081--3--3--n--e61ddff57a-k8s-coredns--7c65d6cfc9--r5wvd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002cd7c0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-3-n-e61ddff57a", "pod":"coredns-7c65d6cfc9-r5wvd", "timestamp":"2025-05-17 00:09:55.30363118 +0000 UTC"}, Hostname:"ci-4081-3-3-n-e61ddff57a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:09:55.399801 containerd[1482]: 2025-05-17 00:09:55.304 [INFO][4262] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:09:55.399801 containerd[1482]: 2025-05-17 00:09:55.304 [INFO][4262] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:09:55.399801 containerd[1482]: 2025-05-17 00:09:55.304 [INFO][4262] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-n-e61ddff57a' May 17 00:09:55.399801 containerd[1482]: 2025-05-17 00:09:55.317 [INFO][4262] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b564ddcdd62285a6973e20c258e57c8edb7887d2d96372cb8d816605437a36c8" host="ci-4081-3-3-n-e61ddff57a" May 17 00:09:55.399801 containerd[1482]: 2025-05-17 00:09:55.323 [INFO][4262] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-3-n-e61ddff57a" May 17 00:09:55.399801 containerd[1482]: 2025-05-17 00:09:55.329 [INFO][4262] ipam/ipam.go 511: Trying affinity for 192.168.22.0/26 host="ci-4081-3-3-n-e61ddff57a" May 17 00:09:55.399801 containerd[1482]: 2025-05-17 00:09:55.332 [INFO][4262] ipam/ipam.go 158: Attempting to load block cidr=192.168.22.0/26 host="ci-4081-3-3-n-e61ddff57a" May 17 00:09:55.399801 containerd[1482]: 2025-05-17 00:09:55.335 [INFO][4262] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.22.0/26 host="ci-4081-3-3-n-e61ddff57a" May 17 00:09:55.399801 containerd[1482]: 2025-05-17 00:09:55.335 [INFO][4262] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.22.0/26 handle="k8s-pod-network.b564ddcdd62285a6973e20c258e57c8edb7887d2d96372cb8d816605437a36c8" host="ci-4081-3-3-n-e61ddff57a" May 17 00:09:55.399801 containerd[1482]: 2025-05-17 00:09:55.338 [INFO][4262] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b564ddcdd62285a6973e20c258e57c8edb7887d2d96372cb8d816605437a36c8 May 17 00:09:55.399801 containerd[1482]: 2025-05-17 00:09:55.344 [INFO][4262] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.22.0/26 handle="k8s-pod-network.b564ddcdd62285a6973e20c258e57c8edb7887d2d96372cb8d816605437a36c8" host="ci-4081-3-3-n-e61ddff57a" May 17 00:09:55.399801 containerd[1482]: 2025-05-17 00:09:55.352 [INFO][4262] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.22.2/26] block=192.168.22.0/26 handle="k8s-pod-network.b564ddcdd62285a6973e20c258e57c8edb7887d2d96372cb8d816605437a36c8" host="ci-4081-3-3-n-e61ddff57a" May 17 00:09:55.399801 containerd[1482]: 2025-05-17 00:09:55.353 [INFO][4262] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.22.2/26] handle="k8s-pod-network.b564ddcdd62285a6973e20c258e57c8edb7887d2d96372cb8d816605437a36c8" host="ci-4081-3-3-n-e61ddff57a" May 17 00:09:55.399801 containerd[1482]: 2025-05-17 00:09:55.353 [INFO][4262] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
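The IPAM trace above follows a fixed sequence: acquire the host-wide lock, look up the host's block affinity (192.168.22.0/26 here), load the block, claim the next free address, create a handle for it, and write the block back. A toy in-memory Go sketch of that sequence (illustrative only; the handle names are made up, and the real logic lives in Calico's ipam package):

```go
// ipam_assign_sketch.go: toy model of block-affinity assignment.
package main

import (
	"fmt"
	"net"
	"sync"
)

var (
	hostLock  sync.Mutex          // stands in for the host-wide IPAM lock
	blockCIDR = "192.168.22.0/26" // the block this host holds an affinity for
	allocated = map[string]string{ // IP -> handleID; .1 is already taken
		"192.168.22.1": "k8s-pod-network.some-earlier-sandbox",
	}
)

// assign claims the next free address in the affine block under handleID.
func assign(handleID string) (net.IP, error) {
	hostLock.Lock() // "About to acquire host-wide IPAM lock."
	defer hostLock.Unlock()

	_, ipnet, err := net.ParseCIDR(blockCIDR)
	if err != nil {
		return nil, err
	}
	base := ipnet.IP.To4()
	for off := 1; off < 63; off++ { // usable host addresses in a /26
		cand := net.IPv4(base[0], base[1], base[2], base[3]+byte(off))
		if _, taken := allocated[cand.String()]; !taken {
			allocated[cand.String()] = handleID // "Writing block in order to claim IPs"
			return cand, nil
		}
	}
	return nil, fmt.Errorf("block %s exhausted", blockCIDR)
}

func main() {
	ip, err := assign("k8s-pod-network.demo-handle")
	fmt.Println(ip, err) // 192.168.22.2 <nil>, the same address the trace claims
}
```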
May 17 00:09:55.399801 containerd[1482]: 2025-05-17 00:09:55.353 [INFO][4262] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.22.2/26] IPv6=[] ContainerID="b564ddcdd62285a6973e20c258e57c8edb7887d2d96372cb8d816605437a36c8" HandleID="k8s-pod-network.b564ddcdd62285a6973e20c258e57c8edb7887d2d96372cb8d816605437a36c8" Workload="ci--4081--3--3--n--e61ddff57a-k8s-coredns--7c65d6cfc9--r5wvd-eth0" May 17 00:09:55.400425 containerd[1482]: 2025-05-17 00:09:55.356 [INFO][4249] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b564ddcdd62285a6973e20c258e57c8edb7887d2d96372cb8d816605437a36c8" Namespace="kube-system" Pod="coredns-7c65d6cfc9-r5wvd" WorkloadEndpoint="ci--4081--3--3--n--e61ddff57a-k8s-coredns--7c65d6cfc9--r5wvd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--e61ddff57a-k8s-coredns--7c65d6cfc9--r5wvd-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"fd2d0fe1-ef42-47d9-a89f-b98fdf94081f", ResourceVersion:"908", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 9, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-e61ddff57a", ContainerID:"", Pod:"coredns-7c65d6cfc9-r5wvd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.22.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic2abcc195e9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:09:55.400425 containerd[1482]: 2025-05-17 00:09:55.356 [INFO][4249] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.22.2/32] ContainerID="b564ddcdd62285a6973e20c258e57c8edb7887d2d96372cb8d816605437a36c8" Namespace="kube-system" Pod="coredns-7c65d6cfc9-r5wvd" WorkloadEndpoint="ci--4081--3--3--n--e61ddff57a-k8s-coredns--7c65d6cfc9--r5wvd-eth0" May 17 00:09:55.400425 containerd[1482]: 2025-05-17 00:09:55.356 [INFO][4249] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic2abcc195e9 ContainerID="b564ddcdd62285a6973e20c258e57c8edb7887d2d96372cb8d816605437a36c8" Namespace="kube-system" Pod="coredns-7c65d6cfc9-r5wvd" WorkloadEndpoint="ci--4081--3--3--n--e61ddff57a-k8s-coredns--7c65d6cfc9--r5wvd-eth0" May 17 00:09:55.400425 containerd[1482]: 2025-05-17 00:09:55.362 [INFO][4249] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b564ddcdd62285a6973e20c258e57c8edb7887d2d96372cb8d816605437a36c8" Namespace="kube-system" 
Pod="coredns-7c65d6cfc9-r5wvd" WorkloadEndpoint="ci--4081--3--3--n--e61ddff57a-k8s-coredns--7c65d6cfc9--r5wvd-eth0" May 17 00:09:55.400425 containerd[1482]: 2025-05-17 00:09:55.363 [INFO][4249] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b564ddcdd62285a6973e20c258e57c8edb7887d2d96372cb8d816605437a36c8" Namespace="kube-system" Pod="coredns-7c65d6cfc9-r5wvd" WorkloadEndpoint="ci--4081--3--3--n--e61ddff57a-k8s-coredns--7c65d6cfc9--r5wvd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--e61ddff57a-k8s-coredns--7c65d6cfc9--r5wvd-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"fd2d0fe1-ef42-47d9-a89f-b98fdf94081f", ResourceVersion:"908", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 9, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-e61ddff57a", ContainerID:"b564ddcdd62285a6973e20c258e57c8edb7887d2d96372cb8d816605437a36c8", Pod:"coredns-7c65d6cfc9-r5wvd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.22.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic2abcc195e9", MAC:"2e:fe:7b:ac:1a:e0", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:09:55.400425 containerd[1482]: 2025-05-17 00:09:55.395 [INFO][4249] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b564ddcdd62285a6973e20c258e57c8edb7887d2d96372cb8d816605437a36c8" Namespace="kube-system" Pod="coredns-7c65d6cfc9-r5wvd" WorkloadEndpoint="ci--4081--3--3--n--e61ddff57a-k8s-coredns--7c65d6cfc9--r5wvd-eth0" May 17 00:09:55.430217 containerd[1482]: time="2025-05-17T00:09:55.429949779Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:09:55.430217 containerd[1482]: time="2025-05-17T00:09:55.430023621Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:09:55.430217 containerd[1482]: time="2025-05-17T00:09:55.430044061Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:09:55.430217 containerd[1482]: time="2025-05-17T00:09:55.430164943Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:09:55.471007 systemd[1]: Started cri-containerd-b564ddcdd62285a6973e20c258e57c8edb7887d2d96372cb8d816605437a36c8.scope - libcontainer container b564ddcdd62285a6973e20c258e57c8edb7887d2d96372cb8d816605437a36c8. May 17 00:09:55.523259 containerd[1482]: time="2025-05-17T00:09:55.523202212Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-r5wvd,Uid:fd2d0fe1-ef42-47d9-a89f-b98fdf94081f,Namespace:kube-system,Attempt:1,} returns sandbox id \"b564ddcdd62285a6973e20c258e57c8edb7887d2d96372cb8d816605437a36c8\"" May 17 00:09:55.543342 containerd[1482]: time="2025-05-17T00:09:55.543301261Z" level=info msg="CreateContainer within sandbox \"b564ddcdd62285a6973e20c258e57c8edb7887d2d96372cb8d816605437a36c8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 17 00:09:55.557940 containerd[1482]: time="2025-05-17T00:09:55.557703765Z" level=info msg="CreateContainer within sandbox \"b564ddcdd62285a6973e20c258e57c8edb7887d2d96372cb8d816605437a36c8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a9ef478a584392fd4b5db0c1ad60fb2adbebbfd383f91960098ac95cace8f8f0\"" May 17 00:09:55.559403 containerd[1482]: time="2025-05-17T00:09:55.559235753Z" level=info msg="StartContainer for \"a9ef478a584392fd4b5db0c1ad60fb2adbebbfd383f91960098ac95cace8f8f0\"" May 17 00:09:55.587996 systemd[1]: Started cri-containerd-a9ef478a584392fd4b5db0c1ad60fb2adbebbfd383f91960098ac95cace8f8f0.scope - libcontainer container a9ef478a584392fd4b5db0c1ad60fb2adbebbfd383f91960098ac95cace8f8f0. May 17 00:09:55.624303 containerd[1482]: time="2025-05-17T00:09:55.623956142Z" level=info msg="StartContainer for \"a9ef478a584392fd4b5db0c1ad60fb2adbebbfd383f91960098ac95cace8f8f0\" returns successfully" May 17 00:09:56.098284 containerd[1482]: time="2025-05-17T00:09:56.098127954Z" level=info msg="StopPodSandbox for \"5ddd748a415ca45be339a78cd8f1b01d3c170c3abe79f132299b15b7b570db0c\"" May 17 00:09:56.193491 containerd[1482]: 2025-05-17 00:09:56.147 [INFO][4364] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5ddd748a415ca45be339a78cd8f1b01d3c170c3abe79f132299b15b7b570db0c" May 17 00:09:56.193491 containerd[1482]: 2025-05-17 00:09:56.148 [INFO][4364] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5ddd748a415ca45be339a78cd8f1b01d3c170c3abe79f132299b15b7b570db0c" iface="eth0" netns="/var/run/netns/cni-937a8d9e-71c4-3028-67a7-d900c100ca7a" May 17 00:09:56.193491 containerd[1482]: 2025-05-17 00:09:56.151 [INFO][4364] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5ddd748a415ca45be339a78cd8f1b01d3c170c3abe79f132299b15b7b570db0c" iface="eth0" netns="/var/run/netns/cni-937a8d9e-71c4-3028-67a7-d900c100ca7a" May 17 00:09:56.193491 containerd[1482]: 2025-05-17 00:09:56.151 [INFO][4364] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="5ddd748a415ca45be339a78cd8f1b01d3c170c3abe79f132299b15b7b570db0c" iface="eth0" netns="/var/run/netns/cni-937a8d9e-71c4-3028-67a7-d900c100ca7a" May 17 00:09:56.193491 containerd[1482]: 2025-05-17 00:09:56.151 [INFO][4364] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5ddd748a415ca45be339a78cd8f1b01d3c170c3abe79f132299b15b7b570db0c" May 17 00:09:56.193491 containerd[1482]: 2025-05-17 00:09:56.151 [INFO][4364] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5ddd748a415ca45be339a78cd8f1b01d3c170c3abe79f132299b15b7b570db0c" May 17 00:09:56.193491 containerd[1482]: 2025-05-17 00:09:56.176 [INFO][4371] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5ddd748a415ca45be339a78cd8f1b01d3c170c3abe79f132299b15b7b570db0c" HandleID="k8s-pod-network.5ddd748a415ca45be339a78cd8f1b01d3c170c3abe79f132299b15b7b570db0c" Workload="ci--4081--3--3--n--e61ddff57a-k8s-coredns--7c65d6cfc9--cvkpr-eth0" May 17 00:09:56.193491 containerd[1482]: 2025-05-17 00:09:56.176 [INFO][4371] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:09:56.193491 containerd[1482]: 2025-05-17 00:09:56.176 [INFO][4371] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:09:56.193491 containerd[1482]: 2025-05-17 00:09:56.188 [WARNING][4371] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5ddd748a415ca45be339a78cd8f1b01d3c170c3abe79f132299b15b7b570db0c" HandleID="k8s-pod-network.5ddd748a415ca45be339a78cd8f1b01d3c170c3abe79f132299b15b7b570db0c" Workload="ci--4081--3--3--n--e61ddff57a-k8s-coredns--7c65d6cfc9--cvkpr-eth0" May 17 00:09:56.193491 containerd[1482]: 2025-05-17 00:09:56.188 [INFO][4371] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5ddd748a415ca45be339a78cd8f1b01d3c170c3abe79f132299b15b7b570db0c" HandleID="k8s-pod-network.5ddd748a415ca45be339a78cd8f1b01d3c170c3abe79f132299b15b7b570db0c" Workload="ci--4081--3--3--n--e61ddff57a-k8s-coredns--7c65d6cfc9--cvkpr-eth0" May 17 00:09:56.193491 containerd[1482]: 2025-05-17 00:09:56.190 [INFO][4371] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:09:56.193491 containerd[1482]: 2025-05-17 00:09:56.191 [INFO][4364] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5ddd748a415ca45be339a78cd8f1b01d3c170c3abe79f132299b15b7b570db0c" May 17 00:09:56.194602 containerd[1482]: time="2025-05-17T00:09:56.194006140Z" level=info msg="TearDown network for sandbox \"5ddd748a415ca45be339a78cd8f1b01d3c170c3abe79f132299b15b7b570db0c\" successfully" May 17 00:09:56.194602 containerd[1482]: time="2025-05-17T00:09:56.194043181Z" level=info msg="StopPodSandbox for \"5ddd748a415ca45be339a78cd8f1b01d3c170c3abe79f132299b15b7b570db0c\" returns successfully" May 17 00:09:56.195315 containerd[1482]: time="2025-05-17T00:09:56.195276884Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-cvkpr,Uid:65842dcb-5f73-429f-9e8d-0092e98ecc3e,Namespace:kube-system,Attempt:1,}" May 17 00:09:56.216229 systemd[1]: run-netns-cni\x2d937a8d9e\x2d71c4\x2d3028\x2d67a7\x2dd900c100ca7a.mount: Deactivated successfully. 
May 17 00:09:56.347176 systemd-networkd[1384]: calif0f4c78ed31: Link UP May 17 00:09:56.348428 systemd-networkd[1384]: calif0f4c78ed31: Gained carrier May 17 00:09:56.365275 containerd[1482]: 2025-05-17 00:09:56.255 [INFO][4377] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--n--e61ddff57a-k8s-coredns--7c65d6cfc9--cvkpr-eth0 coredns-7c65d6cfc9- kube-system 65842dcb-5f73-429f-9e8d-0092e98ecc3e 917 0 2025-05-17 00:09:16 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-3-n-e61ddff57a coredns-7c65d6cfc9-cvkpr eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif0f4c78ed31 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="ff2a257f8c304b4e91d641e251d31a9716426dea10ca1b94df828529592b303a" Namespace="kube-system" Pod="coredns-7c65d6cfc9-cvkpr" WorkloadEndpoint="ci--4081--3--3--n--e61ddff57a-k8s-coredns--7c65d6cfc9--cvkpr-" May 17 00:09:56.365275 containerd[1482]: 2025-05-17 00:09:56.255 [INFO][4377] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ff2a257f8c304b4e91d641e251d31a9716426dea10ca1b94df828529592b303a" Namespace="kube-system" Pod="coredns-7c65d6cfc9-cvkpr" WorkloadEndpoint="ci--4081--3--3--n--e61ddff57a-k8s-coredns--7c65d6cfc9--cvkpr-eth0" May 17 00:09:56.365275 containerd[1482]: 2025-05-17 00:09:56.284 [INFO][4390] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ff2a257f8c304b4e91d641e251d31a9716426dea10ca1b94df828529592b303a" HandleID="k8s-pod-network.ff2a257f8c304b4e91d641e251d31a9716426dea10ca1b94df828529592b303a" Workload="ci--4081--3--3--n--e61ddff57a-k8s-coredns--7c65d6cfc9--cvkpr-eth0" May 17 00:09:56.365275 containerd[1482]: 2025-05-17 00:09:56.284 [INFO][4390] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ff2a257f8c304b4e91d641e251d31a9716426dea10ca1b94df828529592b303a" HandleID="k8s-pod-network.ff2a257f8c304b4e91d641e251d31a9716426dea10ca1b94df828529592b303a" Workload="ci--4081--3--3--n--e61ddff57a-k8s-coredns--7c65d6cfc9--cvkpr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400022f670), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-3-n-e61ddff57a", "pod":"coredns-7c65d6cfc9-cvkpr", "timestamp":"2025-05-17 00:09:56.284033017 +0000 UTC"}, Hostname:"ci-4081-3-3-n-e61ddff57a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:09:56.365275 containerd[1482]: 2025-05-17 00:09:56.284 [INFO][4390] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:09:56.365275 containerd[1482]: 2025-05-17 00:09:56.284 [INFO][4390] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:09:56.365275 containerd[1482]: 2025-05-17 00:09:56.284 [INFO][4390] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-n-e61ddff57a' May 17 00:09:56.365275 containerd[1482]: 2025-05-17 00:09:56.299 [INFO][4390] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ff2a257f8c304b4e91d641e251d31a9716426dea10ca1b94df828529592b303a" host="ci-4081-3-3-n-e61ddff57a" May 17 00:09:56.365275 containerd[1482]: 2025-05-17 00:09:56.308 [INFO][4390] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-3-n-e61ddff57a" May 17 00:09:56.365275 containerd[1482]: 2025-05-17 00:09:56.316 [INFO][4390] ipam/ipam.go 511: Trying affinity for 192.168.22.0/26 host="ci-4081-3-3-n-e61ddff57a" May 17 00:09:56.365275 containerd[1482]: 2025-05-17 00:09:56.319 [INFO][4390] ipam/ipam.go 158: Attempting to load block cidr=192.168.22.0/26 host="ci-4081-3-3-n-e61ddff57a" May 17 00:09:56.365275 containerd[1482]: 2025-05-17 00:09:56.323 [INFO][4390] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.22.0/26 host="ci-4081-3-3-n-e61ddff57a" May 17 00:09:56.365275 containerd[1482]: 2025-05-17 00:09:56.323 [INFO][4390] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.22.0/26 handle="k8s-pod-network.ff2a257f8c304b4e91d641e251d31a9716426dea10ca1b94df828529592b303a" host="ci-4081-3-3-n-e61ddff57a" May 17 00:09:56.365275 containerd[1482]: 2025-05-17 00:09:56.325 [INFO][4390] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ff2a257f8c304b4e91d641e251d31a9716426dea10ca1b94df828529592b303a May 17 00:09:56.365275 containerd[1482]: 2025-05-17 00:09:56.331 [INFO][4390] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.22.0/26 handle="k8s-pod-network.ff2a257f8c304b4e91d641e251d31a9716426dea10ca1b94df828529592b303a" host="ci-4081-3-3-n-e61ddff57a" May 17 00:09:56.365275 containerd[1482]: 2025-05-17 00:09:56.341 [INFO][4390] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.22.3/26] block=192.168.22.0/26 handle="k8s-pod-network.ff2a257f8c304b4e91d641e251d31a9716426dea10ca1b94df828529592b303a" host="ci-4081-3-3-n-e61ddff57a" May 17 00:09:56.365275 containerd[1482]: 2025-05-17 00:09:56.341 [INFO][4390] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.22.3/26] handle="k8s-pod-network.ff2a257f8c304b4e91d641e251d31a9716426dea10ca1b94df828529592b303a" host="ci-4081-3-3-n-e61ddff57a" May 17 00:09:56.365275 containerd[1482]: 2025-05-17 00:09:56.341 [INFO][4390] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
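One detail worth decoding in the WorkloadEndpoint dumps (above for coredns-r5wvd, below for coredns-cvkpr): the Port fields are printed in hex. 0x35 is 53 (the dns and dns-tcp ports) and 0x23c1 is 9153 (the CoreDNS metrics port):

```go
package main

import "fmt"

func main() {
	// The endpoint dumps encode WorkloadEndpointPort values in hex:
	fmt.Println(0x35, 0x23c1) // 53 9153: dns/dns-tcp and metrics
}
```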
May 17 00:09:56.365275 containerd[1482]: 2025-05-17 00:09:56.341 [INFO][4390] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.22.3/26] IPv6=[] ContainerID="ff2a257f8c304b4e91d641e251d31a9716426dea10ca1b94df828529592b303a" HandleID="k8s-pod-network.ff2a257f8c304b4e91d641e251d31a9716426dea10ca1b94df828529592b303a" Workload="ci--4081--3--3--n--e61ddff57a-k8s-coredns--7c65d6cfc9--cvkpr-eth0" May 17 00:09:56.365946 containerd[1482]: 2025-05-17 00:09:56.343 [INFO][4377] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ff2a257f8c304b4e91d641e251d31a9716426dea10ca1b94df828529592b303a" Namespace="kube-system" Pod="coredns-7c65d6cfc9-cvkpr" WorkloadEndpoint="ci--4081--3--3--n--e61ddff57a-k8s-coredns--7c65d6cfc9--cvkpr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--e61ddff57a-k8s-coredns--7c65d6cfc9--cvkpr-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"65842dcb-5f73-429f-9e8d-0092e98ecc3e", ResourceVersion:"917", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 9, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-e61ddff57a", ContainerID:"", Pod:"coredns-7c65d6cfc9-cvkpr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.22.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif0f4c78ed31", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:09:56.365946 containerd[1482]: 2025-05-17 00:09:56.344 [INFO][4377] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.22.3/32] ContainerID="ff2a257f8c304b4e91d641e251d31a9716426dea10ca1b94df828529592b303a" Namespace="kube-system" Pod="coredns-7c65d6cfc9-cvkpr" WorkloadEndpoint="ci--4081--3--3--n--e61ddff57a-k8s-coredns--7c65d6cfc9--cvkpr-eth0" May 17 00:09:56.365946 containerd[1482]: 2025-05-17 00:09:56.344 [INFO][4377] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif0f4c78ed31 ContainerID="ff2a257f8c304b4e91d641e251d31a9716426dea10ca1b94df828529592b303a" Namespace="kube-system" Pod="coredns-7c65d6cfc9-cvkpr" WorkloadEndpoint="ci--4081--3--3--n--e61ddff57a-k8s-coredns--7c65d6cfc9--cvkpr-eth0" May 17 00:09:56.365946 containerd[1482]: 2025-05-17 00:09:56.348 [INFO][4377] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ff2a257f8c304b4e91d641e251d31a9716426dea10ca1b94df828529592b303a" Namespace="kube-system" 
Pod="coredns-7c65d6cfc9-cvkpr" WorkloadEndpoint="ci--4081--3--3--n--e61ddff57a-k8s-coredns--7c65d6cfc9--cvkpr-eth0" May 17 00:09:56.365946 containerd[1482]: 2025-05-17 00:09:56.349 [INFO][4377] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ff2a257f8c304b4e91d641e251d31a9716426dea10ca1b94df828529592b303a" Namespace="kube-system" Pod="coredns-7c65d6cfc9-cvkpr" WorkloadEndpoint="ci--4081--3--3--n--e61ddff57a-k8s-coredns--7c65d6cfc9--cvkpr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--e61ddff57a-k8s-coredns--7c65d6cfc9--cvkpr-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"65842dcb-5f73-429f-9e8d-0092e98ecc3e", ResourceVersion:"917", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 9, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-e61ddff57a", ContainerID:"ff2a257f8c304b4e91d641e251d31a9716426dea10ca1b94df828529592b303a", Pod:"coredns-7c65d6cfc9-cvkpr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.22.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif0f4c78ed31", MAC:"de:1b:35:f0:51:48", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:09:56.365946 containerd[1482]: 2025-05-17 00:09:56.361 [INFO][4377] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ff2a257f8c304b4e91d641e251d31a9716426dea10ca1b94df828529592b303a" Namespace="kube-system" Pod="coredns-7c65d6cfc9-cvkpr" WorkloadEndpoint="ci--4081--3--3--n--e61ddff57a-k8s-coredns--7c65d6cfc9--cvkpr-eth0" May 17 00:09:56.393475 containerd[1482]: time="2025-05-17T00:09:56.393342494Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:09:56.393837 containerd[1482]: time="2025-05-17T00:09:56.393625019Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:09:56.393837 containerd[1482]: time="2025-05-17T00:09:56.393641539Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:09:56.394183 containerd[1482]: time="2025-05-17T00:09:56.394097348Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:09:56.433360 systemd[1]: Started cri-containerd-ff2a257f8c304b4e91d641e251d31a9716426dea10ca1b94df828529592b303a.scope - libcontainer container ff2a257f8c304b4e91d641e251d31a9716426dea10ca1b94df828529592b303a. May 17 00:09:56.437620 kubelet[2692]: I0517 00:09:56.437550 2692 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-r5wvd" podStartSLOduration=40.437532357 podStartE2EDuration="40.437532357s" podCreationTimestamp="2025-05-17 00:09:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:09:56.433513562 +0000 UTC m=+45.479712965" watchObservedRunningTime="2025-05-17 00:09:56.437532357 +0000 UTC m=+45.483731720" May 17 00:09:56.505527 containerd[1482]: time="2025-05-17T00:09:56.505458382Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-cvkpr,Uid:65842dcb-5f73-429f-9e8d-0092e98ecc3e,Namespace:kube-system,Attempt:1,} returns sandbox id \"ff2a257f8c304b4e91d641e251d31a9716426dea10ca1b94df828529592b303a\"" May 17 00:09:56.516937 containerd[1482]: time="2025-05-17T00:09:56.516706592Z" level=info msg="CreateContainer within sandbox \"ff2a257f8c304b4e91d641e251d31a9716426dea10ca1b94df828529592b303a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 17 00:09:56.536469 containerd[1482]: time="2025-05-17T00:09:56.536330878Z" level=info msg="CreateContainer within sandbox \"ff2a257f8c304b4e91d641e251d31a9716426dea10ca1b94df828529592b303a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8c98c86fd5af0d5574c9f2edcb65715f1ef21314de9e4fdc6572f6bb01db87e2\"" May 17 00:09:56.537853 containerd[1482]: time="2025-05-17T00:09:56.537258335Z" level=info msg="StartContainer for \"8c98c86fd5af0d5574c9f2edcb65715f1ef21314de9e4fdc6572f6bb01db87e2\"" May 17 00:09:56.566201 systemd[1]: Started cri-containerd-8c98c86fd5af0d5574c9f2edcb65715f1ef21314de9e4fdc6572f6bb01db87e2.scope - libcontainer container 8c98c86fd5af0d5574c9f2edcb65715f1ef21314de9e4fdc6572f6bb01db87e2. 
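The pod_startup_latency_tracker line above is plain timestamp arithmetic: with both pull timestamps zeroed (no image pull was needed), podStartSLOduration works out to watchObservedRunningTime minus podCreationTimestamp. Reproducing the number in Go (a sketch of the arithmetic only, not kubelet's tracker):

```go
// slo_sketch.go: recompute podStartSLOduration from the two timestamps.
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05 -0700 MST"
	created, err := time.Parse(layout, "2025-05-17 00:09:16 +0000 UTC")
	if err != nil {
		panic(err)
	}
	// time.Parse accepts fractional seconds even without them in the layout.
	running, err := time.Parse(layout, "2025-05-17 00:09:56.437532357 +0000 UTC")
	if err != nil {
		panic(err)
	}
	fmt.Printf("%.9f\n", running.Sub(created).Seconds()) // 40.437532357
}
```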
May 17 00:09:56.598107 containerd[1482]: time="2025-05-17T00:09:56.598058228Z" level=info msg="StartContainer for \"8c98c86fd5af0d5574c9f2edcb65715f1ef21314de9e4fdc6572f6bb01db87e2\" returns successfully" May 17 00:09:56.711160 systemd-networkd[1384]: calic2abcc195e9: Gained IPv6LL May 17 00:09:57.414400 systemd-networkd[1384]: calif0f4c78ed31: Gained IPv6LL May 17 00:09:58.098722 containerd[1482]: time="2025-05-17T00:09:58.098405889Z" level=info msg="StopPodSandbox for \"5dca6998a34eaafa08777f49af3a784cbe29e9630b06701ab3ec0f1bc4dbc6af\"" May 17 00:09:58.098722 containerd[1482]: time="2025-05-17T00:09:58.098450250Z" level=info msg="StopPodSandbox for \"1470589ad3d85bed50809e5c1d62ce47ce50fd9f34811031c6e14f5e6f9ac753\"" May 17 00:09:58.172882 kubelet[2692]: I0517 00:09:58.171972 2692 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-cvkpr" podStartSLOduration=42.171950217 podStartE2EDuration="42.171950217s" podCreationTimestamp="2025-05-17 00:09:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:09:57.46612608 +0000 UTC m=+46.512325443" watchObservedRunningTime="2025-05-17 00:09:58.171950217 +0000 UTC m=+47.218149580" May 17 00:09:58.236086 containerd[1482]: 2025-05-17 00:09:58.175 [INFO][4507] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1470589ad3d85bed50809e5c1d62ce47ce50fd9f34811031c6e14f5e6f9ac753" May 17 00:09:58.236086 containerd[1482]: 2025-05-17 00:09:58.175 [INFO][4507] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1470589ad3d85bed50809e5c1d62ce47ce50fd9f34811031c6e14f5e6f9ac753" iface="eth0" netns="/var/run/netns/cni-7bb245c0-c449-975d-7c70-c8d9199770f1" May 17 00:09:58.236086 containerd[1482]: 2025-05-17 00:09:58.176 [INFO][4507] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1470589ad3d85bed50809e5c1d62ce47ce50fd9f34811031c6e14f5e6f9ac753" iface="eth0" netns="/var/run/netns/cni-7bb245c0-c449-975d-7c70-c8d9199770f1" May 17 00:09:58.236086 containerd[1482]: 2025-05-17 00:09:58.180 [INFO][4507] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="1470589ad3d85bed50809e5c1d62ce47ce50fd9f34811031c6e14f5e6f9ac753" iface="eth0" netns="/var/run/netns/cni-7bb245c0-c449-975d-7c70-c8d9199770f1" May 17 00:09:58.236086 containerd[1482]: 2025-05-17 00:09:58.184 [INFO][4507] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1470589ad3d85bed50809e5c1d62ce47ce50fd9f34811031c6e14f5e6f9ac753" May 17 00:09:58.236086 containerd[1482]: 2025-05-17 00:09:58.184 [INFO][4507] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1470589ad3d85bed50809e5c1d62ce47ce50fd9f34811031c6e14f5e6f9ac753" May 17 00:09:58.236086 containerd[1482]: 2025-05-17 00:09:58.214 [INFO][4523] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1470589ad3d85bed50809e5c1d62ce47ce50fd9f34811031c6e14f5e6f9ac753" HandleID="k8s-pod-network.1470589ad3d85bed50809e5c1d62ce47ce50fd9f34811031c6e14f5e6f9ac753" Workload="ci--4081--3--3--n--e61ddff57a-k8s-calico--apiserver--75d5dfcff5--dhnhf-eth0" May 17 00:09:58.236086 containerd[1482]: 2025-05-17 00:09:58.215 [INFO][4523] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:09:58.236086 containerd[1482]: 2025-05-17 00:09:58.215 [INFO][4523] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:09:58.236086 containerd[1482]: 2025-05-17 00:09:58.228 [WARNING][4523] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1470589ad3d85bed50809e5c1d62ce47ce50fd9f34811031c6e14f5e6f9ac753" HandleID="k8s-pod-network.1470589ad3d85bed50809e5c1d62ce47ce50fd9f34811031c6e14f5e6f9ac753" Workload="ci--4081--3--3--n--e61ddff57a-k8s-calico--apiserver--75d5dfcff5--dhnhf-eth0" May 17 00:09:58.236086 containerd[1482]: 2025-05-17 00:09:58.228 [INFO][4523] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1470589ad3d85bed50809e5c1d62ce47ce50fd9f34811031c6e14f5e6f9ac753" HandleID="k8s-pod-network.1470589ad3d85bed50809e5c1d62ce47ce50fd9f34811031c6e14f5e6f9ac753" Workload="ci--4081--3--3--n--e61ddff57a-k8s-calico--apiserver--75d5dfcff5--dhnhf-eth0" May 17 00:09:58.236086 containerd[1482]: 2025-05-17 00:09:58.231 [INFO][4523] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:09:58.236086 containerd[1482]: 2025-05-17 00:09:58.234 [INFO][4507] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1470589ad3d85bed50809e5c1d62ce47ce50fd9f34811031c6e14f5e6f9ac753" May 17 00:09:58.239939 containerd[1482]: time="2025-05-17T00:09:58.239880557Z" level=info msg="TearDown network for sandbox \"1470589ad3d85bed50809e5c1d62ce47ce50fd9f34811031c6e14f5e6f9ac753\" successfully" May 17 00:09:58.239939 containerd[1482]: time="2025-05-17T00:09:58.239927038Z" level=info msg="StopPodSandbox for \"1470589ad3d85bed50809e5c1d62ce47ce50fd9f34811031c6e14f5e6f9ac753\" returns successfully" May 17 00:09:58.241950 systemd[1]: run-netns-cni\x2d7bb245c0\x2dc449\x2d975d\x2d7c70\x2dc8d9199770f1.mount: Deactivated successfully. May 17 00:09:58.250753 containerd[1482]: time="2025-05-17T00:09:58.250608363Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-75d5dfcff5-dhnhf,Uid:96303885-efee-4645-a557-34a808ca80dd,Namespace:calico-apiserver,Attempt:1,}" May 17 00:09:58.255798 containerd[1482]: 2025-05-17 00:09:58.178 [INFO][4511] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5dca6998a34eaafa08777f49af3a784cbe29e9630b06701ab3ec0f1bc4dbc6af" May 17 00:09:58.255798 containerd[1482]: 2025-05-17 00:09:58.178 [INFO][4511] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5dca6998a34eaafa08777f49af3a784cbe29e9630b06701ab3ec0f1bc4dbc6af" iface="eth0" netns="/var/run/netns/cni-dc0e1a9f-4533-bfba-a79a-d190b7f59cd5" May 17 00:09:58.255798 containerd[1482]: 2025-05-17 00:09:58.179 [INFO][4511] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5dca6998a34eaafa08777f49af3a784cbe29e9630b06701ab3ec0f1bc4dbc6af" iface="eth0" netns="/var/run/netns/cni-dc0e1a9f-4533-bfba-a79a-d190b7f59cd5" May 17 00:09:58.255798 containerd[1482]: 2025-05-17 00:09:58.181 [INFO][4511] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="5dca6998a34eaafa08777f49af3a784cbe29e9630b06701ab3ec0f1bc4dbc6af" iface="eth0" netns="/var/run/netns/cni-dc0e1a9f-4533-bfba-a79a-d190b7f59cd5" May 17 00:09:58.255798 containerd[1482]: 2025-05-17 00:09:58.181 [INFO][4511] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5dca6998a34eaafa08777f49af3a784cbe29e9630b06701ab3ec0f1bc4dbc6af" May 17 00:09:58.255798 containerd[1482]: 2025-05-17 00:09:58.181 [INFO][4511] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5dca6998a34eaafa08777f49af3a784cbe29e9630b06701ab3ec0f1bc4dbc6af" May 17 00:09:58.255798 containerd[1482]: 2025-05-17 00:09:58.224 [INFO][4521] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5dca6998a34eaafa08777f49af3a784cbe29e9630b06701ab3ec0f1bc4dbc6af" HandleID="k8s-pod-network.5dca6998a34eaafa08777f49af3a784cbe29e9630b06701ab3ec0f1bc4dbc6af" Workload="ci--4081--3--3--n--e61ddff57a-k8s-goldmane--8f77d7b6c--522bq-eth0" May 17 00:09:58.255798 containerd[1482]: 2025-05-17 00:09:58.225 [INFO][4521] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:09:58.255798 containerd[1482]: 2025-05-17 00:09:58.231 [INFO][4521] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:09:58.255798 containerd[1482]: 2025-05-17 00:09:58.249 [WARNING][4521] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5dca6998a34eaafa08777f49af3a784cbe29e9630b06701ab3ec0f1bc4dbc6af" HandleID="k8s-pod-network.5dca6998a34eaafa08777f49af3a784cbe29e9630b06701ab3ec0f1bc4dbc6af" Workload="ci--4081--3--3--n--e61ddff57a-k8s-goldmane--8f77d7b6c--522bq-eth0" May 17 00:09:58.255798 containerd[1482]: 2025-05-17 00:09:58.249 [INFO][4521] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5dca6998a34eaafa08777f49af3a784cbe29e9630b06701ab3ec0f1bc4dbc6af" HandleID="k8s-pod-network.5dca6998a34eaafa08777f49af3a784cbe29e9630b06701ab3ec0f1bc4dbc6af" Workload="ci--4081--3--3--n--e61ddff57a-k8s-goldmane--8f77d7b6c--522bq-eth0" May 17 00:09:58.255798 containerd[1482]: 2025-05-17 00:09:58.251 [INFO][4521] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:09:58.255798 containerd[1482]: 2025-05-17 00:09:58.253 [INFO][4511] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5dca6998a34eaafa08777f49af3a784cbe29e9630b06701ab3ec0f1bc4dbc6af" May 17 00:09:58.256360 containerd[1482]: time="2025-05-17T00:09:58.255912384Z" level=info msg="TearDown network for sandbox \"5dca6998a34eaafa08777f49af3a784cbe29e9630b06701ab3ec0f1bc4dbc6af\" successfully" May 17 00:09:58.256360 containerd[1482]: time="2025-05-17T00:09:58.255942825Z" level=info msg="StopPodSandbox for \"5dca6998a34eaafa08777f49af3a784cbe29e9630b06701ab3ec0f1bc4dbc6af\" returns successfully" May 17 00:09:58.257410 containerd[1482]: time="2025-05-17T00:09:58.257100367Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-8f77d7b6c-522bq,Uid:c2a53ade-ef79-46ee-b2b9-636e3cc942be,Namespace:calico-system,Attempt:1,}" May 17 00:09:58.260737 systemd[1]: run-netns-cni\x2ddc0e1a9f\x2d4533\x2dbfba\x2da79a\x2dd190b7f59cd5.mount: Deactivated successfully. 
May 17 00:09:58.480522 systemd-networkd[1384]: cali8a9886a35df: Link UP May 17 00:09:58.482749 systemd-networkd[1384]: cali8a9886a35df: Gained carrier May 17 00:09:58.506348 containerd[1482]: 2025-05-17 00:09:58.335 [INFO][4535] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--n--e61ddff57a-k8s-calico--apiserver--75d5dfcff5--dhnhf-eth0 calico-apiserver-75d5dfcff5- calico-apiserver 96303885-efee-4645-a557-34a808ca80dd 940 0 2025-05-17 00:09:28 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:75d5dfcff5 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-3-n-e61ddff57a calico-apiserver-75d5dfcff5-dhnhf eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali8a9886a35df [] [] }} ContainerID="5040c502408a5c3c469fff1f9e22c61b1c36fe2d19574300e62dc419109b5e70" Namespace="calico-apiserver" Pod="calico-apiserver-75d5dfcff5-dhnhf" WorkloadEndpoint="ci--4081--3--3--n--e61ddff57a-k8s-calico--apiserver--75d5dfcff5--dhnhf-" May 17 00:09:58.506348 containerd[1482]: 2025-05-17 00:09:58.336 [INFO][4535] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5040c502408a5c3c469fff1f9e22c61b1c36fe2d19574300e62dc419109b5e70" Namespace="calico-apiserver" Pod="calico-apiserver-75d5dfcff5-dhnhf" WorkloadEndpoint="ci--4081--3--3--n--e61ddff57a-k8s-calico--apiserver--75d5dfcff5--dhnhf-eth0" May 17 00:09:58.506348 containerd[1482]: 2025-05-17 00:09:58.388 [INFO][4558] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5040c502408a5c3c469fff1f9e22c61b1c36fe2d19574300e62dc419109b5e70" HandleID="k8s-pod-network.5040c502408a5c3c469fff1f9e22c61b1c36fe2d19574300e62dc419109b5e70" Workload="ci--4081--3--3--n--e61ddff57a-k8s-calico--apiserver--75d5dfcff5--dhnhf-eth0" May 17 00:09:58.506348 containerd[1482]: 2025-05-17 00:09:58.390 [INFO][4558] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5040c502408a5c3c469fff1f9e22c61b1c36fe2d19574300e62dc419109b5e70" HandleID="k8s-pod-network.5040c502408a5c3c469fff1f9e22c61b1c36fe2d19574300e62dc419109b5e70" Workload="ci--4081--3--3--n--e61ddff57a-k8s-calico--apiserver--75d5dfcff5--dhnhf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000331030), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-3-n-e61ddff57a", "pod":"calico-apiserver-75d5dfcff5-dhnhf", "timestamp":"2025-05-17 00:09:58.38837368 +0000 UTC"}, Hostname:"ci-4081-3-3-n-e61ddff57a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:09:58.506348 containerd[1482]: 2025-05-17 00:09:58.391 [INFO][4558] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:09:58.506348 containerd[1482]: 2025-05-17 00:09:58.391 [INFO][4558] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:09:58.506348 containerd[1482]: 2025-05-17 00:09:58.391 [INFO][4558] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-n-e61ddff57a' May 17 00:09:58.506348 containerd[1482]: 2025-05-17 00:09:58.408 [INFO][4558] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5040c502408a5c3c469fff1f9e22c61b1c36fe2d19574300e62dc419109b5e70" host="ci-4081-3-3-n-e61ddff57a" May 17 00:09:58.506348 containerd[1482]: 2025-05-17 00:09:58.418 [INFO][4558] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-3-n-e61ddff57a" May 17 00:09:58.506348 containerd[1482]: 2025-05-17 00:09:58.427 [INFO][4558] ipam/ipam.go 511: Trying affinity for 192.168.22.0/26 host="ci-4081-3-3-n-e61ddff57a" May 17 00:09:58.506348 containerd[1482]: 2025-05-17 00:09:58.431 [INFO][4558] ipam/ipam.go 158: Attempting to load block cidr=192.168.22.0/26 host="ci-4081-3-3-n-e61ddff57a" May 17 00:09:58.506348 containerd[1482]: 2025-05-17 00:09:58.436 [INFO][4558] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.22.0/26 host="ci-4081-3-3-n-e61ddff57a" May 17 00:09:58.506348 containerd[1482]: 2025-05-17 00:09:58.436 [INFO][4558] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.22.0/26 handle="k8s-pod-network.5040c502408a5c3c469fff1f9e22c61b1c36fe2d19574300e62dc419109b5e70" host="ci-4081-3-3-n-e61ddff57a" May 17 00:09:58.506348 containerd[1482]: 2025-05-17 00:09:58.438 [INFO][4558] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.5040c502408a5c3c469fff1f9e22c61b1c36fe2d19574300e62dc419109b5e70 May 17 00:09:58.506348 containerd[1482]: 2025-05-17 00:09:58.450 [INFO][4558] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.22.0/26 handle="k8s-pod-network.5040c502408a5c3c469fff1f9e22c61b1c36fe2d19574300e62dc419109b5e70" host="ci-4081-3-3-n-e61ddff57a" May 17 00:09:58.506348 containerd[1482]: 2025-05-17 00:09:58.467 [INFO][4558] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.22.4/26] block=192.168.22.0/26 handle="k8s-pod-network.5040c502408a5c3c469fff1f9e22c61b1c36fe2d19574300e62dc419109b5e70" host="ci-4081-3-3-n-e61ddff57a" May 17 00:09:58.506348 containerd[1482]: 2025-05-17 00:09:58.467 [INFO][4558] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.22.4/26] handle="k8s-pod-network.5040c502408a5c3c469fff1f9e22c61b1c36fe2d19574300e62dc419109b5e70" host="ci-4081-3-3-n-e61ddff57a" May 17 00:09:58.506348 containerd[1482]: 2025-05-17 00:09:58.467 [INFO][4558] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 17 00:09:58.506348 containerd[1482]: 2025-05-17 00:09:58.468 [INFO][4558] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.22.4/26] IPv6=[] ContainerID="5040c502408a5c3c469fff1f9e22c61b1c36fe2d19574300e62dc419109b5e70" HandleID="k8s-pod-network.5040c502408a5c3c469fff1f9e22c61b1c36fe2d19574300e62dc419109b5e70" Workload="ci--4081--3--3--n--e61ddff57a-k8s-calico--apiserver--75d5dfcff5--dhnhf-eth0" May 17 00:09:58.509148 containerd[1482]: 2025-05-17 00:09:58.472 [INFO][4535] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5040c502408a5c3c469fff1f9e22c61b1c36fe2d19574300e62dc419109b5e70" Namespace="calico-apiserver" Pod="calico-apiserver-75d5dfcff5-dhnhf" WorkloadEndpoint="ci--4081--3--3--n--e61ddff57a-k8s-calico--apiserver--75d5dfcff5--dhnhf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--e61ddff57a-k8s-calico--apiserver--75d5dfcff5--dhnhf-eth0", GenerateName:"calico-apiserver-75d5dfcff5-", Namespace:"calico-apiserver", SelfLink:"", UID:"96303885-efee-4645-a557-34a808ca80dd", ResourceVersion:"940", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 9, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"75d5dfcff5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-e61ddff57a", ContainerID:"", Pod:"calico-apiserver-75d5dfcff5-dhnhf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.22.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8a9886a35df", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:09:58.509148 containerd[1482]: 2025-05-17 00:09:58.473 [INFO][4535] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.22.4/32] ContainerID="5040c502408a5c3c469fff1f9e22c61b1c36fe2d19574300e62dc419109b5e70" Namespace="calico-apiserver" Pod="calico-apiserver-75d5dfcff5-dhnhf" WorkloadEndpoint="ci--4081--3--3--n--e61ddff57a-k8s-calico--apiserver--75d5dfcff5--dhnhf-eth0" May 17 00:09:58.509148 containerd[1482]: 2025-05-17 00:09:58.473 [INFO][4535] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8a9886a35df ContainerID="5040c502408a5c3c469fff1f9e22c61b1c36fe2d19574300e62dc419109b5e70" Namespace="calico-apiserver" Pod="calico-apiserver-75d5dfcff5-dhnhf" WorkloadEndpoint="ci--4081--3--3--n--e61ddff57a-k8s-calico--apiserver--75d5dfcff5--dhnhf-eth0" May 17 00:09:58.509148 containerd[1482]: 2025-05-17 00:09:58.479 [INFO][4535] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5040c502408a5c3c469fff1f9e22c61b1c36fe2d19574300e62dc419109b5e70" Namespace="calico-apiserver" Pod="calico-apiserver-75d5dfcff5-dhnhf" WorkloadEndpoint="ci--4081--3--3--n--e61ddff57a-k8s-calico--apiserver--75d5dfcff5--dhnhf-eth0" May 17 00:09:58.509148 containerd[1482]: 2025-05-17 00:09:58.484 
[INFO][4535] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5040c502408a5c3c469fff1f9e22c61b1c36fe2d19574300e62dc419109b5e70" Namespace="calico-apiserver" Pod="calico-apiserver-75d5dfcff5-dhnhf" WorkloadEndpoint="ci--4081--3--3--n--e61ddff57a-k8s-calico--apiserver--75d5dfcff5--dhnhf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--e61ddff57a-k8s-calico--apiserver--75d5dfcff5--dhnhf-eth0", GenerateName:"calico-apiserver-75d5dfcff5-", Namespace:"calico-apiserver", SelfLink:"", UID:"96303885-efee-4645-a557-34a808ca80dd", ResourceVersion:"940", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 9, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"75d5dfcff5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-e61ddff57a", ContainerID:"5040c502408a5c3c469fff1f9e22c61b1c36fe2d19574300e62dc419109b5e70", Pod:"calico-apiserver-75d5dfcff5-dhnhf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.22.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8a9886a35df", MAC:"b6:85:9b:3a:09:20", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:09:58.509148 containerd[1482]: 2025-05-17 00:09:58.500 [INFO][4535] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5040c502408a5c3c469fff1f9e22c61b1c36fe2d19574300e62dc419109b5e70" Namespace="calico-apiserver" Pod="calico-apiserver-75d5dfcff5-dhnhf" WorkloadEndpoint="ci--4081--3--3--n--e61ddff57a-k8s-calico--apiserver--75d5dfcff5--dhnhf-eth0" May 17 00:09:58.558799 containerd[1482]: time="2025-05-17T00:09:58.557752562Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:09:58.558799 containerd[1482]: time="2025-05-17T00:09:58.557865845Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:09:58.558799 containerd[1482]: time="2025-05-17T00:09:58.557877925Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:09:58.558799 containerd[1482]: time="2025-05-17T00:09:58.557972087Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:09:58.594120 systemd[1]: Started cri-containerd-5040c502408a5c3c469fff1f9e22c61b1c36fe2d19574300e62dc419109b5e70.scope - libcontainer container 5040c502408a5c3c469fff1f9e22c61b1c36fe2d19574300e62dc419109b5e70. 
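This is the third RunPodSandbox round in the dump, and the 64-hex-character sandbox IDs are painful to track by eye. A small hypothetical helper (the regex is tuned to this journal's containerd message format and assumes one entry per line; it is not a standard tool) that maps pod names to sandbox IDs when a dump like this is piped through stdin:

```go
// grep_sandbox.go: extract "RunPodSandbox ... returns sandbox id" pairs.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

var re = regexp.MustCompile(
	`RunPodSandbox for &PodSandboxMetadata\{Name:([^,]+),.*? returns sandbox id \\"([0-9a-f]+)\\"`)

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines are long
	for sc.Scan() {
		if m := re.FindStringSubmatch(sc.Text()); m != nil {
			fmt.Printf("%s -> %s\n", m[1], m[2]) // pod name -> sandbox ID
		}
	}
}
```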
May 17 00:09:58.602586 systemd-networkd[1384]: cali43538a0f6e5: Link UP May 17 00:09:58.603486 systemd-networkd[1384]: cali43538a0f6e5: Gained carrier May 17 00:09:58.626540 containerd[1482]: 2025-05-17 00:09:58.342 [INFO][4539] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--n--e61ddff57a-k8s-goldmane--8f77d7b6c--522bq-eth0 goldmane-8f77d7b6c- calico-system c2a53ade-ef79-46ee-b2b9-636e3cc942be 941 0 2025-05-17 00:09:34 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:8f77d7b6c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081-3-3-n-e61ddff57a goldmane-8f77d7b6c-522bq eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali43538a0f6e5 [] [] }} ContainerID="1dc11b0228c72b6b08389d569cbeae584091380bf8a1bb8589e60e7058e83b73" Namespace="calico-system" Pod="goldmane-8f77d7b6c-522bq" WorkloadEndpoint="ci--4081--3--3--n--e61ddff57a-k8s-goldmane--8f77d7b6c--522bq-" May 17 00:09:58.626540 containerd[1482]: 2025-05-17 00:09:58.342 [INFO][4539] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1dc11b0228c72b6b08389d569cbeae584091380bf8a1bb8589e60e7058e83b73" Namespace="calico-system" Pod="goldmane-8f77d7b6c-522bq" WorkloadEndpoint="ci--4081--3--3--n--e61ddff57a-k8s-goldmane--8f77d7b6c--522bq-eth0" May 17 00:09:58.626540 containerd[1482]: 2025-05-17 00:09:58.400 [INFO][4563] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1dc11b0228c72b6b08389d569cbeae584091380bf8a1bb8589e60e7058e83b73" HandleID="k8s-pod-network.1dc11b0228c72b6b08389d569cbeae584091380bf8a1bb8589e60e7058e83b73" Workload="ci--4081--3--3--n--e61ddff57a-k8s-goldmane--8f77d7b6c--522bq-eth0" May 17 00:09:58.626540 containerd[1482]: 2025-05-17 00:09:58.400 [INFO][4563] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1dc11b0228c72b6b08389d569cbeae584091380bf8a1bb8589e60e7058e83b73" HandleID="k8s-pod-network.1dc11b0228c72b6b08389d569cbeae584091380bf8a1bb8589e60e7058e83b73" Workload="ci--4081--3--3--n--e61ddff57a-k8s-goldmane--8f77d7b6c--522bq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d7040), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-3-n-e61ddff57a", "pod":"goldmane-8f77d7b6c-522bq", "timestamp":"2025-05-17 00:09:58.400053584 +0000 UTC"}, Hostname:"ci-4081-3-3-n-e61ddff57a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:09:58.626540 containerd[1482]: 2025-05-17 00:09:58.400 [INFO][4563] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:09:58.626540 containerd[1482]: 2025-05-17 00:09:58.467 [INFO][4563] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:09:58.626540 containerd[1482]: 2025-05-17 00:09:58.468 [INFO][4563] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-n-e61ddff57a' May 17 00:09:58.626540 containerd[1482]: 2025-05-17 00:09:58.515 [INFO][4563] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1dc11b0228c72b6b08389d569cbeae584091380bf8a1bb8589e60e7058e83b73" host="ci-4081-3-3-n-e61ddff57a" May 17 00:09:58.626540 containerd[1482]: 2025-05-17 00:09:58.524 [INFO][4563] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-3-n-e61ddff57a" May 17 00:09:58.626540 containerd[1482]: 2025-05-17 00:09:58.543 [INFO][4563] ipam/ipam.go 511: Trying affinity for 192.168.22.0/26 host="ci-4081-3-3-n-e61ddff57a" May 17 00:09:58.626540 containerd[1482]: 2025-05-17 00:09:58.550 [INFO][4563] ipam/ipam.go 158: Attempting to load block cidr=192.168.22.0/26 host="ci-4081-3-3-n-e61ddff57a" May 17 00:09:58.626540 containerd[1482]: 2025-05-17 00:09:58.558 [INFO][4563] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.22.0/26 host="ci-4081-3-3-n-e61ddff57a" May 17 00:09:58.626540 containerd[1482]: 2025-05-17 00:09:58.559 [INFO][4563] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.22.0/26 handle="k8s-pod-network.1dc11b0228c72b6b08389d569cbeae584091380bf8a1bb8589e60e7058e83b73" host="ci-4081-3-3-n-e61ddff57a" May 17 00:09:58.626540 containerd[1482]: 2025-05-17 00:09:58.563 [INFO][4563] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.1dc11b0228c72b6b08389d569cbeae584091380bf8a1bb8589e60e7058e83b73 May 17 00:09:58.626540 containerd[1482]: 2025-05-17 00:09:58.574 [INFO][4563] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.22.0/26 handle="k8s-pod-network.1dc11b0228c72b6b08389d569cbeae584091380bf8a1bb8589e60e7058e83b73" host="ci-4081-3-3-n-e61ddff57a" May 17 00:09:58.626540 containerd[1482]: 2025-05-17 00:09:58.593 [INFO][4563] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.22.5/26] block=192.168.22.0/26 handle="k8s-pod-network.1dc11b0228c72b6b08389d569cbeae584091380bf8a1bb8589e60e7058e83b73" host="ci-4081-3-3-n-e61ddff57a" May 17 00:09:58.626540 containerd[1482]: 2025-05-17 00:09:58.593 [INFO][4563] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.22.5/26] handle="k8s-pod-network.1dc11b0228c72b6b08389d569cbeae584091380bf8a1bb8589e60e7058e83b73" host="ci-4081-3-3-n-e61ddff57a" May 17 00:09:58.626540 containerd[1482]: 2025-05-17 00:09:58.593 [INFO][4563] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
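The trace above walks Calico's IPAM path end to end: acquire the host-wide lock, confirm the node's affinity for block 192.168.22.0/26, load the block, claim one address (192.168.22.5 for goldmane-8f77d7b6c-522bq), write the block back to claim the IP, and release the lock. A minimal stdlib-Go model of the claim step, assuming simple first-free scanning; the real allocator tracks allocations in a per-block structure and persists the claim with a compare-and-swap write to the datastore, which this sketch omits:

```go
package main

import (
	"fmt"
	"net/netip"
)

// block is a simplified model of one affine IPAM block on a node.
type block struct {
	cidr      netip.Prefix
	allocated map[netip.Addr]string // address -> handle ID
}

// claim records the first free address in the block under the given handle.
func (b *block) claim(handle string) (netip.Addr, bool) {
	for a := b.cidr.Addr(); b.cidr.Contains(a); a = a.Next() {
		if _, used := b.allocated[a]; !used {
			b.allocated[a] = handle
			return a, true
		}
	}
	return netip.Addr{}, false
}

func main() {
	b := &block{
		cidr:      netip.MustParsePrefix("192.168.22.0/26"),
		allocated: map[netip.Addr]string{},
	}
	// Addresses already in use on this node (the apiserver pod above holds .4).
	for _, s := range []string{"192.168.22.0", "192.168.22.1", "192.168.22.2", "192.168.22.3", "192.168.22.4"} {
		b.allocated[netip.MustParseAddr(s)] = "existing"
	}
	ip, _ := b.claim("k8s-pod-network.1dc11b02")
	fmt.Println(ip) // 192.168.22.5, matching the goldmane pod's assignment
}
```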
May 17 00:09:58.626540 containerd[1482]: 2025-05-17 00:09:58.593 [INFO][4563] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.22.5/26] IPv6=[] ContainerID="1dc11b0228c72b6b08389d569cbeae584091380bf8a1bb8589e60e7058e83b73" HandleID="k8s-pod-network.1dc11b0228c72b6b08389d569cbeae584091380bf8a1bb8589e60e7058e83b73" Workload="ci--4081--3--3--n--e61ddff57a-k8s-goldmane--8f77d7b6c--522bq-eth0" May 17 00:09:58.629199 containerd[1482]: 2025-05-17 00:09:58.598 [INFO][4539] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1dc11b0228c72b6b08389d569cbeae584091380bf8a1bb8589e60e7058e83b73" Namespace="calico-system" Pod="goldmane-8f77d7b6c-522bq" WorkloadEndpoint="ci--4081--3--3--n--e61ddff57a-k8s-goldmane--8f77d7b6c--522bq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--e61ddff57a-k8s-goldmane--8f77d7b6c--522bq-eth0", GenerateName:"goldmane-8f77d7b6c-", Namespace:"calico-system", SelfLink:"", UID:"c2a53ade-ef79-46ee-b2b9-636e3cc942be", ResourceVersion:"941", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 9, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"8f77d7b6c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-e61ddff57a", ContainerID:"", Pod:"goldmane-8f77d7b6c-522bq", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.22.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali43538a0f6e5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:09:58.629199 containerd[1482]: 2025-05-17 00:09:58.598 [INFO][4539] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.22.5/32] ContainerID="1dc11b0228c72b6b08389d569cbeae584091380bf8a1bb8589e60e7058e83b73" Namespace="calico-system" Pod="goldmane-8f77d7b6c-522bq" WorkloadEndpoint="ci--4081--3--3--n--e61ddff57a-k8s-goldmane--8f77d7b6c--522bq-eth0" May 17 00:09:58.629199 containerd[1482]: 2025-05-17 00:09:58.598 [INFO][4539] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali43538a0f6e5 ContainerID="1dc11b0228c72b6b08389d569cbeae584091380bf8a1bb8589e60e7058e83b73" Namespace="calico-system" Pod="goldmane-8f77d7b6c-522bq" WorkloadEndpoint="ci--4081--3--3--n--e61ddff57a-k8s-goldmane--8f77d7b6c--522bq-eth0" May 17 00:09:58.629199 containerd[1482]: 2025-05-17 00:09:58.606 [INFO][4539] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1dc11b0228c72b6b08389d569cbeae584091380bf8a1bb8589e60e7058e83b73" Namespace="calico-system" Pod="goldmane-8f77d7b6c-522bq" WorkloadEndpoint="ci--4081--3--3--n--e61ddff57a-k8s-goldmane--8f77d7b6c--522bq-eth0" May 17 00:09:58.629199 containerd[1482]: 2025-05-17 00:09:58.606 [INFO][4539] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1dc11b0228c72b6b08389d569cbeae584091380bf8a1bb8589e60e7058e83b73" Namespace="calico-system" 
Pod="goldmane-8f77d7b6c-522bq" WorkloadEndpoint="ci--4081--3--3--n--e61ddff57a-k8s-goldmane--8f77d7b6c--522bq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--e61ddff57a-k8s-goldmane--8f77d7b6c--522bq-eth0", GenerateName:"goldmane-8f77d7b6c-", Namespace:"calico-system", SelfLink:"", UID:"c2a53ade-ef79-46ee-b2b9-636e3cc942be", ResourceVersion:"941", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 9, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"8f77d7b6c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-e61ddff57a", ContainerID:"1dc11b0228c72b6b08389d569cbeae584091380bf8a1bb8589e60e7058e83b73", Pod:"goldmane-8f77d7b6c-522bq", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.22.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali43538a0f6e5", MAC:"5e:99:be:94:fa:c6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:09:58.629199 containerd[1482]: 2025-05-17 00:09:58.622 [INFO][4539] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1dc11b0228c72b6b08389d569cbeae584091380bf8a1bb8589e60e7058e83b73" Namespace="calico-system" Pod="goldmane-8f77d7b6c-522bq" WorkloadEndpoint="ci--4081--3--3--n--e61ddff57a-k8s-goldmane--8f77d7b6c--522bq-eth0" May 17 00:09:58.673375 containerd[1482]: time="2025-05-17T00:09:58.673328575Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-75d5dfcff5-dhnhf,Uid:96303885-efee-4645-a557-34a808ca80dd,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"5040c502408a5c3c469fff1f9e22c61b1c36fe2d19574300e62dc419109b5e70\"" May 17 00:09:58.676860 containerd[1482]: time="2025-05-17T00:09:58.676344033Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\"" May 17 00:09:58.688126 containerd[1482]: time="2025-05-17T00:09:58.687072198Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:09:58.688126 containerd[1482]: time="2025-05-17T00:09:58.687131839Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:09:58.688126 containerd[1482]: time="2025-05-17T00:09:58.687143159Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:09:58.688907 containerd[1482]: time="2025-05-17T00:09:58.688630308Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:09:58.717810 systemd[1]: Started cri-containerd-1dc11b0228c72b6b08389d569cbeae584091380bf8a1bb8589e60e7058e83b73.scope - libcontainer container 1dc11b0228c72b6b08389d569cbeae584091380bf8a1bb8589e60e7058e83b73. 
May 17 00:09:58.764429 containerd[1482]: time="2025-05-17T00:09:58.764289796Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-8f77d7b6c-522bq,Uid:c2a53ade-ef79-46ee-b2b9-636e3cc942be,Namespace:calico-system,Attempt:1,} returns sandbox id \"1dc11b0228c72b6b08389d569cbeae584091380bf8a1bb8589e60e7058e83b73\"" May 17 00:09:59.103024 containerd[1482]: time="2025-05-17T00:09:59.101185750Z" level=info msg="StopPodSandbox for \"b8d19c2f1a2287dae8c07d25aeda4f0e4d19383d3ae94d2775f9fcd14dbc2f3b\"" May 17 00:09:59.103024 containerd[1482]: time="2025-05-17T00:09:59.102471975Z" level=info msg="StopPodSandbox for \"e4f2591371d3edd791256b5ae2cc9735ba80f85716d9d7c0b9c83f58f663d830\"" May 17 00:09:59.233183 containerd[1482]: 2025-05-17 00:09:59.179 [INFO][4701] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e4f2591371d3edd791256b5ae2cc9735ba80f85716d9d7c0b9c83f58f663d830" May 17 00:09:59.233183 containerd[1482]: 2025-05-17 00:09:59.181 [INFO][4701] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e4f2591371d3edd791256b5ae2cc9735ba80f85716d9d7c0b9c83f58f663d830" iface="eth0" netns="/var/run/netns/cni-baf123fd-9e80-15ff-1200-44dcc13cda27" May 17 00:09:59.233183 containerd[1482]: 2025-05-17 00:09:59.182 [INFO][4701] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e4f2591371d3edd791256b5ae2cc9735ba80f85716d9d7c0b9c83f58f663d830" iface="eth0" netns="/var/run/netns/cni-baf123fd-9e80-15ff-1200-44dcc13cda27" May 17 00:09:59.233183 containerd[1482]: 2025-05-17 00:09:59.183 [INFO][4701] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e4f2591371d3edd791256b5ae2cc9735ba80f85716d9d7c0b9c83f58f663d830" iface="eth0" netns="/var/run/netns/cni-baf123fd-9e80-15ff-1200-44dcc13cda27" May 17 00:09:59.233183 containerd[1482]: 2025-05-17 00:09:59.183 [INFO][4701] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e4f2591371d3edd791256b5ae2cc9735ba80f85716d9d7c0b9c83f58f663d830" May 17 00:09:59.233183 containerd[1482]: 2025-05-17 00:09:59.183 [INFO][4701] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e4f2591371d3edd791256b5ae2cc9735ba80f85716d9d7c0b9c83f58f663d830" May 17 00:09:59.233183 containerd[1482]: 2025-05-17 00:09:59.211 [INFO][4715] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e4f2591371d3edd791256b5ae2cc9735ba80f85716d9d7c0b9c83f58f663d830" HandleID="k8s-pod-network.e4f2591371d3edd791256b5ae2cc9735ba80f85716d9d7c0b9c83f58f663d830" Workload="ci--4081--3--3--n--e61ddff57a-k8s-calico--kube--controllers--b7fc8b74f--tr4ng-eth0" May 17 00:09:59.233183 containerd[1482]: 2025-05-17 00:09:59.211 [INFO][4715] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:09:59.233183 containerd[1482]: 2025-05-17 00:09:59.211 [INFO][4715] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:09:59.233183 containerd[1482]: 2025-05-17 00:09:59.227 [WARNING][4715] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e4f2591371d3edd791256b5ae2cc9735ba80f85716d9d7c0b9c83f58f663d830" HandleID="k8s-pod-network.e4f2591371d3edd791256b5ae2cc9735ba80f85716d9d7c0b9c83f58f663d830" Workload="ci--4081--3--3--n--e61ddff57a-k8s-calico--kube--controllers--b7fc8b74f--tr4ng-eth0" May 17 00:09:59.233183 containerd[1482]: 2025-05-17 00:09:59.227 [INFO][4715] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e4f2591371d3edd791256b5ae2cc9735ba80f85716d9d7c0b9c83f58f663d830" HandleID="k8s-pod-network.e4f2591371d3edd791256b5ae2cc9735ba80f85716d9d7c0b9c83f58f663d830" Workload="ci--4081--3--3--n--e61ddff57a-k8s-calico--kube--controllers--b7fc8b74f--tr4ng-eth0" May 17 00:09:59.233183 containerd[1482]: 2025-05-17 00:09:59.229 [INFO][4715] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:09:59.233183 containerd[1482]: 2025-05-17 00:09:59.231 [INFO][4701] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e4f2591371d3edd791256b5ae2cc9735ba80f85716d9d7c0b9c83f58f663d830" May 17 00:09:59.233924 containerd[1482]: time="2025-05-17T00:09:59.233423114Z" level=info msg="TearDown network for sandbox \"e4f2591371d3edd791256b5ae2cc9735ba80f85716d9d7c0b9c83f58f663d830\" successfully" May 17 00:09:59.233924 containerd[1482]: time="2025-05-17T00:09:59.233454794Z" level=info msg="StopPodSandbox for \"e4f2591371d3edd791256b5ae2cc9735ba80f85716d9d7c0b9c83f58f663d830\" returns successfully" May 17 00:09:59.235579 containerd[1482]: time="2025-05-17T00:09:59.235406752Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-b7fc8b74f-tr4ng,Uid:b9fba4a9-b127-4dfe-8a91-d1118962d299,Namespace:calico-system,Attempt:1,}" May 17 00:09:59.243299 systemd[1]: run-netns-cni\x2dbaf123fd\x2d9e80\x2d15ff\x2d1200\x2d44dcc13cda27.mount: Deactivated successfully. May 17 00:09:59.254033 containerd[1482]: 2025-05-17 00:09:59.181 [INFO][4700] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b8d19c2f1a2287dae8c07d25aeda4f0e4d19383d3ae94d2775f9fcd14dbc2f3b" May 17 00:09:59.254033 containerd[1482]: 2025-05-17 00:09:59.183 [INFO][4700] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b8d19c2f1a2287dae8c07d25aeda4f0e4d19383d3ae94d2775f9fcd14dbc2f3b" iface="eth0" netns="/var/run/netns/cni-235caec6-e264-e294-5024-6c9c88a5c362" May 17 00:09:59.254033 containerd[1482]: 2025-05-17 00:09:59.183 [INFO][4700] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b8d19c2f1a2287dae8c07d25aeda4f0e4d19383d3ae94d2775f9fcd14dbc2f3b" iface="eth0" netns="/var/run/netns/cni-235caec6-e264-e294-5024-6c9c88a5c362" May 17 00:09:59.254033 containerd[1482]: 2025-05-17 00:09:59.184 [INFO][4700] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="b8d19c2f1a2287dae8c07d25aeda4f0e4d19383d3ae94d2775f9fcd14dbc2f3b" iface="eth0" netns="/var/run/netns/cni-235caec6-e264-e294-5024-6c9c88a5c362" May 17 00:09:59.254033 containerd[1482]: 2025-05-17 00:09:59.184 [INFO][4700] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b8d19c2f1a2287dae8c07d25aeda4f0e4d19383d3ae94d2775f9fcd14dbc2f3b" May 17 00:09:59.254033 containerd[1482]: 2025-05-17 00:09:59.184 [INFO][4700] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b8d19c2f1a2287dae8c07d25aeda4f0e4d19383d3ae94d2775f9fcd14dbc2f3b" May 17 00:09:59.254033 containerd[1482]: 2025-05-17 00:09:59.222 [INFO][4720] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b8d19c2f1a2287dae8c07d25aeda4f0e4d19383d3ae94d2775f9fcd14dbc2f3b" HandleID="k8s-pod-network.b8d19c2f1a2287dae8c07d25aeda4f0e4d19383d3ae94d2775f9fcd14dbc2f3b" Workload="ci--4081--3--3--n--e61ddff57a-k8s-csi--node--driver--hhzhx-eth0" May 17 00:09:59.254033 containerd[1482]: 2025-05-17 00:09:59.223 [INFO][4720] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:09:59.254033 containerd[1482]: 2025-05-17 00:09:59.229 [INFO][4720] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:09:59.254033 containerd[1482]: 2025-05-17 00:09:59.245 [WARNING][4720] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b8d19c2f1a2287dae8c07d25aeda4f0e4d19383d3ae94d2775f9fcd14dbc2f3b" HandleID="k8s-pod-network.b8d19c2f1a2287dae8c07d25aeda4f0e4d19383d3ae94d2775f9fcd14dbc2f3b" Workload="ci--4081--3--3--n--e61ddff57a-k8s-csi--node--driver--hhzhx-eth0" May 17 00:09:59.254033 containerd[1482]: 2025-05-17 00:09:59.245 [INFO][4720] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b8d19c2f1a2287dae8c07d25aeda4f0e4d19383d3ae94d2775f9fcd14dbc2f3b" HandleID="k8s-pod-network.b8d19c2f1a2287dae8c07d25aeda4f0e4d19383d3ae94d2775f9fcd14dbc2f3b" Workload="ci--4081--3--3--n--e61ddff57a-k8s-csi--node--driver--hhzhx-eth0" May 17 00:09:59.254033 containerd[1482]: 2025-05-17 00:09:59.250 [INFO][4720] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:09:59.254033 containerd[1482]: 2025-05-17 00:09:59.251 [INFO][4700] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b8d19c2f1a2287dae8c07d25aeda4f0e4d19383d3ae94d2775f9fcd14dbc2f3b" May 17 00:09:59.256012 containerd[1482]: time="2025-05-17T00:09:59.255913750Z" level=info msg="TearDown network for sandbox \"b8d19c2f1a2287dae8c07d25aeda4f0e4d19383d3ae94d2775f9fcd14dbc2f3b\" successfully" May 17 00:09:59.256012 containerd[1482]: time="2025-05-17T00:09:59.255957711Z" level=info msg="StopPodSandbox for \"b8d19c2f1a2287dae8c07d25aeda4f0e4d19383d3ae94d2775f9fcd14dbc2f3b\" returns successfully" May 17 00:09:59.259982 containerd[1482]: time="2025-05-17T00:09:59.259672383Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hhzhx,Uid:35348305-3030-4a37-b275-4a2201a5f384,Namespace:calico-system,Attempt:1,}" May 17 00:09:59.262108 systemd[1]: run-netns-cni\x2d235caec6\x2de264\x2de294\x2d5024\x2d6c9c88a5c362.mount: Deactivated successfully. 
May 17 00:09:59.482128 systemd-networkd[1384]: caliabffd988ed4: Link UP May 17 00:09:59.482825 systemd-networkd[1384]: caliabffd988ed4: Gained carrier May 17 00:09:59.510665 containerd[1482]: 2025-05-17 00:09:59.329 [INFO][4728] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--n--e61ddff57a-k8s-calico--kube--controllers--b7fc8b74f--tr4ng-eth0 calico-kube-controllers-b7fc8b74f- calico-system b9fba4a9-b127-4dfe-8a91-d1118962d299 959 0 2025-05-17 00:09:34 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:b7fc8b74f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-3-3-n-e61ddff57a calico-kube-controllers-b7fc8b74f-tr4ng eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] caliabffd988ed4 [] [] }} ContainerID="d41a6f92ee2bb260d50e241451bb67d468f06fdd990219c0b8d013eff6543ebb" Namespace="calico-system" Pod="calico-kube-controllers-b7fc8b74f-tr4ng" WorkloadEndpoint="ci--4081--3--3--n--e61ddff57a-k8s-calico--kube--controllers--b7fc8b74f--tr4ng-" May 17 00:09:59.510665 containerd[1482]: 2025-05-17 00:09:59.329 [INFO][4728] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d41a6f92ee2bb260d50e241451bb67d468f06fdd990219c0b8d013eff6543ebb" Namespace="calico-system" Pod="calico-kube-controllers-b7fc8b74f-tr4ng" WorkloadEndpoint="ci--4081--3--3--n--e61ddff57a-k8s-calico--kube--controllers--b7fc8b74f--tr4ng-eth0" May 17 00:09:59.510665 containerd[1482]: 2025-05-17 00:09:59.397 [INFO][4753] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d41a6f92ee2bb260d50e241451bb67d468f06fdd990219c0b8d013eff6543ebb" HandleID="k8s-pod-network.d41a6f92ee2bb260d50e241451bb67d468f06fdd990219c0b8d013eff6543ebb" Workload="ci--4081--3--3--n--e61ddff57a-k8s-calico--kube--controllers--b7fc8b74f--tr4ng-eth0" May 17 00:09:59.510665 containerd[1482]: 2025-05-17 00:09:59.398 [INFO][4753] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d41a6f92ee2bb260d50e241451bb67d468f06fdd990219c0b8d013eff6543ebb" HandleID="k8s-pod-network.d41a6f92ee2bb260d50e241451bb67d468f06fdd990219c0b8d013eff6543ebb" Workload="ci--4081--3--3--n--e61ddff57a-k8s-calico--kube--controllers--b7fc8b74f--tr4ng-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400022fd10), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-3-n-e61ddff57a", "pod":"calico-kube-controllers-b7fc8b74f-tr4ng", "timestamp":"2025-05-17 00:09:59.397872462 +0000 UTC"}, Hostname:"ci-4081-3-3-n-e61ddff57a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:09:59.510665 containerd[1482]: 2025-05-17 00:09:59.398 [INFO][4753] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:09:59.510665 containerd[1482]: 2025-05-17 00:09:59.398 [INFO][4753] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:09:59.510665 containerd[1482]: 2025-05-17 00:09:59.398 [INFO][4753] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-n-e61ddff57a' May 17 00:09:59.510665 containerd[1482]: 2025-05-17 00:09:59.412 [INFO][4753] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d41a6f92ee2bb260d50e241451bb67d468f06fdd990219c0b8d013eff6543ebb" host="ci-4081-3-3-n-e61ddff57a" May 17 00:09:59.510665 containerd[1482]: 2025-05-17 00:09:59.421 [INFO][4753] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-3-n-e61ddff57a" May 17 00:09:59.510665 containerd[1482]: 2025-05-17 00:09:59.430 [INFO][4753] ipam/ipam.go 511: Trying affinity for 192.168.22.0/26 host="ci-4081-3-3-n-e61ddff57a" May 17 00:09:59.510665 containerd[1482]: 2025-05-17 00:09:59.437 [INFO][4753] ipam/ipam.go 158: Attempting to load block cidr=192.168.22.0/26 host="ci-4081-3-3-n-e61ddff57a" May 17 00:09:59.510665 containerd[1482]: 2025-05-17 00:09:59.444 [INFO][4753] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.22.0/26 host="ci-4081-3-3-n-e61ddff57a" May 17 00:09:59.510665 containerd[1482]: 2025-05-17 00:09:59.445 [INFO][4753] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.22.0/26 handle="k8s-pod-network.d41a6f92ee2bb260d50e241451bb67d468f06fdd990219c0b8d013eff6543ebb" host="ci-4081-3-3-n-e61ddff57a" May 17 00:09:59.510665 containerd[1482]: 2025-05-17 00:09:59.448 [INFO][4753] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d41a6f92ee2bb260d50e241451bb67d468f06fdd990219c0b8d013eff6543ebb May 17 00:09:59.510665 containerd[1482]: 2025-05-17 00:09:59.454 [INFO][4753] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.22.0/26 handle="k8s-pod-network.d41a6f92ee2bb260d50e241451bb67d468f06fdd990219c0b8d013eff6543ebb" host="ci-4081-3-3-n-e61ddff57a" May 17 00:09:59.510665 containerd[1482]: 2025-05-17 00:09:59.467 [INFO][4753] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.22.6/26] block=192.168.22.0/26 handle="k8s-pod-network.d41a6f92ee2bb260d50e241451bb67d468f06fdd990219c0b8d013eff6543ebb" host="ci-4081-3-3-n-e61ddff57a" May 17 00:09:59.510665 containerd[1482]: 2025-05-17 00:09:59.467 [INFO][4753] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.22.6/26] handle="k8s-pod-network.d41a6f92ee2bb260d50e241451bb67d468f06fdd990219c0b8d013eff6543ebb" host="ci-4081-3-3-n-e61ddff57a" May 17 00:09:59.510665 containerd[1482]: 2025-05-17 00:09:59.467 [INFO][4753] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 17 00:09:59.510665 containerd[1482]: 2025-05-17 00:09:59.467 [INFO][4753] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.22.6/26] IPv6=[] ContainerID="d41a6f92ee2bb260d50e241451bb67d468f06fdd990219c0b8d013eff6543ebb" HandleID="k8s-pod-network.d41a6f92ee2bb260d50e241451bb67d468f06fdd990219c0b8d013eff6543ebb" Workload="ci--4081--3--3--n--e61ddff57a-k8s-calico--kube--controllers--b7fc8b74f--tr4ng-eth0" May 17 00:09:59.511686 containerd[1482]: 2025-05-17 00:09:59.475 [INFO][4728] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d41a6f92ee2bb260d50e241451bb67d468f06fdd990219c0b8d013eff6543ebb" Namespace="calico-system" Pod="calico-kube-controllers-b7fc8b74f-tr4ng" WorkloadEndpoint="ci--4081--3--3--n--e61ddff57a-k8s-calico--kube--controllers--b7fc8b74f--tr4ng-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--e61ddff57a-k8s-calico--kube--controllers--b7fc8b74f--tr4ng-eth0", GenerateName:"calico-kube-controllers-b7fc8b74f-", Namespace:"calico-system", SelfLink:"", UID:"b9fba4a9-b127-4dfe-8a91-d1118962d299", ResourceVersion:"959", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 9, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"b7fc8b74f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-e61ddff57a", ContainerID:"", Pod:"calico-kube-controllers-b7fc8b74f-tr4ng", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.22.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliabffd988ed4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:09:59.511686 containerd[1482]: 2025-05-17 00:09:59.475 [INFO][4728] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.22.6/32] ContainerID="d41a6f92ee2bb260d50e241451bb67d468f06fdd990219c0b8d013eff6543ebb" Namespace="calico-system" Pod="calico-kube-controllers-b7fc8b74f-tr4ng" WorkloadEndpoint="ci--4081--3--3--n--e61ddff57a-k8s-calico--kube--controllers--b7fc8b74f--tr4ng-eth0" May 17 00:09:59.511686 containerd[1482]: 2025-05-17 00:09:59.475 [INFO][4728] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliabffd988ed4 ContainerID="d41a6f92ee2bb260d50e241451bb67d468f06fdd990219c0b8d013eff6543ebb" Namespace="calico-system" Pod="calico-kube-controllers-b7fc8b74f-tr4ng" WorkloadEndpoint="ci--4081--3--3--n--e61ddff57a-k8s-calico--kube--controllers--b7fc8b74f--tr4ng-eth0" May 17 00:09:59.511686 containerd[1482]: 2025-05-17 00:09:59.485 [INFO][4728] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d41a6f92ee2bb260d50e241451bb67d468f06fdd990219c0b8d013eff6543ebb" Namespace="calico-system" Pod="calico-kube-controllers-b7fc8b74f-tr4ng" WorkloadEndpoint="ci--4081--3--3--n--e61ddff57a-k8s-calico--kube--controllers--b7fc8b74f--tr4ng-eth0" May 
17 00:09:59.511686 containerd[1482]: 2025-05-17 00:09:59.486 [INFO][4728] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d41a6f92ee2bb260d50e241451bb67d468f06fdd990219c0b8d013eff6543ebb" Namespace="calico-system" Pod="calico-kube-controllers-b7fc8b74f-tr4ng" WorkloadEndpoint="ci--4081--3--3--n--e61ddff57a-k8s-calico--kube--controllers--b7fc8b74f--tr4ng-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--e61ddff57a-k8s-calico--kube--controllers--b7fc8b74f--tr4ng-eth0", GenerateName:"calico-kube-controllers-b7fc8b74f-", Namespace:"calico-system", SelfLink:"", UID:"b9fba4a9-b127-4dfe-8a91-d1118962d299", ResourceVersion:"959", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 9, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"b7fc8b74f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-e61ddff57a", ContainerID:"d41a6f92ee2bb260d50e241451bb67d468f06fdd990219c0b8d013eff6543ebb", Pod:"calico-kube-controllers-b7fc8b74f-tr4ng", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.22.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliabffd988ed4", MAC:"c6:89:c3:90:d0:51", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:09:59.511686 containerd[1482]: 2025-05-17 00:09:59.506 [INFO][4728] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d41a6f92ee2bb260d50e241451bb67d468f06fdd990219c0b8d013eff6543ebb" Namespace="calico-system" Pod="calico-kube-controllers-b7fc8b74f-tr4ng" WorkloadEndpoint="ci--4081--3--3--n--e61ddff57a-k8s-calico--kube--controllers--b7fc8b74f--tr4ng-eth0" May 17 00:09:59.547060 containerd[1482]: time="2025-05-17T00:09:59.546631826Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:09:59.547060 containerd[1482]: time="2025-05-17T00:09:59.546719588Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:09:59.547060 containerd[1482]: time="2025-05-17T00:09:59.546754269Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:09:59.547060 containerd[1482]: time="2025-05-17T00:09:59.546876391Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:09:59.572003 systemd[1]: Started cri-containerd-d41a6f92ee2bb260d50e241451bb67d468f06fdd990219c0b8d013eff6543ebb.scope - libcontainer container d41a6f92ee2bb260d50e241451bb67d468f06fdd990219c0b8d013eff6543ebb. 
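Each new endpoint gets a deterministic host-side interface name (cali8a9886a35df, cali43538a0f6e5, caliabffd988ed4, ...). In Calico's cni-plugin the name is derived by hashing the workload identity and keeping enough hex digits to fit the kernel's 15-byte interface-name limit; a sketch of that derivation, assuming a "<namespace>.<pod>" hash input modeled on the plugin's VethNameForWorkload helper (exact inputs vary by version and configuration, so this will not necessarily reproduce the names in this log):

```go
package main

import (
	"crypto/sha1"
	"fmt"
)

// vethName sketches the host-side name derivation: "cali" prefix plus a
// truncated SHA-1 of the workload identity, 15 bytes total (IFNAMSIZ-1).
// The "<namespace>.<pod>" input here is an assumption about the hashed key.
func vethName(namespace, pod string) string {
	h := sha1.Sum([]byte(fmt.Sprintf("%s.%s", namespace, pod)))
	return fmt.Sprintf("cali%x", h)[:15] // "cali" + 11 hex chars
}

func main() {
	fmt.Println(vethName("calico-system", "calico-kube-controllers-b7fc8b74f-tr4ng"))
}
```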
May 17 00:09:59.581126 systemd-networkd[1384]: cali7a3cccafb70: Link UP May 17 00:09:59.584006 systemd-networkd[1384]: cali7a3cccafb70: Gained carrier May 17 00:09:59.614364 containerd[1482]: 2025-05-17 00:09:59.356 [INFO][4738] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--n--e61ddff57a-k8s-csi--node--driver--hhzhx-eth0 csi-node-driver- calico-system 35348305-3030-4a37-b275-4a2201a5f384 960 0 2025-05-17 00:09:34 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:68bf44dd5 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-3-3-n-e61ddff57a csi-node-driver-hhzhx eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali7a3cccafb70 [] [] }} ContainerID="11ea744b6a81a7c34ee26e1f9d5f37e5c5a189809d0335a885f8c253b9becf69" Namespace="calico-system" Pod="csi-node-driver-hhzhx" WorkloadEndpoint="ci--4081--3--3--n--e61ddff57a-k8s-csi--node--driver--hhzhx-" May 17 00:09:59.614364 containerd[1482]: 2025-05-17 00:09:59.360 [INFO][4738] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="11ea744b6a81a7c34ee26e1f9d5f37e5c5a189809d0335a885f8c253b9becf69" Namespace="calico-system" Pod="csi-node-driver-hhzhx" WorkloadEndpoint="ci--4081--3--3--n--e61ddff57a-k8s-csi--node--driver--hhzhx-eth0" May 17 00:09:59.614364 containerd[1482]: 2025-05-17 00:09:59.422 [INFO][4759] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="11ea744b6a81a7c34ee26e1f9d5f37e5c5a189809d0335a885f8c253b9becf69" HandleID="k8s-pod-network.11ea744b6a81a7c34ee26e1f9d5f37e5c5a189809d0335a885f8c253b9becf69" Workload="ci--4081--3--3--n--e61ddff57a-k8s-csi--node--driver--hhzhx-eth0" May 17 00:09:59.614364 containerd[1482]: 2025-05-17 00:09:59.423 [INFO][4759] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="11ea744b6a81a7c34ee26e1f9d5f37e5c5a189809d0335a885f8c253b9becf69" HandleID="k8s-pod-network.11ea744b6a81a7c34ee26e1f9d5f37e5c5a189809d0335a885f8c253b9becf69" Workload="ci--4081--3--3--n--e61ddff57a-k8s-csi--node--driver--hhzhx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400022f7e0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-3-n-e61ddff57a", "pod":"csi-node-driver-hhzhx", "timestamp":"2025-05-17 00:09:59.422877787 +0000 UTC"}, Hostname:"ci-4081-3-3-n-e61ddff57a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:09:59.614364 containerd[1482]: 2025-05-17 00:09:59.423 [INFO][4759] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:09:59.614364 containerd[1482]: 2025-05-17 00:09:59.467 [INFO][4759] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:09:59.614364 containerd[1482]: 2025-05-17 00:09:59.467 [INFO][4759] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-n-e61ddff57a' May 17 00:09:59.614364 containerd[1482]: 2025-05-17 00:09:59.514 [INFO][4759] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.11ea744b6a81a7c34ee26e1f9d5f37e5c5a189809d0335a885f8c253b9becf69" host="ci-4081-3-3-n-e61ddff57a" May 17 00:09:59.614364 containerd[1482]: 2025-05-17 00:09:59.523 [INFO][4759] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-3-n-e61ddff57a" May 17 00:09:59.614364 containerd[1482]: 2025-05-17 00:09:59.533 [INFO][4759] ipam/ipam.go 511: Trying affinity for 192.168.22.0/26 host="ci-4081-3-3-n-e61ddff57a" May 17 00:09:59.614364 containerd[1482]: 2025-05-17 00:09:59.539 [INFO][4759] ipam/ipam.go 158: Attempting to load block cidr=192.168.22.0/26 host="ci-4081-3-3-n-e61ddff57a" May 17 00:09:59.614364 containerd[1482]: 2025-05-17 00:09:59.543 [INFO][4759] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.22.0/26 host="ci-4081-3-3-n-e61ddff57a" May 17 00:09:59.614364 containerd[1482]: 2025-05-17 00:09:59.543 [INFO][4759] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.22.0/26 handle="k8s-pod-network.11ea744b6a81a7c34ee26e1f9d5f37e5c5a189809d0335a885f8c253b9becf69" host="ci-4081-3-3-n-e61ddff57a" May 17 00:09:59.614364 containerd[1482]: 2025-05-17 00:09:59.546 [INFO][4759] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.11ea744b6a81a7c34ee26e1f9d5f37e5c5a189809d0335a885f8c253b9becf69 May 17 00:09:59.614364 containerd[1482]: 2025-05-17 00:09:59.554 [INFO][4759] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.22.0/26 handle="k8s-pod-network.11ea744b6a81a7c34ee26e1f9d5f37e5c5a189809d0335a885f8c253b9becf69" host="ci-4081-3-3-n-e61ddff57a" May 17 00:09:59.614364 containerd[1482]: 2025-05-17 00:09:59.569 [INFO][4759] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.22.7/26] block=192.168.22.0/26 handle="k8s-pod-network.11ea744b6a81a7c34ee26e1f9d5f37e5c5a189809d0335a885f8c253b9becf69" host="ci-4081-3-3-n-e61ddff57a" May 17 00:09:59.614364 containerd[1482]: 2025-05-17 00:09:59.570 [INFO][4759] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.22.7/26] handle="k8s-pod-network.11ea744b6a81a7c34ee26e1f9d5f37e5c5a189809d0335a885f8c253b9becf69" host="ci-4081-3-3-n-e61ddff57a" May 17 00:09:59.614364 containerd[1482]: 2025-05-17 00:09:59.570 [INFO][4759] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 17 00:09:59.614364 containerd[1482]: 2025-05-17 00:09:59.570 [INFO][4759] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.22.7/26] IPv6=[] ContainerID="11ea744b6a81a7c34ee26e1f9d5f37e5c5a189809d0335a885f8c253b9becf69" HandleID="k8s-pod-network.11ea744b6a81a7c34ee26e1f9d5f37e5c5a189809d0335a885f8c253b9becf69" Workload="ci--4081--3--3--n--e61ddff57a-k8s-csi--node--driver--hhzhx-eth0" May 17 00:09:59.616302 containerd[1482]: 2025-05-17 00:09:59.575 [INFO][4738] cni-plugin/k8s.go 418: Populated endpoint ContainerID="11ea744b6a81a7c34ee26e1f9d5f37e5c5a189809d0335a885f8c253b9becf69" Namespace="calico-system" Pod="csi-node-driver-hhzhx" WorkloadEndpoint="ci--4081--3--3--n--e61ddff57a-k8s-csi--node--driver--hhzhx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--e61ddff57a-k8s-csi--node--driver--hhzhx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"35348305-3030-4a37-b275-4a2201a5f384", ResourceVersion:"960", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 9, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"68bf44dd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-e61ddff57a", ContainerID:"", Pod:"csi-node-driver-hhzhx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.22.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali7a3cccafb70", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:09:59.616302 containerd[1482]: 2025-05-17 00:09:59.575 [INFO][4738] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.22.7/32] ContainerID="11ea744b6a81a7c34ee26e1f9d5f37e5c5a189809d0335a885f8c253b9becf69" Namespace="calico-system" Pod="csi-node-driver-hhzhx" WorkloadEndpoint="ci--4081--3--3--n--e61ddff57a-k8s-csi--node--driver--hhzhx-eth0" May 17 00:09:59.616302 containerd[1482]: 2025-05-17 00:09:59.575 [INFO][4738] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7a3cccafb70 ContainerID="11ea744b6a81a7c34ee26e1f9d5f37e5c5a189809d0335a885f8c253b9becf69" Namespace="calico-system" Pod="csi-node-driver-hhzhx" WorkloadEndpoint="ci--4081--3--3--n--e61ddff57a-k8s-csi--node--driver--hhzhx-eth0" May 17 00:09:59.616302 containerd[1482]: 2025-05-17 00:09:59.585 [INFO][4738] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="11ea744b6a81a7c34ee26e1f9d5f37e5c5a189809d0335a885f8c253b9becf69" Namespace="calico-system" Pod="csi-node-driver-hhzhx" WorkloadEndpoint="ci--4081--3--3--n--e61ddff57a-k8s-csi--node--driver--hhzhx-eth0" May 17 00:09:59.616302 containerd[1482]: 2025-05-17 00:09:59.587 [INFO][4738] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="11ea744b6a81a7c34ee26e1f9d5f37e5c5a189809d0335a885f8c253b9becf69" Namespace="calico-system" Pod="csi-node-driver-hhzhx" WorkloadEndpoint="ci--4081--3--3--n--e61ddff57a-k8s-csi--node--driver--hhzhx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--e61ddff57a-k8s-csi--node--driver--hhzhx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"35348305-3030-4a37-b275-4a2201a5f384", ResourceVersion:"960", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 9, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"68bf44dd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-e61ddff57a", ContainerID:"11ea744b6a81a7c34ee26e1f9d5f37e5c5a189809d0335a885f8c253b9becf69", Pod:"csi-node-driver-hhzhx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.22.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali7a3cccafb70", MAC:"8e:b5:88:90:a2:20", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:09:59.616302 containerd[1482]: 2025-05-17 00:09:59.609 [INFO][4738] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="11ea744b6a81a7c34ee26e1f9d5f37e5c5a189809d0335a885f8c253b9becf69" Namespace="calico-system" Pod="csi-node-driver-hhzhx" WorkloadEndpoint="ci--4081--3--3--n--e61ddff57a-k8s-csi--node--driver--hhzhx-eth0" May 17 00:09:59.644009 containerd[1482]: time="2025-05-17T00:09:59.643666268Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:09:59.645740 containerd[1482]: time="2025-05-17T00:09:59.645660346Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:09:59.646136 containerd[1482]: time="2025-05-17T00:09:59.645875751Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:09:59.646136 containerd[1482]: time="2025-05-17T00:09:59.646059914Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:09:59.663876 containerd[1482]: time="2025-05-17T00:09:59.663813778Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-b7fc8b74f-tr4ng,Uid:b9fba4a9-b127-4dfe-8a91-d1118962d299,Namespace:calico-system,Attempt:1,} returns sandbox id \"d41a6f92ee2bb260d50e241451bb67d468f06fdd990219c0b8d013eff6543ebb\"" May 17 00:09:59.685426 systemd[1]: Started cri-containerd-11ea744b6a81a7c34ee26e1f9d5f37e5c5a189809d0335a885f8c253b9becf69.scope - libcontainer container 11ea744b6a81a7c34ee26e1f9d5f37e5c5a189809d0335a885f8c253b9becf69. May 17 00:09:59.715688 containerd[1482]: time="2025-05-17T00:09:59.715443979Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hhzhx,Uid:35348305-3030-4a37-b275-4a2201a5f384,Namespace:calico-system,Attempt:1,} returns sandbox id \"11ea744b6a81a7c34ee26e1f9d5f37e5c5a189809d0335a885f8c253b9becf69\"" May 17 00:10:00.099780 containerd[1482]: time="2025-05-17T00:10:00.099365686Z" level=info msg="StopPodSandbox for \"2c5263c67206c435400709369a39654c12ff69c0752848125b4d962af31ac7e8\"" May 17 00:10:00.102298 systemd-networkd[1384]: cali8a9886a35df: Gained IPv6LL May 17 00:10:00.227693 containerd[1482]: 2025-05-17 00:10:00.170 [INFO][4877] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2c5263c67206c435400709369a39654c12ff69c0752848125b4d962af31ac7e8" May 17 00:10:00.227693 containerd[1482]: 2025-05-17 00:10:00.171 [INFO][4877] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2c5263c67206c435400709369a39654c12ff69c0752848125b4d962af31ac7e8" iface="eth0" netns="/var/run/netns/cni-5e58cc44-fe8b-9528-3781-3771b9ced932" May 17 00:10:00.227693 containerd[1482]: 2025-05-17 00:10:00.172 [INFO][4877] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2c5263c67206c435400709369a39654c12ff69c0752848125b4d962af31ac7e8" iface="eth0" netns="/var/run/netns/cni-5e58cc44-fe8b-9528-3781-3771b9ced932" May 17 00:10:00.227693 containerd[1482]: 2025-05-17 00:10:00.172 [INFO][4877] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="2c5263c67206c435400709369a39654c12ff69c0752848125b4d962af31ac7e8" iface="eth0" netns="/var/run/netns/cni-5e58cc44-fe8b-9528-3781-3771b9ced932" May 17 00:10:00.227693 containerd[1482]: 2025-05-17 00:10:00.172 [INFO][4877] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2c5263c67206c435400709369a39654c12ff69c0752848125b4d962af31ac7e8" May 17 00:10:00.227693 containerd[1482]: 2025-05-17 00:10:00.172 [INFO][4877] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2c5263c67206c435400709369a39654c12ff69c0752848125b4d962af31ac7e8" May 17 00:10:00.227693 containerd[1482]: 2025-05-17 00:10:00.208 [INFO][4884] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2c5263c67206c435400709369a39654c12ff69c0752848125b4d962af31ac7e8" HandleID="k8s-pod-network.2c5263c67206c435400709369a39654c12ff69c0752848125b4d962af31ac7e8" Workload="ci--4081--3--3--n--e61ddff57a-k8s-calico--apiserver--75d5dfcff5--cq6gh-eth0" May 17 00:10:00.227693 containerd[1482]: 2025-05-17 00:10:00.208 [INFO][4884] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:10:00.227693 containerd[1482]: 2025-05-17 00:10:00.208 [INFO][4884] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:10:00.227693 containerd[1482]: 2025-05-17 00:10:00.221 [WARNING][4884] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2c5263c67206c435400709369a39654c12ff69c0752848125b4d962af31ac7e8" HandleID="k8s-pod-network.2c5263c67206c435400709369a39654c12ff69c0752848125b4d962af31ac7e8" Workload="ci--4081--3--3--n--e61ddff57a-k8s-calico--apiserver--75d5dfcff5--cq6gh-eth0" May 17 00:10:00.227693 containerd[1482]: 2025-05-17 00:10:00.221 [INFO][4884] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2c5263c67206c435400709369a39654c12ff69c0752848125b4d962af31ac7e8" HandleID="k8s-pod-network.2c5263c67206c435400709369a39654c12ff69c0752848125b4d962af31ac7e8" Workload="ci--4081--3--3--n--e61ddff57a-k8s-calico--apiserver--75d5dfcff5--cq6gh-eth0" May 17 00:10:00.227693 containerd[1482]: 2025-05-17 00:10:00.224 [INFO][4884] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:10:00.227693 containerd[1482]: 2025-05-17 00:10:00.225 [INFO][4877] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2c5263c67206c435400709369a39654c12ff69c0752848125b4d962af31ac7e8" May 17 00:10:00.229028 containerd[1482]: time="2025-05-17T00:10:00.227900728Z" level=info msg="TearDown network for sandbox \"2c5263c67206c435400709369a39654c12ff69c0752848125b4d962af31ac7e8\" successfully" May 17 00:10:00.229028 containerd[1482]: time="2025-05-17T00:10:00.227931249Z" level=info msg="StopPodSandbox for \"2c5263c67206c435400709369a39654c12ff69c0752848125b4d962af31ac7e8\" returns successfully" May 17 00:10:00.229028 containerd[1482]: time="2025-05-17T00:10:00.228690304Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-75d5dfcff5-cq6gh,Uid:5f0c7770-d646-424e-9f31-ff80f202f1a8,Namespace:calico-apiserver,Attempt:1,}" May 17 00:10:00.248411 systemd[1]: run-netns-cni\x2d5e58cc44\x2dfe8b\x2d9528\x2d3781\x2d3771b9ced932.mount: Deactivated successfully. 
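The "cali8a9886a35df: Gained IPv6LL" event above marks the kernel assigning the interface's fe80:: link-local address, conventionally built from a MAC by EUI-64: flip the universal/local bit in the first octet and splice ff:fe into the middle. A stdlib sketch using the MAC recorded for that endpoint earlier in the log (note this is the container-side MAC from the WorkloadEndpoint object, so the host veth's actual link-local may differ, and networkd can also be configured for stable-privacy addresses instead of EUI-64):

```go
package main

import (
	"fmt"
	"net"
	"net/netip"
)

// linkLocal builds the EUI-64 fe80:: address for a 48-bit MAC:
// flip the universal/local bit of the first octet and insert ff:fe.
func linkLocal(mac net.HardwareAddr) netip.Addr {
	var b [16]byte
	b[0], b[1] = 0xfe, 0x80
	b[8] = mac[0] ^ 0x02
	b[9], b[10] = mac[1], mac[2]
	b[11], b[12] = 0xff, 0xfe
	b[13], b[14], b[15] = mac[3], mac[4], mac[5]
	return netip.AddrFrom16(b)
}

func main() {
	// MAC logged for the calico-apiserver-75d5dfcff5-dhnhf endpoint above.
	mac, _ := net.ParseMAC("b6:85:9b:3a:09:20")
	fmt.Println(linkLocal(mac)) // fe80::b485:9bff:fe3a:920
}
```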
May 17 00:10:00.420213 systemd-networkd[1384]: cali254821d9f24: Link UP May 17 00:10:00.424259 systemd-networkd[1384]: cali254821d9f24: Gained carrier May 17 00:10:00.447808 containerd[1482]: 2025-05-17 00:10:00.305 [INFO][4892] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--n--e61ddff57a-k8s-calico--apiserver--75d5dfcff5--cq6gh-eth0 calico-apiserver-75d5dfcff5- calico-apiserver 5f0c7770-d646-424e-9f31-ff80f202f1a8 975 0 2025-05-17 00:09:28 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:75d5dfcff5 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-3-n-e61ddff57a calico-apiserver-75d5dfcff5-cq6gh eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali254821d9f24 [] [] }} ContainerID="e8c61ea0f67fb1bd6602c6bab54aad1c55958fa13c29d8f02710564364d597ea" Namespace="calico-apiserver" Pod="calico-apiserver-75d5dfcff5-cq6gh" WorkloadEndpoint="ci--4081--3--3--n--e61ddff57a-k8s-calico--apiserver--75d5dfcff5--cq6gh-" May 17 00:10:00.447808 containerd[1482]: 2025-05-17 00:10:00.305 [INFO][4892] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e8c61ea0f67fb1bd6602c6bab54aad1c55958fa13c29d8f02710564364d597ea" Namespace="calico-apiserver" Pod="calico-apiserver-75d5dfcff5-cq6gh" WorkloadEndpoint="ci--4081--3--3--n--e61ddff57a-k8s-calico--apiserver--75d5dfcff5--cq6gh-eth0" May 17 00:10:00.447808 containerd[1482]: 2025-05-17 00:10:00.340 [INFO][4903] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e8c61ea0f67fb1bd6602c6bab54aad1c55958fa13c29d8f02710564364d597ea" HandleID="k8s-pod-network.e8c61ea0f67fb1bd6602c6bab54aad1c55958fa13c29d8f02710564364d597ea" Workload="ci--4081--3--3--n--e61ddff57a-k8s-calico--apiserver--75d5dfcff5--cq6gh-eth0" May 17 00:10:00.447808 containerd[1482]: 2025-05-17 00:10:00.340 [INFO][4903] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e8c61ea0f67fb1bd6602c6bab54aad1c55958fa13c29d8f02710564364d597ea" HandleID="k8s-pod-network.e8c61ea0f67fb1bd6602c6bab54aad1c55958fa13c29d8f02710564364d597ea" Workload="ci--4081--3--3--n--e61ddff57a-k8s-calico--apiserver--75d5dfcff5--cq6gh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d7e30), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-3-n-e61ddff57a", "pod":"calico-apiserver-75d5dfcff5-cq6gh", "timestamp":"2025-05-17 00:10:00.340347495 +0000 UTC"}, Hostname:"ci-4081-3-3-n-e61ddff57a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:10:00.447808 containerd[1482]: 2025-05-17 00:10:00.340 [INFO][4903] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:10:00.447808 containerd[1482]: 2025-05-17 00:10:00.341 [INFO][4903] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:10:00.447808 containerd[1482]: 2025-05-17 00:10:00.342 [INFO][4903] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-n-e61ddff57a' May 17 00:10:00.447808 containerd[1482]: 2025-05-17 00:10:00.361 [INFO][4903] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e8c61ea0f67fb1bd6602c6bab54aad1c55958fa13c29d8f02710564364d597ea" host="ci-4081-3-3-n-e61ddff57a" May 17 00:10:00.447808 containerd[1482]: 2025-05-17 00:10:00.370 [INFO][4903] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-3-n-e61ddff57a" May 17 00:10:00.447808 containerd[1482]: 2025-05-17 00:10:00.376 [INFO][4903] ipam/ipam.go 511: Trying affinity for 192.168.22.0/26 host="ci-4081-3-3-n-e61ddff57a" May 17 00:10:00.447808 containerd[1482]: 2025-05-17 00:10:00.381 [INFO][4903] ipam/ipam.go 158: Attempting to load block cidr=192.168.22.0/26 host="ci-4081-3-3-n-e61ddff57a" May 17 00:10:00.447808 containerd[1482]: 2025-05-17 00:10:00.384 [INFO][4903] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.22.0/26 host="ci-4081-3-3-n-e61ddff57a" May 17 00:10:00.447808 containerd[1482]: 2025-05-17 00:10:00.385 [INFO][4903] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.22.0/26 handle="k8s-pod-network.e8c61ea0f67fb1bd6602c6bab54aad1c55958fa13c29d8f02710564364d597ea" host="ci-4081-3-3-n-e61ddff57a" May 17 00:10:00.447808 containerd[1482]: 2025-05-17 00:10:00.387 [INFO][4903] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e8c61ea0f67fb1bd6602c6bab54aad1c55958fa13c29d8f02710564364d597ea May 17 00:10:00.447808 containerd[1482]: 2025-05-17 00:10:00.394 [INFO][4903] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.22.0/26 handle="k8s-pod-network.e8c61ea0f67fb1bd6602c6bab54aad1c55958fa13c29d8f02710564364d597ea" host="ci-4081-3-3-n-e61ddff57a" May 17 00:10:00.447808 containerd[1482]: 2025-05-17 00:10:00.408 [INFO][4903] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.22.8/26] block=192.168.22.0/26 handle="k8s-pod-network.e8c61ea0f67fb1bd6602c6bab54aad1c55958fa13c29d8f02710564364d597ea" host="ci-4081-3-3-n-e61ddff57a" May 17 00:10:00.447808 containerd[1482]: 2025-05-17 00:10:00.408 [INFO][4903] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.22.8/26] handle="k8s-pod-network.e8c61ea0f67fb1bd6602c6bab54aad1c55958fa13c29d8f02710564364d597ea" host="ci-4081-3-3-n-e61ddff57a" May 17 00:10:00.447808 containerd[1482]: 2025-05-17 00:10:00.408 [INFO][4903] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 17 00:10:00.447808 containerd[1482]: 2025-05-17 00:10:00.408 [INFO][4903] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.22.8/26] IPv6=[] ContainerID="e8c61ea0f67fb1bd6602c6bab54aad1c55958fa13c29d8f02710564364d597ea" HandleID="k8s-pod-network.e8c61ea0f67fb1bd6602c6bab54aad1c55958fa13c29d8f02710564364d597ea" Workload="ci--4081--3--3--n--e61ddff57a-k8s-calico--apiserver--75d5dfcff5--cq6gh-eth0" May 17 00:10:00.450233 containerd[1482]: 2025-05-17 00:10:00.412 [INFO][4892] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e8c61ea0f67fb1bd6602c6bab54aad1c55958fa13c29d8f02710564364d597ea" Namespace="calico-apiserver" Pod="calico-apiserver-75d5dfcff5-cq6gh" WorkloadEndpoint="ci--4081--3--3--n--e61ddff57a-k8s-calico--apiserver--75d5dfcff5--cq6gh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--e61ddff57a-k8s-calico--apiserver--75d5dfcff5--cq6gh-eth0", GenerateName:"calico-apiserver-75d5dfcff5-", Namespace:"calico-apiserver", SelfLink:"", UID:"5f0c7770-d646-424e-9f31-ff80f202f1a8", ResourceVersion:"975", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 9, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"75d5dfcff5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-e61ddff57a", ContainerID:"", Pod:"calico-apiserver-75d5dfcff5-cq6gh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.22.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali254821d9f24", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:10:00.450233 containerd[1482]: 2025-05-17 00:10:00.413 [INFO][4892] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.22.8/32] ContainerID="e8c61ea0f67fb1bd6602c6bab54aad1c55958fa13c29d8f02710564364d597ea" Namespace="calico-apiserver" Pod="calico-apiserver-75d5dfcff5-cq6gh" WorkloadEndpoint="ci--4081--3--3--n--e61ddff57a-k8s-calico--apiserver--75d5dfcff5--cq6gh-eth0" May 17 00:10:00.450233 containerd[1482]: 2025-05-17 00:10:00.413 [INFO][4892] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali254821d9f24 ContainerID="e8c61ea0f67fb1bd6602c6bab54aad1c55958fa13c29d8f02710564364d597ea" Namespace="calico-apiserver" Pod="calico-apiserver-75d5dfcff5-cq6gh" WorkloadEndpoint="ci--4081--3--3--n--e61ddff57a-k8s-calico--apiserver--75d5dfcff5--cq6gh-eth0" May 17 00:10:00.450233 containerd[1482]: 2025-05-17 00:10:00.422 [INFO][4892] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e8c61ea0f67fb1bd6602c6bab54aad1c55958fa13c29d8f02710564364d597ea" Namespace="calico-apiserver" Pod="calico-apiserver-75d5dfcff5-cq6gh" WorkloadEndpoint="ci--4081--3--3--n--e61ddff57a-k8s-calico--apiserver--75d5dfcff5--cq6gh-eth0" May 17 00:10:00.450233 containerd[1482]: 2025-05-17 00:10:00.426 
[INFO][4892] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e8c61ea0f67fb1bd6602c6bab54aad1c55958fa13c29d8f02710564364d597ea" Namespace="calico-apiserver" Pod="calico-apiserver-75d5dfcff5-cq6gh" WorkloadEndpoint="ci--4081--3--3--n--e61ddff57a-k8s-calico--apiserver--75d5dfcff5--cq6gh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--e61ddff57a-k8s-calico--apiserver--75d5dfcff5--cq6gh-eth0", GenerateName:"calico-apiserver-75d5dfcff5-", Namespace:"calico-apiserver", SelfLink:"", UID:"5f0c7770-d646-424e-9f31-ff80f202f1a8", ResourceVersion:"975", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 9, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"75d5dfcff5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-e61ddff57a", ContainerID:"e8c61ea0f67fb1bd6602c6bab54aad1c55958fa13c29d8f02710564364d597ea", Pod:"calico-apiserver-75d5dfcff5-cq6gh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.22.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali254821d9f24", MAC:"42:f2:39:af:8b:51", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:10:00.450233 containerd[1482]: 2025-05-17 00:10:00.442 [INFO][4892] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e8c61ea0f67fb1bd6602c6bab54aad1c55958fa13c29d8f02710564364d597ea" Namespace="calico-apiserver" Pod="calico-apiserver-75d5dfcff5-cq6gh" WorkloadEndpoint="ci--4081--3--3--n--e61ddff57a-k8s-calico--apiserver--75d5dfcff5--cq6gh-eth0" May 17 00:10:00.492977 containerd[1482]: time="2025-05-17T00:10:00.492415840Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:10:00.492977 containerd[1482]: time="2025-05-17T00:10:00.492493041Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:10:00.492977 containerd[1482]: time="2025-05-17T00:10:00.492512161Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:10:00.492977 containerd[1482]: time="2025-05-17T00:10:00.492616284Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:10:00.536045 systemd[1]: Started cri-containerd-e8c61ea0f67fb1bd6602c6bab54aad1c55958fa13c29d8f02710564364d597ea.scope - libcontainer container e8c61ea0f67fb1bd6602c6bab54aad1c55958fa13c29d8f02710564364d597ea. 
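
Between the endpoint write and the sandbox becoming ready, containerd spawns a runc v2 shim (the `loading plugin "io.containerd.*"` lines are that shim initializing) and systemd begins tracking its cgroup as the `cri-containerd-….scope` unit. For orientation, a minimal sketch of the same create-and-start path via containerd's Go client; the socket path and `k8s.io` namespace are the conventional CRI defaults, and the container IDs are placeholders:

```go
package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	// CRI keeps its containers in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Resolve the image, then create a container backed by a fresh
	// snapshot and an OCI spec derived from the image config.
	image, err := client.Pull(ctx, "ghcr.io/flatcar/calico/apiserver:v3.30.0",
		containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	container, err := client.NewContainer(ctx, "example",
		containerd.WithNewSnapshot("example-snap", image),
		containerd.WithNewSpec(oci.WithImageConfig(image)))
	if err != nil {
		log.Fatal(err)
	}
	defer container.Delete(ctx, containerd.WithSnapshotCleanup)

	// NewTask is what launches the runc shim; Start runs the process,
	// and on a systemd host the shim's cgroup appears as a .scope unit.
	task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		log.Fatal(err)
	}
	defer task.Delete(ctx)
	if err := task.Start(ctx); err != nil {
		log.Fatal(err)
	}
}
```
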
May 17 00:10:00.598413 containerd[1482]: time="2025-05-17T00:10:00.598375599Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-75d5dfcff5-cq6gh,Uid:5f0c7770-d646-424e-9f31-ff80f202f1a8,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"e8c61ea0f67fb1bd6602c6bab54aad1c55958fa13c29d8f02710564364d597ea\"" May 17 00:10:00.613922 systemd-networkd[1384]: cali43538a0f6e5: Gained IPv6LL May 17 00:10:00.741984 systemd-networkd[1384]: cali7a3cccafb70: Gained IPv6LL May 17 00:10:01.063439 systemd-networkd[1384]: caliabffd988ed4: Gained IPv6LL May 17 00:10:01.237038 containerd[1482]: time="2025-05-17T00:10:01.236975066Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:10:01.239679 containerd[1482]: time="2025-05-17T00:10:01.239635359Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.0: active requests=0, bytes read=44453213" May 17 00:10:01.242043 containerd[1482]: time="2025-05-17T00:10:01.241967925Z" level=info msg="ImageCreate event name:\"sha256:0d503660232383641bf9af3b7e4ef066c0e96a8ec586f123e5b56b6a196c983d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:10:01.245573 containerd[1482]: time="2025-05-17T00:10:01.245450114Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:10:01.247329 containerd[1482]: time="2025-05-17T00:10:01.247198749Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" with image id \"sha256:0d503660232383641bf9af3b7e4ef066c0e96a8ec586f123e5b56b6a196c983d\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\", size \"45822470\" in 2.570810235s" May 17 00:10:01.247329 containerd[1482]: time="2025-05-17T00:10:01.247237549Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" returns image reference \"sha256:0d503660232383641bf9af3b7e4ef066c0e96a8ec586f123e5b56b6a196c983d\"" May 17 00:10:01.251608 containerd[1482]: time="2025-05-17T00:10:01.251550435Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 17 00:10:01.252555 containerd[1482]: time="2025-05-17T00:10:01.252333531Z" level=info msg="CreateContainer within sandbox \"5040c502408a5c3c469fff1f9e22c61b1c36fe2d19574300e62dc419109b5e70\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 17 00:10:01.268601 containerd[1482]: time="2025-05-17T00:10:01.268432250Z" level=info msg="CreateContainer within sandbox \"5040c502408a5c3c469fff1f9e22c61b1c36fe2d19574300e62dc419109b5e70\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"3d4a9b88d29d4635456352dfa7d2fc07a77198feba7a1e52038b2f464d0e8f05\"" May 17 00:10:01.270552 containerd[1482]: time="2025-05-17T00:10:01.269324268Z" level=info msg="StartContainer for \"3d4a9b88d29d4635456352dfa7d2fc07a77198feba7a1e52038b2f464d0e8f05\"" May 17 00:10:01.320034 systemd[1]: Started cri-containerd-3d4a9b88d29d4635456352dfa7d2fc07a77198feba7a1e52038b2f464d0e8f05.scope - libcontainer container 3d4a9b88d29d4635456352dfa7d2fc07a77198feba7a1e52038b2f464d0e8f05. 
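
The apiserver pull above reports 44,453,213 bytes read in 2.570810235s; the 45,822,470 figure is the resolved image size, not the bytes transferred (compare the re-pull of the same image at 00:10:07 below, which reads only 77 bytes because the content is already local yet reports the same size). That works out to roughly 16.5 MiB/s, checked here with the logged values:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Values exactly as logged for the apiserver image pull.
	d, _ := time.ParseDuration("2.570810235s")
	const bytesRead = 44453213.0

	fmt.Printf("%.1f MiB/s\n", bytesRead/d.Seconds()/(1<<20)) // ~16.5 MiB/s
}
```
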
May 17 00:10:01.363521 containerd[1482]: time="2025-05-17T00:10:01.363414656Z" level=info msg="StartContainer for \"3d4a9b88d29d4635456352dfa7d2fc07a77198feba7a1e52038b2f464d0e8f05\" returns successfully" May 17 00:10:01.499125 containerd[1482]: time="2025-05-17T00:10:01.499051549Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:10:01.500975 containerd[1482]: time="2025-05-17T00:10:01.500926706Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:10:01.501599 containerd[1482]: time="2025-05-17T00:10:01.501096590Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 17 00:10:01.501676 kubelet[2692]: E0517 00:10:01.501272 2692 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:10:01.501676 kubelet[2692]: E0517 00:10:01.501326 2692 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:10:01.503222 containerd[1482]: time="2025-05-17T00:10:01.502393776Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\"" May 17 00:10:01.506687 kubelet[2692]: E0517 00:10:01.505352 2692 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ttgj2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-8f77d7b6c-522bq_calico-system(c2a53ade-ef79-46ee-b2b9-636e3cc942be): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:10:01.508535 kubelet[2692]: E0517 00:10:01.508494 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-8f77d7b6c-522bq" podUID="c2a53ade-ef79-46ee-b2b9-636e3cc942be" May 17 00:10:01.573888 systemd-networkd[1384]: cali254821d9f24: Gained IPv6LL May 17 00:10:02.471866 kubelet[2692]: I0517 00:10:02.471280 2692 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 00:10:02.473029 kubelet[2692]: E0517 00:10:02.472887 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-522bq" podUID="c2a53ade-ef79-46ee-b2b9-636e3cc942be" May 17 00:10:02.492488 kubelet[2692]: I0517 00:10:02.490467 2692 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-75d5dfcff5-dhnhf" podStartSLOduration=31.917019695 podStartE2EDuration="34.490445702s" podCreationTimestamp="2025-05-17 00:09:28 +0000 UTC" firstStartedPulling="2025-05-17 00:09:58.675854543 +0000 UTC m=+47.722053866" lastFinishedPulling="2025-05-17 00:10:01.24928051 +0000 UTC m=+50.295479873" observedRunningTime="2025-05-17 00:10:01.486969829 +0000 UTC m=+50.533169192" watchObservedRunningTime="2025-05-17 00:10:02.490445702 +0000 UTC m=+51.536645065" May 17 00:10:05.189026 systemd[1]: run-containerd-runc-k8s.io-bf0b7430691e09f5e3c20f95541c3cd5f64cc64772fae6ae78735c168d23caf6-runc.PCkmK3.mount: Deactivated successfully. May 17 00:10:06.053332 containerd[1482]: time="2025-05-17T00:10:06.053210421Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:10:06.054489 containerd[1482]: time="2025-05-17T00:10:06.054277803Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.0: active requests=0, bytes read=48045219" May 17 00:10:06.055302 containerd[1482]: time="2025-05-17T00:10:06.055231743Z" level=info msg="ImageCreate event name:\"sha256:4188fe2931435deda58a0dc1767a2f6ad2bb27e47662ccec626bd07006f56373\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:10:06.058189 containerd[1482]: time="2025-05-17T00:10:06.057934800Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:eb5bc5c9e7a71f1d8ea69bbcc8e54b84fb7ec1e32d919c8b148f80b770f20182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:10:06.058931 containerd[1482]: time="2025-05-17T00:10:06.058896500Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" with image id \"sha256:4188fe2931435deda58a0dc1767a2f6ad2bb27e47662ccec626bd07006f56373\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:eb5bc5c9e7a71f1d8ea69bbcc8e54b84fb7ec1e32d919c8b148f80b770f20182\", size \"49414428\" in 4.556466564s" May 17 00:10:06.059043 containerd[1482]: time="2025-05-17T00:10:06.059026623Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" returns image reference \"sha256:4188fe2931435deda58a0dc1767a2f6ad2bb27e47662ccec626bd07006f56373\"" May 17 00:10:06.060921 containerd[1482]: time="2025-05-17T00:10:06.060495853Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.0\"" May 17 00:10:06.078233 containerd[1482]: time="2025-05-17T00:10:06.078187023Z" level=info msg="CreateContainer within 
sandbox \"d41a6f92ee2bb260d50e241451bb67d468f06fdd990219c0b8d013eff6543ebb\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 17 00:10:06.098628 containerd[1482]: time="2025-05-17T00:10:06.097985997Z" level=info msg="CreateContainer within sandbox \"d41a6f92ee2bb260d50e241451bb67d468f06fdd990219c0b8d013eff6543ebb\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"1a167093409b134776f3b18cce3f7eab0acbe6d5aa45db92c1b7d79ff8c91d29\"" May 17 00:10:06.098780 containerd[1482]: time="2025-05-17T00:10:06.098678131Z" level=info msg="StartContainer for \"1a167093409b134776f3b18cce3f7eab0acbe6d5aa45db92c1b7d79ff8c91d29\"" May 17 00:10:06.126035 systemd[1]: Started cri-containerd-1a167093409b134776f3b18cce3f7eab0acbe6d5aa45db92c1b7d79ff8c91d29.scope - libcontainer container 1a167093409b134776f3b18cce3f7eab0acbe6d5aa45db92c1b7d79ff8c91d29. May 17 00:10:06.173146 containerd[1482]: time="2025-05-17T00:10:06.173011005Z" level=info msg="StartContainer for \"1a167093409b134776f3b18cce3f7eab0acbe6d5aa45db92c1b7d79ff8c91d29\" returns successfully" May 17 00:10:06.507037 kubelet[2692]: I0517 00:10:06.506945 2692 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-b7fc8b74f-tr4ng" podStartSLOduration=26.114861599 podStartE2EDuration="32.506921784s" podCreationTimestamp="2025-05-17 00:09:34 +0000 UTC" firstStartedPulling="2025-05-17 00:09:59.66804338 +0000 UTC m=+48.714242743" lastFinishedPulling="2025-05-17 00:10:06.060103605 +0000 UTC m=+55.106302928" observedRunningTime="2025-05-17 00:10:06.506031365 +0000 UTC m=+55.552230848" watchObservedRunningTime="2025-05-17 00:10:06.506921784 +0000 UTC m=+55.553121147" May 17 00:10:06.517269 systemd[1]: run-containerd-runc-k8s.io-1a167093409b134776f3b18cce3f7eab0acbe6d5aa45db92c1b7d79ff8c91d29-runc.oRMInN.mount: Deactivated successfully. 
May 17 00:10:07.384906 containerd[1482]: time="2025-05-17T00:10:07.384800044Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:10:07.390460 containerd[1482]: time="2025-05-17T00:10:07.390356601Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.0: active requests=0, bytes read=8226240" May 17 00:10:07.394277 containerd[1482]: time="2025-05-17T00:10:07.394007998Z" level=info msg="ImageCreate event name:\"sha256:ebe7e098653491dec9f15f87d7f5d33f47b09d1d6f3ef83deeaaa6237024c045\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:10:07.397563 containerd[1482]: time="2025-05-17T00:10:07.397472591Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:27883a4104876fe239311dd93ce6efd0c4a87de7163d57a4c8d96bd65a287ffd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:10:07.398805 containerd[1482]: time="2025-05-17T00:10:07.398303289Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.0\" with image id \"sha256:ebe7e098653491dec9f15f87d7f5d33f47b09d1d6f3ef83deeaaa6237024c045\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:27883a4104876fe239311dd93ce6efd0c4a87de7163d57a4c8d96bd65a287ffd\", size \"9595481\" in 1.337769114s" May 17 00:10:07.398805 containerd[1482]: time="2025-05-17T00:10:07.398347890Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.0\" returns image reference \"sha256:ebe7e098653491dec9f15f87d7f5d33f47b09d1d6f3ef83deeaaa6237024c045\"" May 17 00:10:07.400511 containerd[1482]: time="2025-05-17T00:10:07.400279610Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\"" May 17 00:10:07.402114 containerd[1482]: time="2025-05-17T00:10:07.401494316Z" level=info msg="CreateContainer within sandbox \"11ea744b6a81a7c34ee26e1f9d5f37e5c5a189809d0335a885f8c253b9becf69\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 17 00:10:07.462221 containerd[1482]: time="2025-05-17T00:10:07.462170276Z" level=info msg="CreateContainer within sandbox \"11ea744b6a81a7c34ee26e1f9d5f37e5c5a189809d0335a885f8c253b9becf69\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"29b1810fa2aac5f565f643e0af55fc2a9b18b275527e3e0b4a73477271d88be5\"" May 17 00:10:07.464584 containerd[1482]: time="2025-05-17T00:10:07.463056374Z" level=info msg="StartContainer for \"29b1810fa2aac5f565f643e0af55fc2a9b18b275527e3e0b4a73477271d88be5\"" May 17 00:10:07.511000 systemd[1]: Started cri-containerd-29b1810fa2aac5f565f643e0af55fc2a9b18b275527e3e0b4a73477271d88be5.scope - libcontainer container 29b1810fa2aac5f565f643e0af55fc2a9b18b275527e3e0b4a73477271d88be5. 
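
The 403s in this log (goldmane above, whisker shortly after) all fail at the same step: before fetching any blob, containerd asks the registry's token endpoint for an anonymous bearer token, and ghcr.io refuses. A minimal reproduction of that token step outside containerd, using the exact URL from the failed goldmane pull:

```go
package main

import (
	"fmt"
	"log"
	"net/http"
)

func main() {
	// Token URL verbatim from the goldmane pull failure above. Registries
	// that permit anonymous pulls answer 200 with a token; a 403 here
	// (e.g. while the package is private or unavailable) aborts the pull
	// before any manifest or layer is requested.
	const tokenURL = "https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io"

	resp, err := http.Get(tokenURL)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	fmt.Println(resp.Status) // "403 Forbidden" reproduces the log's error
}
```
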
May 17 00:10:07.549566 containerd[1482]: time="2025-05-17T00:10:07.549447156Z" level=info msg="StartContainer for \"29b1810fa2aac5f565f643e0af55fc2a9b18b275527e3e0b4a73477271d88be5\" returns successfully" May 17 00:10:07.807688 containerd[1482]: time="2025-05-17T00:10:07.807526319Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:10:07.809994 containerd[1482]: time="2025-05-17T00:10:07.809875249Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.0: active requests=0, bytes read=77" May 17 00:10:07.812666 containerd[1482]: time="2025-05-17T00:10:07.812518984Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" with image id \"sha256:0d503660232383641bf9af3b7e4ef066c0e96a8ec586f123e5b56b6a196c983d\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\", size \"45822470\" in 412.190213ms" May 17 00:10:07.812666 containerd[1482]: time="2025-05-17T00:10:07.812569305Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" returns image reference \"sha256:0d503660232383641bf9af3b7e4ef066c0e96a8ec586f123e5b56b6a196c983d\"" May 17 00:10:07.815115 containerd[1482]: time="2025-05-17T00:10:07.814716831Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 17 00:10:07.817537 containerd[1482]: time="2025-05-17T00:10:07.817385967Z" level=info msg="CreateContainer within sandbox \"e8c61ea0f67fb1bd6602c6bab54aad1c55958fa13c29d8f02710564364d597ea\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 17 00:10:07.836365 containerd[1482]: time="2025-05-17T00:10:07.836180523Z" level=info msg="CreateContainer within sandbox \"e8c61ea0f67fb1bd6602c6bab54aad1c55958fa13c29d8f02710564364d597ea\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"e8354f23a5a0d7007b2a8626a8506825040c3c6b5b44b5d5da82f922c8115ef9\"" May 17 00:10:07.837277 containerd[1482]: time="2025-05-17T00:10:07.837226225Z" level=info msg="StartContainer for \"e8354f23a5a0d7007b2a8626a8506825040c3c6b5b44b5d5da82f922c8115ef9\"" May 17 00:10:07.874003 systemd[1]: Started cri-containerd-e8354f23a5a0d7007b2a8626a8506825040c3c6b5b44b5d5da82f922c8115ef9.scope - libcontainer container e8354f23a5a0d7007b2a8626a8506825040c3c6b5b44b5d5da82f922c8115ef9. 
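
After an ErrImagePull, kubelet parks the container in ImagePullBackOff (seen above for goldmane, and next for the whisker pull just requested) and retries on an exponential schedule; the documented ladder starts at 10s and doubles to a 300s cap. A sketch of that schedule, assuming the default parameters; jitter and reset behavior are internal to kubelet, so treat this as the shape, not the implementation:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Documented image-pull backoff: start at 10s, double per failure,
	// cap at five minutes.
	const (
		initial  = 10 * time.Second
		maxDelay = 300 * time.Second
	)
	for d := initial; ; d *= 2 {
		if d > maxDelay {
			d = maxDelay
		}
		fmt.Println(d) // 10s 20s 40s 1m20s 2m40s 5m0s
		if d == maxDelay {
			break
		}
	}
}
```
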
May 17 00:10:07.919468 containerd[1482]: time="2025-05-17T00:10:07.919392918Z" level=info msg="StartContainer for \"e8354f23a5a0d7007b2a8626a8506825040c3c6b5b44b5d5da82f922c8115ef9\" returns successfully" May 17 00:10:08.062828 containerd[1482]: time="2025-05-17T00:10:08.062584389Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:10:08.063836 containerd[1482]: time="2025-05-17T00:10:08.063787015Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:10:08.064192 containerd[1482]: time="2025-05-17T00:10:08.063927458Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 17 00:10:08.066252 kubelet[2692]: E0517 00:10:08.064372 2692 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:10:08.066252 kubelet[2692]: E0517 00:10:08.064444 2692 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:10:08.066252 kubelet[2692]: E0517 00:10:08.064703 2692 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:d2c11851cda5488fbb5d694e1e602685,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xdj9s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-66d874b469-vd68m_calico-system(0d2cf33f-bbcd-48f5-a11a-d546875d4c3f): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:10:08.069361 containerd[1482]: time="2025-05-17T00:10:08.068066106Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\"" May 17 00:10:09.507797 kubelet[2692]: I0517 00:10:09.506111 2692 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 00:10:09.620061 containerd[1482]: time="2025-05-17T00:10:09.620015152Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:10:09.622327 containerd[1482]: time="2025-05-17T00:10:09.622294721Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0: active requests=0, bytes read=13749925" May 17 00:10:09.623523 containerd[1482]: time="2025-05-17T00:10:09.623492306Z" level=info msg="ImageCreate event name:\"sha256:a5d5f2a68204ed0dbc50f8778616ee92a63c0e342d178a4620e6271484e5c8b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:10:09.626244 containerd[1482]: time="2025-05-17T00:10:09.626214685Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:dca5c16181edde2e860463615523ce457cd9dcfca85b7cfdcd6f3ea7de6f2ac8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:10:09.628357 containerd[1482]: time="2025-05-17T00:10:09.628318930Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" with image id \"sha256:a5d5f2a68204ed0dbc50f8778616ee92a63c0e342d178a4620e6271484e5c8b2\", repo tag 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:dca5c16181edde2e860463615523ce457cd9dcfca85b7cfdcd6f3ea7de6f2ac8\", size \"15119118\" in 1.560203223s" May 17 00:10:09.628503 containerd[1482]: time="2025-05-17T00:10:09.628487454Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" returns image reference \"sha256:a5d5f2a68204ed0dbc50f8778616ee92a63c0e342d178a4620e6271484e5c8b2\"" May 17 00:10:09.630659 containerd[1482]: time="2025-05-17T00:10:09.630610699Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 17 00:10:09.634254 containerd[1482]: time="2025-05-17T00:10:09.633837488Z" level=info msg="CreateContainer within sandbox \"11ea744b6a81a7c34ee26e1f9d5f37e5c5a189809d0335a885f8c253b9becf69\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" May 17 00:10:09.676366 containerd[1482]: time="2025-05-17T00:10:09.676300759Z" level=info msg="CreateContainer within sandbox \"11ea744b6a81a7c34ee26e1f9d5f37e5c5a189809d0335a885f8c253b9becf69\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"7192b2650b5497386d90c2ab678b31a0e4b03b92e6480f467e1bec61c14ae848\"" May 17 00:10:09.678044 containerd[1482]: time="2025-05-17T00:10:09.677998196Z" level=info msg="StartContainer for \"7192b2650b5497386d90c2ab678b31a0e4b03b92e6480f467e1bec61c14ae848\"" May 17 00:10:09.718372 systemd[1]: Started cri-containerd-7192b2650b5497386d90c2ab678b31a0e4b03b92e6480f467e1bec61c14ae848.scope - libcontainer container 7192b2650b5497386d90c2ab678b31a0e4b03b92e6480f467e1bec61c14ae848. May 17 00:10:09.756670 containerd[1482]: time="2025-05-17T00:10:09.756597682Z" level=info msg="StartContainer for \"7192b2650b5497386d90c2ab678b31a0e4b03b92e6480f467e1bec61c14ae848\" returns successfully" May 17 00:10:09.872290 containerd[1482]: time="2025-05-17T00:10:09.872138800Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:10:09.873595 containerd[1482]: time="2025-05-17T00:10:09.873541870Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:10:09.873892 containerd[1482]: time="2025-05-17T00:10:09.873685233Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 17 00:10:09.874324 kubelet[2692]: E0517 00:10:09.874174 2692 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 
00:10:09.874483 kubelet[2692]: E0517 00:10:09.874315 2692 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:10:09.876532 kubelet[2692]: E0517 00:10:09.874505 2692 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xdj9s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-66d874b469-vd68m_calico-system(0d2cf33f-bbcd-48f5-a11a-d546875d4c3f): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:10:09.876987 kubelet[2692]: E0517 00:10:09.876925 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for 
\"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-66d874b469-vd68m" podUID="0d2cf33f-bbcd-48f5-a11a-d546875d4c3f" May 17 00:10:10.268985 kubelet[2692]: I0517 00:10:10.268756 2692 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 May 17 00:10:10.273099 kubelet[2692]: I0517 00:10:10.272944 2692 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock May 17 00:10:10.532222 kubelet[2692]: I0517 00:10:10.531971 2692 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-75d5dfcff5-cq6gh" podStartSLOduration=35.318587115 podStartE2EDuration="42.531901245s" podCreationTimestamp="2025-05-17 00:09:28 +0000 UTC" firstStartedPulling="2025-05-17 00:10:00.600140874 +0000 UTC m=+49.646340237" lastFinishedPulling="2025-05-17 00:10:07.813455004 +0000 UTC m=+56.859654367" observedRunningTime="2025-05-17 00:10:08.523595957 +0000 UTC m=+57.569795320" watchObservedRunningTime="2025-05-17 00:10:10.531901245 +0000 UTC m=+59.578100608" May 17 00:10:11.059356 containerd[1482]: time="2025-05-17T00:10:11.059298019Z" level=info msg="StopPodSandbox for \"e4f2591371d3edd791256b5ae2cc9735ba80f85716d9d7c0b9c83f58f663d830\"" May 17 00:10:11.171991 containerd[1482]: 2025-05-17 00:10:11.109 [WARNING][5252] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e4f2591371d3edd791256b5ae2cc9735ba80f85716d9d7c0b9c83f58f663d830" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--e61ddff57a-k8s-calico--kube--controllers--b7fc8b74f--tr4ng-eth0", GenerateName:"calico-kube-controllers-b7fc8b74f-", Namespace:"calico-system", SelfLink:"", UID:"b9fba4a9-b127-4dfe-8a91-d1118962d299", ResourceVersion:"1016", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 9, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"b7fc8b74f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-e61ddff57a", ContainerID:"d41a6f92ee2bb260d50e241451bb67d468f06fdd990219c0b8d013eff6543ebb", Pod:"calico-kube-controllers-b7fc8b74f-tr4ng", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.22.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliabffd988ed4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:10:11.171991 containerd[1482]: 2025-05-17 00:10:11.109 [INFO][5252] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e4f2591371d3edd791256b5ae2cc9735ba80f85716d9d7c0b9c83f58f663d830" May 17 00:10:11.171991 containerd[1482]: 2025-05-17 00:10:11.109 [INFO][5252] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e4f2591371d3edd791256b5ae2cc9735ba80f85716d9d7c0b9c83f58f663d830" iface="eth0" netns="" May 17 00:10:11.171991 containerd[1482]: 2025-05-17 00:10:11.109 [INFO][5252] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e4f2591371d3edd791256b5ae2cc9735ba80f85716d9d7c0b9c83f58f663d830" May 17 00:10:11.171991 containerd[1482]: 2025-05-17 00:10:11.109 [INFO][5252] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e4f2591371d3edd791256b5ae2cc9735ba80f85716d9d7c0b9c83f58f663d830" May 17 00:10:11.171991 containerd[1482]: 2025-05-17 00:10:11.155 [INFO][5261] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e4f2591371d3edd791256b5ae2cc9735ba80f85716d9d7c0b9c83f58f663d830" HandleID="k8s-pod-network.e4f2591371d3edd791256b5ae2cc9735ba80f85716d9d7c0b9c83f58f663d830" Workload="ci--4081--3--3--n--e61ddff57a-k8s-calico--kube--controllers--b7fc8b74f--tr4ng-eth0" May 17 00:10:11.171991 containerd[1482]: 2025-05-17 00:10:11.155 [INFO][5261] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:10:11.171991 containerd[1482]: 2025-05-17 00:10:11.155 [INFO][5261] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:10:11.171991 containerd[1482]: 2025-05-17 00:10:11.165 [WARNING][5261] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e4f2591371d3edd791256b5ae2cc9735ba80f85716d9d7c0b9c83f58f663d830" HandleID="k8s-pod-network.e4f2591371d3edd791256b5ae2cc9735ba80f85716d9d7c0b9c83f58f663d830" Workload="ci--4081--3--3--n--e61ddff57a-k8s-calico--kube--controllers--b7fc8b74f--tr4ng-eth0" May 17 00:10:11.171991 containerd[1482]: 2025-05-17 00:10:11.165 [INFO][5261] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e4f2591371d3edd791256b5ae2cc9735ba80f85716d9d7c0b9c83f58f663d830" HandleID="k8s-pod-network.e4f2591371d3edd791256b5ae2cc9735ba80f85716d9d7c0b9c83f58f663d830" Workload="ci--4081--3--3--n--e61ddff57a-k8s-calico--kube--controllers--b7fc8b74f--tr4ng-eth0" May 17 00:10:11.171991 containerd[1482]: 2025-05-17 00:10:11.168 [INFO][5261] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:10:11.171991 containerd[1482]: 2025-05-17 00:10:11.170 [INFO][5252] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e4f2591371d3edd791256b5ae2cc9735ba80f85716d9d7c0b9c83f58f663d830" May 17 00:10:11.173318 containerd[1482]: time="2025-05-17T00:10:11.173001857Z" level=info msg="TearDown network for sandbox \"e4f2591371d3edd791256b5ae2cc9735ba80f85716d9d7c0b9c83f58f663d830\" successfully" May 17 00:10:11.173318 containerd[1482]: time="2025-05-17T00:10:11.173041338Z" level=info msg="StopPodSandbox for \"e4f2591371d3edd791256b5ae2cc9735ba80f85716d9d7c0b9c83f58f663d830\" returns successfully" May 17 00:10:11.176145 containerd[1482]: time="2025-05-17T00:10:11.176079924Z" level=info msg="RemovePodSandbox for \"e4f2591371d3edd791256b5ae2cc9735ba80f85716d9d7c0b9c83f58f663d830\"" May 17 00:10:11.181006 containerd[1482]: time="2025-05-17T00:10:11.180951750Z" level=info msg="Forcibly stopping sandbox \"e4f2591371d3edd791256b5ae2cc9735ba80f85716d9d7c0b9c83f58f663d830\"" May 17 00:10:11.318784 containerd[1482]: 2025-05-17 00:10:11.255 [WARNING][5277] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e4f2591371d3edd791256b5ae2cc9735ba80f85716d9d7c0b9c83f58f663d830" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--e61ddff57a-k8s-calico--kube--controllers--b7fc8b74f--tr4ng-eth0", GenerateName:"calico-kube-controllers-b7fc8b74f-", Namespace:"calico-system", SelfLink:"", UID:"b9fba4a9-b127-4dfe-8a91-d1118962d299", ResourceVersion:"1016", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 9, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"b7fc8b74f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-e61ddff57a", ContainerID:"d41a6f92ee2bb260d50e241451bb67d468f06fdd990219c0b8d013eff6543ebb", Pod:"calico-kube-controllers-b7fc8b74f-tr4ng", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.22.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliabffd988ed4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:10:11.318784 containerd[1482]: 2025-05-17 00:10:11.257 [INFO][5277] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e4f2591371d3edd791256b5ae2cc9735ba80f85716d9d7c0b9c83f58f663d830" May 17 00:10:11.318784 containerd[1482]: 2025-05-17 00:10:11.257 [INFO][5277] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e4f2591371d3edd791256b5ae2cc9735ba80f85716d9d7c0b9c83f58f663d830" iface="eth0" netns="" May 17 00:10:11.318784 containerd[1482]: 2025-05-17 00:10:11.257 [INFO][5277] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e4f2591371d3edd791256b5ae2cc9735ba80f85716d9d7c0b9c83f58f663d830" May 17 00:10:11.318784 containerd[1482]: 2025-05-17 00:10:11.257 [INFO][5277] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e4f2591371d3edd791256b5ae2cc9735ba80f85716d9d7c0b9c83f58f663d830" May 17 00:10:11.318784 containerd[1482]: 2025-05-17 00:10:11.301 [INFO][5285] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e4f2591371d3edd791256b5ae2cc9735ba80f85716d9d7c0b9c83f58f663d830" HandleID="k8s-pod-network.e4f2591371d3edd791256b5ae2cc9735ba80f85716d9d7c0b9c83f58f663d830" Workload="ci--4081--3--3--n--e61ddff57a-k8s-calico--kube--controllers--b7fc8b74f--tr4ng-eth0" May 17 00:10:11.318784 containerd[1482]: 2025-05-17 00:10:11.301 [INFO][5285] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:10:11.318784 containerd[1482]: 2025-05-17 00:10:11.301 [INFO][5285] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:10:11.318784 containerd[1482]: 2025-05-17 00:10:11.312 [WARNING][5285] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e4f2591371d3edd791256b5ae2cc9735ba80f85716d9d7c0b9c83f58f663d830" HandleID="k8s-pod-network.e4f2591371d3edd791256b5ae2cc9735ba80f85716d9d7c0b9c83f58f663d830" Workload="ci--4081--3--3--n--e61ddff57a-k8s-calico--kube--controllers--b7fc8b74f--tr4ng-eth0" May 17 00:10:11.318784 containerd[1482]: 2025-05-17 00:10:11.312 [INFO][5285] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e4f2591371d3edd791256b5ae2cc9735ba80f85716d9d7c0b9c83f58f663d830" HandleID="k8s-pod-network.e4f2591371d3edd791256b5ae2cc9735ba80f85716d9d7c0b9c83f58f663d830" Workload="ci--4081--3--3--n--e61ddff57a-k8s-calico--kube--controllers--b7fc8b74f--tr4ng-eth0" May 17 00:10:11.318784 containerd[1482]: 2025-05-17 00:10:11.315 [INFO][5285] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:10:11.318784 containerd[1482]: 2025-05-17 00:10:11.317 [INFO][5277] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e4f2591371d3edd791256b5ae2cc9735ba80f85716d9d7c0b9c83f58f663d830" May 17 00:10:11.319523 containerd[1482]: time="2025-05-17T00:10:11.318749353Z" level=info msg="TearDown network for sandbox \"e4f2591371d3edd791256b5ae2cc9735ba80f85716d9d7c0b9c83f58f663d830\" successfully" May 17 00:10:11.325070 containerd[1482]: time="2025-05-17T00:10:11.325004009Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e4f2591371d3edd791256b5ae2cc9735ba80f85716d9d7c0b9c83f58f663d830\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:10:11.325239 containerd[1482]: time="2025-05-17T00:10:11.325095091Z" level=info msg="RemovePodSandbox \"e4f2591371d3edd791256b5ae2cc9735ba80f85716d9d7c0b9c83f58f663d830\" returns successfully" May 17 00:10:11.326942 containerd[1482]: time="2025-05-17T00:10:11.326896611Z" level=info msg="StopPodSandbox for \"03a13ee4d5219eab1d198a40172c1940ee455fea57f73e0208bcffd2c81141f1\"" May 17 00:10:11.464172 containerd[1482]: 2025-05-17 00:10:11.379 [WARNING][5299] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="03a13ee4d5219eab1d198a40172c1940ee455fea57f73e0208bcffd2c81141f1" WorkloadEndpoint="ci--4081--3--3--n--e61ddff57a-k8s-whisker--7bc8db98b9--gxkgt-eth0" May 17 00:10:11.464172 containerd[1482]: 2025-05-17 00:10:11.380 [INFO][5299] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="03a13ee4d5219eab1d198a40172c1940ee455fea57f73e0208bcffd2c81141f1" May 17 00:10:11.464172 containerd[1482]: 2025-05-17 00:10:11.380 [INFO][5299] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="03a13ee4d5219eab1d198a40172c1940ee455fea57f73e0208bcffd2c81141f1" iface="eth0" netns="" May 17 00:10:11.464172 containerd[1482]: 2025-05-17 00:10:11.380 [INFO][5299] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="03a13ee4d5219eab1d198a40172c1940ee455fea57f73e0208bcffd2c81141f1" May 17 00:10:11.464172 containerd[1482]: 2025-05-17 00:10:11.380 [INFO][5299] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="03a13ee4d5219eab1d198a40172c1940ee455fea57f73e0208bcffd2c81141f1" May 17 00:10:11.464172 containerd[1482]: 2025-05-17 00:10:11.426 [INFO][5306] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="03a13ee4d5219eab1d198a40172c1940ee455fea57f73e0208bcffd2c81141f1" HandleID="k8s-pod-network.03a13ee4d5219eab1d198a40172c1940ee455fea57f73e0208bcffd2c81141f1" Workload="ci--4081--3--3--n--e61ddff57a-k8s-whisker--7bc8db98b9--gxkgt-eth0" May 17 00:10:11.464172 containerd[1482]: 2025-05-17 00:10:11.427 [INFO][5306] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:10:11.464172 containerd[1482]: 2025-05-17 00:10:11.427 [INFO][5306] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:10:11.464172 containerd[1482]: 2025-05-17 00:10:11.449 [WARNING][5306] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="03a13ee4d5219eab1d198a40172c1940ee455fea57f73e0208bcffd2c81141f1" HandleID="k8s-pod-network.03a13ee4d5219eab1d198a40172c1940ee455fea57f73e0208bcffd2c81141f1" Workload="ci--4081--3--3--n--e61ddff57a-k8s-whisker--7bc8db98b9--gxkgt-eth0" May 17 00:10:11.464172 containerd[1482]: 2025-05-17 00:10:11.449 [INFO][5306] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="03a13ee4d5219eab1d198a40172c1940ee455fea57f73e0208bcffd2c81141f1" HandleID="k8s-pod-network.03a13ee4d5219eab1d198a40172c1940ee455fea57f73e0208bcffd2c81141f1" Workload="ci--4081--3--3--n--e61ddff57a-k8s-whisker--7bc8db98b9--gxkgt-eth0" May 17 00:10:11.464172 containerd[1482]: 2025-05-17 00:10:11.455 [INFO][5306] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:10:11.464172 containerd[1482]: 2025-05-17 00:10:11.458 [INFO][5299] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="03a13ee4d5219eab1d198a40172c1940ee455fea57f73e0208bcffd2c81141f1" May 17 00:10:11.464172 containerd[1482]: time="2025-05-17T00:10:11.463390345Z" level=info msg="TearDown network for sandbox \"03a13ee4d5219eab1d198a40172c1940ee455fea57f73e0208bcffd2c81141f1\" successfully" May 17 00:10:11.464172 containerd[1482]: time="2025-05-17T00:10:11.463434146Z" level=info msg="StopPodSandbox for \"03a13ee4d5219eab1d198a40172c1940ee455fea57f73e0208bcffd2c81141f1\" returns successfully" May 17 00:10:11.466271 containerd[1482]: time="2025-05-17T00:10:11.465363028Z" level=info msg="RemovePodSandbox for \"03a13ee4d5219eab1d198a40172c1940ee455fea57f73e0208bcffd2c81141f1\"" May 17 00:10:11.466271 containerd[1482]: time="2025-05-17T00:10:11.465396989Z" level=info msg="Forcibly stopping sandbox \"03a13ee4d5219eab1d198a40172c1940ee455fea57f73e0208bcffd2c81141f1\"" May 17 00:10:11.633524 containerd[1482]: 2025-05-17 00:10:11.553 [WARNING][5321] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="03a13ee4d5219eab1d198a40172c1940ee455fea57f73e0208bcffd2c81141f1" WorkloadEndpoint="ci--4081--3--3--n--e61ddff57a-k8s-whisker--7bc8db98b9--gxkgt-eth0" May 17 00:10:11.633524 containerd[1482]: 2025-05-17 00:10:11.553 [INFO][5321] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="03a13ee4d5219eab1d198a40172c1940ee455fea57f73e0208bcffd2c81141f1" May 17 00:10:11.633524 containerd[1482]: 2025-05-17 00:10:11.553 [INFO][5321] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="03a13ee4d5219eab1d198a40172c1940ee455fea57f73e0208bcffd2c81141f1" iface="eth0" netns="" May 17 00:10:11.633524 containerd[1482]: 2025-05-17 00:10:11.553 [INFO][5321] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="03a13ee4d5219eab1d198a40172c1940ee455fea57f73e0208bcffd2c81141f1" May 17 00:10:11.633524 containerd[1482]: 2025-05-17 00:10:11.553 [INFO][5321] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="03a13ee4d5219eab1d198a40172c1940ee455fea57f73e0208bcffd2c81141f1" May 17 00:10:11.633524 containerd[1482]: 2025-05-17 00:10:11.606 [INFO][5329] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="03a13ee4d5219eab1d198a40172c1940ee455fea57f73e0208bcffd2c81141f1" HandleID="k8s-pod-network.03a13ee4d5219eab1d198a40172c1940ee455fea57f73e0208bcffd2c81141f1" Workload="ci--4081--3--3--n--e61ddff57a-k8s-whisker--7bc8db98b9--gxkgt-eth0" May 17 00:10:11.633524 containerd[1482]: 2025-05-17 00:10:11.606 [INFO][5329] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:10:11.633524 containerd[1482]: 2025-05-17 00:10:11.606 [INFO][5329] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:10:11.633524 containerd[1482]: 2025-05-17 00:10:11.626 [WARNING][5329] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="03a13ee4d5219eab1d198a40172c1940ee455fea57f73e0208bcffd2c81141f1" HandleID="k8s-pod-network.03a13ee4d5219eab1d198a40172c1940ee455fea57f73e0208bcffd2c81141f1" Workload="ci--4081--3--3--n--e61ddff57a-k8s-whisker--7bc8db98b9--gxkgt-eth0" May 17 00:10:11.633524 containerd[1482]: 2025-05-17 00:10:11.627 [INFO][5329] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="03a13ee4d5219eab1d198a40172c1940ee455fea57f73e0208bcffd2c81141f1" HandleID="k8s-pod-network.03a13ee4d5219eab1d198a40172c1940ee455fea57f73e0208bcffd2c81141f1" Workload="ci--4081--3--3--n--e61ddff57a-k8s-whisker--7bc8db98b9--gxkgt-eth0" May 17 00:10:11.633524 containerd[1482]: 2025-05-17 00:10:11.628 [INFO][5329] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:10:11.633524 containerd[1482]: 2025-05-17 00:10:11.632 [INFO][5321] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="03a13ee4d5219eab1d198a40172c1940ee455fea57f73e0208bcffd2c81141f1" May 17 00:10:11.634396 containerd[1482]: time="2025-05-17T00:10:11.633582734Z" level=info msg="TearDown network for sandbox \"03a13ee4d5219eab1d198a40172c1940ee455fea57f73e0208bcffd2c81141f1\" successfully" May 17 00:10:11.665262 containerd[1482]: time="2025-05-17T00:10:11.665192103Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"03a13ee4d5219eab1d198a40172c1940ee455fea57f73e0208bcffd2c81141f1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:10:11.665408 containerd[1482]: time="2025-05-17T00:10:11.665310665Z" level=info msg="RemovePodSandbox \"03a13ee4d5219eab1d198a40172c1940ee455fea57f73e0208bcffd2c81141f1\" returns successfully" May 17 00:10:11.665838 containerd[1482]: time="2025-05-17T00:10:11.665810956Z" level=info msg="StopPodSandbox for \"5dca6998a34eaafa08777f49af3a784cbe29e9630b06701ab3ec0f1bc4dbc6af\"" May 17 00:10:11.804986 containerd[1482]: 2025-05-17 00:10:11.744 [WARNING][5344] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5dca6998a34eaafa08777f49af3a784cbe29e9630b06701ab3ec0f1bc4dbc6af" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--e61ddff57a-k8s-goldmane--8f77d7b6c--522bq-eth0", GenerateName:"goldmane-8f77d7b6c-", Namespace:"calico-system", SelfLink:"", UID:"c2a53ade-ef79-46ee-b2b9-636e3cc942be", ResourceVersion:"994", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 9, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"8f77d7b6c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-e61ddff57a", ContainerID:"1dc11b0228c72b6b08389d569cbeae584091380bf8a1bb8589e60e7058e83b73", Pod:"goldmane-8f77d7b6c-522bq", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.22.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali43538a0f6e5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:10:11.804986 containerd[1482]: 2025-05-17 00:10:11.745 [INFO][5344] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5dca6998a34eaafa08777f49af3a784cbe29e9630b06701ab3ec0f1bc4dbc6af" May 17 00:10:11.804986 containerd[1482]: 2025-05-17 00:10:11.745 [INFO][5344] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5dca6998a34eaafa08777f49af3a784cbe29e9630b06701ab3ec0f1bc4dbc6af" iface="eth0" netns="" May 17 00:10:11.804986 containerd[1482]: 2025-05-17 00:10:11.745 [INFO][5344] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5dca6998a34eaafa08777f49af3a784cbe29e9630b06701ab3ec0f1bc4dbc6af" May 17 00:10:11.804986 containerd[1482]: 2025-05-17 00:10:11.745 [INFO][5344] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5dca6998a34eaafa08777f49af3a784cbe29e9630b06701ab3ec0f1bc4dbc6af" May 17 00:10:11.804986 containerd[1482]: 2025-05-17 00:10:11.781 [INFO][5351] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5dca6998a34eaafa08777f49af3a784cbe29e9630b06701ab3ec0f1bc4dbc6af" HandleID="k8s-pod-network.5dca6998a34eaafa08777f49af3a784cbe29e9630b06701ab3ec0f1bc4dbc6af" Workload="ci--4081--3--3--n--e61ddff57a-k8s-goldmane--8f77d7b6c--522bq-eth0" May 17 00:10:11.804986 containerd[1482]: 2025-05-17 00:10:11.782 [INFO][5351] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:10:11.804986 containerd[1482]: 2025-05-17 00:10:11.782 [INFO][5351] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:10:11.804986 containerd[1482]: 2025-05-17 00:10:11.797 [WARNING][5351] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5dca6998a34eaafa08777f49af3a784cbe29e9630b06701ab3ec0f1bc4dbc6af" HandleID="k8s-pod-network.5dca6998a34eaafa08777f49af3a784cbe29e9630b06701ab3ec0f1bc4dbc6af" Workload="ci--4081--3--3--n--e61ddff57a-k8s-goldmane--8f77d7b6c--522bq-eth0" May 17 00:10:11.804986 containerd[1482]: 2025-05-17 00:10:11.797 [INFO][5351] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5dca6998a34eaafa08777f49af3a784cbe29e9630b06701ab3ec0f1bc4dbc6af" HandleID="k8s-pod-network.5dca6998a34eaafa08777f49af3a784cbe29e9630b06701ab3ec0f1bc4dbc6af" Workload="ci--4081--3--3--n--e61ddff57a-k8s-goldmane--8f77d7b6c--522bq-eth0" May 17 00:10:11.804986 containerd[1482]: 2025-05-17 00:10:11.801 [INFO][5351] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:10:11.804986 containerd[1482]: 2025-05-17 00:10:11.802 [INFO][5344] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5dca6998a34eaafa08777f49af3a784cbe29e9630b06701ab3ec0f1bc4dbc6af" May 17 00:10:11.805407 containerd[1482]: time="2025-05-17T00:10:11.805035310Z" level=info msg="TearDown network for sandbox \"5dca6998a34eaafa08777f49af3a784cbe29e9630b06701ab3ec0f1bc4dbc6af\" successfully" May 17 00:10:11.805407 containerd[1482]: time="2025-05-17T00:10:11.805067111Z" level=info msg="StopPodSandbox for \"5dca6998a34eaafa08777f49af3a784cbe29e9630b06701ab3ec0f1bc4dbc6af\" returns successfully" May 17 00:10:11.807156 containerd[1482]: time="2025-05-17T00:10:11.807114875Z" level=info msg="RemovePodSandbox for \"5dca6998a34eaafa08777f49af3a784cbe29e9630b06701ab3ec0f1bc4dbc6af\"" May 17 00:10:11.807156 containerd[1482]: time="2025-05-17T00:10:11.807160836Z" level=info msg="Forcibly stopping sandbox \"5dca6998a34eaafa08777f49af3a784cbe29e9630b06701ab3ec0f1bc4dbc6af\"" May 17 00:10:11.950314 containerd[1482]: 2025-05-17 00:10:11.883 [WARNING][5365] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5dca6998a34eaafa08777f49af3a784cbe29e9630b06701ab3ec0f1bc4dbc6af" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--e61ddff57a-k8s-goldmane--8f77d7b6c--522bq-eth0", GenerateName:"goldmane-8f77d7b6c-", Namespace:"calico-system", SelfLink:"", UID:"c2a53ade-ef79-46ee-b2b9-636e3cc942be", ResourceVersion:"994", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 9, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"8f77d7b6c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-e61ddff57a", ContainerID:"1dc11b0228c72b6b08389d569cbeae584091380bf8a1bb8589e60e7058e83b73", Pod:"goldmane-8f77d7b6c-522bq", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.22.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali43538a0f6e5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:10:11.950314 containerd[1482]: 2025-05-17 00:10:11.883 [INFO][5365] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5dca6998a34eaafa08777f49af3a784cbe29e9630b06701ab3ec0f1bc4dbc6af" May 17 00:10:11.950314 containerd[1482]: 2025-05-17 00:10:11.883 [INFO][5365] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5dca6998a34eaafa08777f49af3a784cbe29e9630b06701ab3ec0f1bc4dbc6af" iface="eth0" netns="" May 17 00:10:11.950314 containerd[1482]: 2025-05-17 00:10:11.883 [INFO][5365] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5dca6998a34eaafa08777f49af3a784cbe29e9630b06701ab3ec0f1bc4dbc6af" May 17 00:10:11.950314 containerd[1482]: 2025-05-17 00:10:11.883 [INFO][5365] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5dca6998a34eaafa08777f49af3a784cbe29e9630b06701ab3ec0f1bc4dbc6af" May 17 00:10:11.950314 containerd[1482]: 2025-05-17 00:10:11.923 [INFO][5372] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5dca6998a34eaafa08777f49af3a784cbe29e9630b06701ab3ec0f1bc4dbc6af" HandleID="k8s-pod-network.5dca6998a34eaafa08777f49af3a784cbe29e9630b06701ab3ec0f1bc4dbc6af" Workload="ci--4081--3--3--n--e61ddff57a-k8s-goldmane--8f77d7b6c--522bq-eth0" May 17 00:10:11.950314 containerd[1482]: 2025-05-17 00:10:11.923 [INFO][5372] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:10:11.950314 containerd[1482]: 2025-05-17 00:10:11.924 [INFO][5372] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:10:11.950314 containerd[1482]: 2025-05-17 00:10:11.939 [WARNING][5372] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5dca6998a34eaafa08777f49af3a784cbe29e9630b06701ab3ec0f1bc4dbc6af" HandleID="k8s-pod-network.5dca6998a34eaafa08777f49af3a784cbe29e9630b06701ab3ec0f1bc4dbc6af" Workload="ci--4081--3--3--n--e61ddff57a-k8s-goldmane--8f77d7b6c--522bq-eth0" May 17 00:10:11.950314 containerd[1482]: 2025-05-17 00:10:11.939 [INFO][5372] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5dca6998a34eaafa08777f49af3a784cbe29e9630b06701ab3ec0f1bc4dbc6af" HandleID="k8s-pod-network.5dca6998a34eaafa08777f49af3a784cbe29e9630b06701ab3ec0f1bc4dbc6af" Workload="ci--4081--3--3--n--e61ddff57a-k8s-goldmane--8f77d7b6c--522bq-eth0" May 17 00:10:11.950314 containerd[1482]: 2025-05-17 00:10:11.945 [INFO][5372] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:10:11.950314 containerd[1482]: 2025-05-17 00:10:11.946 [INFO][5365] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5dca6998a34eaafa08777f49af3a784cbe29e9630b06701ab3ec0f1bc4dbc6af" May 17 00:10:11.950314 containerd[1482]: time="2025-05-17T00:10:11.950037470Z" level=info msg="TearDown network for sandbox \"5dca6998a34eaafa08777f49af3a784cbe29e9630b06701ab3ec0f1bc4dbc6af\" successfully" May 17 00:10:11.957077 containerd[1482]: time="2025-05-17T00:10:11.956905860Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5dca6998a34eaafa08777f49af3a784cbe29e9630b06701ab3ec0f1bc4dbc6af\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:10:11.957077 containerd[1482]: time="2025-05-17T00:10:11.957049463Z" level=info msg="RemovePodSandbox \"5dca6998a34eaafa08777f49af3a784cbe29e9630b06701ab3ec0f1bc4dbc6af\" returns successfully" May 17 00:10:11.958427 containerd[1482]: time="2025-05-17T00:10:11.958374452Z" level=info msg="StopPodSandbox for \"2c5263c67206c435400709369a39654c12ff69c0752848125b4d962af31ac7e8\"" May 17 00:10:12.063784 containerd[1482]: 2025-05-17 00:10:12.017 [WARNING][5387] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2c5263c67206c435400709369a39654c12ff69c0752848125b4d962af31ac7e8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--e61ddff57a-k8s-calico--apiserver--75d5dfcff5--cq6gh-eth0", GenerateName:"calico-apiserver-75d5dfcff5-", Namespace:"calico-apiserver", SelfLink:"", UID:"5f0c7770-d646-424e-9f31-ff80f202f1a8", ResourceVersion:"1032", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 9, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"75d5dfcff5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-e61ddff57a", ContainerID:"e8c61ea0f67fb1bd6602c6bab54aad1c55958fa13c29d8f02710564364d597ea", Pod:"calico-apiserver-75d5dfcff5-cq6gh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.22.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali254821d9f24", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:10:12.063784 containerd[1482]: 2025-05-17 00:10:12.018 [INFO][5387] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2c5263c67206c435400709369a39654c12ff69c0752848125b4d962af31ac7e8" May 17 00:10:12.063784 containerd[1482]: 2025-05-17 00:10:12.018 [INFO][5387] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2c5263c67206c435400709369a39654c12ff69c0752848125b4d962af31ac7e8" iface="eth0" netns="" May 17 00:10:12.063784 containerd[1482]: 2025-05-17 00:10:12.018 [INFO][5387] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2c5263c67206c435400709369a39654c12ff69c0752848125b4d962af31ac7e8" May 17 00:10:12.063784 containerd[1482]: 2025-05-17 00:10:12.018 [INFO][5387] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2c5263c67206c435400709369a39654c12ff69c0752848125b4d962af31ac7e8" May 17 00:10:12.063784 containerd[1482]: 2025-05-17 00:10:12.045 [INFO][5394] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2c5263c67206c435400709369a39654c12ff69c0752848125b4d962af31ac7e8" HandleID="k8s-pod-network.2c5263c67206c435400709369a39654c12ff69c0752848125b4d962af31ac7e8" Workload="ci--4081--3--3--n--e61ddff57a-k8s-calico--apiserver--75d5dfcff5--cq6gh-eth0" May 17 00:10:12.063784 containerd[1482]: 2025-05-17 00:10:12.045 [INFO][5394] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:10:12.063784 containerd[1482]: 2025-05-17 00:10:12.045 [INFO][5394] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:10:12.063784 containerd[1482]: 2025-05-17 00:10:12.056 [WARNING][5394] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2c5263c67206c435400709369a39654c12ff69c0752848125b4d962af31ac7e8" HandleID="k8s-pod-network.2c5263c67206c435400709369a39654c12ff69c0752848125b4d962af31ac7e8" Workload="ci--4081--3--3--n--e61ddff57a-k8s-calico--apiserver--75d5dfcff5--cq6gh-eth0" May 17 00:10:12.063784 containerd[1482]: 2025-05-17 00:10:12.057 [INFO][5394] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2c5263c67206c435400709369a39654c12ff69c0752848125b4d962af31ac7e8" HandleID="k8s-pod-network.2c5263c67206c435400709369a39654c12ff69c0752848125b4d962af31ac7e8" Workload="ci--4081--3--3--n--e61ddff57a-k8s-calico--apiserver--75d5dfcff5--cq6gh-eth0" May 17 00:10:12.063784 containerd[1482]: 2025-05-17 00:10:12.059 [INFO][5394] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:10:12.063784 containerd[1482]: 2025-05-17 00:10:12.061 [INFO][5387] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2c5263c67206c435400709369a39654c12ff69c0752848125b4d962af31ac7e8" May 17 00:10:12.063784 containerd[1482]: time="2025-05-17T00:10:12.063465472Z" level=info msg="TearDown network for sandbox \"2c5263c67206c435400709369a39654c12ff69c0752848125b4d962af31ac7e8\" successfully" May 17 00:10:12.063784 containerd[1482]: time="2025-05-17T00:10:12.063493912Z" level=info msg="StopPodSandbox for \"2c5263c67206c435400709369a39654c12ff69c0752848125b4d962af31ac7e8\" returns successfully" May 17 00:10:12.068369 containerd[1482]: time="2025-05-17T00:10:12.065363433Z" level=info msg="RemovePodSandbox for \"2c5263c67206c435400709369a39654c12ff69c0752848125b4d962af31ac7e8\"" May 17 00:10:12.068369 containerd[1482]: time="2025-05-17T00:10:12.065465756Z" level=info msg="Forcibly stopping sandbox \"2c5263c67206c435400709369a39654c12ff69c0752848125b4d962af31ac7e8\"" May 17 00:10:12.171806 containerd[1482]: 2025-05-17 00:10:12.119 [WARNING][5409] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2c5263c67206c435400709369a39654c12ff69c0752848125b4d962af31ac7e8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--e61ddff57a-k8s-calico--apiserver--75d5dfcff5--cq6gh-eth0", GenerateName:"calico-apiserver-75d5dfcff5-", Namespace:"calico-apiserver", SelfLink:"", UID:"5f0c7770-d646-424e-9f31-ff80f202f1a8", ResourceVersion:"1032", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 9, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"75d5dfcff5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-e61ddff57a", ContainerID:"e8c61ea0f67fb1bd6602c6bab54aad1c55958fa13c29d8f02710564364d597ea", Pod:"calico-apiserver-75d5dfcff5-cq6gh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.22.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali254821d9f24", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:10:12.171806 containerd[1482]: 2025-05-17 00:10:12.124 [INFO][5409] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2c5263c67206c435400709369a39654c12ff69c0752848125b4d962af31ac7e8" May 17 00:10:12.171806 containerd[1482]: 2025-05-17 00:10:12.124 [INFO][5409] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2c5263c67206c435400709369a39654c12ff69c0752848125b4d962af31ac7e8" iface="eth0" netns="" May 17 00:10:12.171806 containerd[1482]: 2025-05-17 00:10:12.124 [INFO][5409] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2c5263c67206c435400709369a39654c12ff69c0752848125b4d962af31ac7e8" May 17 00:10:12.171806 containerd[1482]: 2025-05-17 00:10:12.124 [INFO][5409] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2c5263c67206c435400709369a39654c12ff69c0752848125b4d962af31ac7e8" May 17 00:10:12.171806 containerd[1482]: 2025-05-17 00:10:12.154 [INFO][5416] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2c5263c67206c435400709369a39654c12ff69c0752848125b4d962af31ac7e8" HandleID="k8s-pod-network.2c5263c67206c435400709369a39654c12ff69c0752848125b4d962af31ac7e8" Workload="ci--4081--3--3--n--e61ddff57a-k8s-calico--apiserver--75d5dfcff5--cq6gh-eth0" May 17 00:10:12.171806 containerd[1482]: 2025-05-17 00:10:12.154 [INFO][5416] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:10:12.171806 containerd[1482]: 2025-05-17 00:10:12.154 [INFO][5416] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:10:12.171806 containerd[1482]: 2025-05-17 00:10:12.164 [WARNING][5416] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2c5263c67206c435400709369a39654c12ff69c0752848125b4d962af31ac7e8" HandleID="k8s-pod-network.2c5263c67206c435400709369a39654c12ff69c0752848125b4d962af31ac7e8" Workload="ci--4081--3--3--n--e61ddff57a-k8s-calico--apiserver--75d5dfcff5--cq6gh-eth0" May 17 00:10:12.171806 containerd[1482]: 2025-05-17 00:10:12.164 [INFO][5416] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2c5263c67206c435400709369a39654c12ff69c0752848125b4d962af31ac7e8" HandleID="k8s-pod-network.2c5263c67206c435400709369a39654c12ff69c0752848125b4d962af31ac7e8" Workload="ci--4081--3--3--n--e61ddff57a-k8s-calico--apiserver--75d5dfcff5--cq6gh-eth0" May 17 00:10:12.171806 containerd[1482]: 2025-05-17 00:10:12.166 [INFO][5416] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:10:12.171806 containerd[1482]: 2025-05-17 00:10:12.169 [INFO][5409] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2c5263c67206c435400709369a39654c12ff69c0752848125b4d962af31ac7e8" May 17 00:10:12.172271 containerd[1482]: time="2025-05-17T00:10:12.171863531Z" level=info msg="TearDown network for sandbox \"2c5263c67206c435400709369a39654c12ff69c0752848125b4d962af31ac7e8\" successfully" May 17 00:10:12.176316 containerd[1482]: time="2025-05-17T00:10:12.176248628Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2c5263c67206c435400709369a39654c12ff69c0752848125b4d962af31ac7e8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:10:12.176452 containerd[1482]: time="2025-05-17T00:10:12.176341030Z" level=info msg="RemovePodSandbox \"2c5263c67206c435400709369a39654c12ff69c0752848125b4d962af31ac7e8\" returns successfully" May 17 00:10:12.177014 containerd[1482]: time="2025-05-17T00:10:12.176978724Z" level=info msg="StopPodSandbox for \"5ddd748a415ca45be339a78cd8f1b01d3c170c3abe79f132299b15b7b570db0c\"" May 17 00:10:12.294893 containerd[1482]: 2025-05-17 00:10:12.238 [WARNING][5430] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5ddd748a415ca45be339a78cd8f1b01d3c170c3abe79f132299b15b7b570db0c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--e61ddff57a-k8s-coredns--7c65d6cfc9--cvkpr-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"65842dcb-5f73-429f-9e8d-0092e98ecc3e", ResourceVersion:"943", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 9, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-e61ddff57a", ContainerID:"ff2a257f8c304b4e91d641e251d31a9716426dea10ca1b94df828529592b303a", Pod:"coredns-7c65d6cfc9-cvkpr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.22.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif0f4c78ed31", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:10:12.294893 containerd[1482]: 2025-05-17 00:10:12.238 [INFO][5430] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5ddd748a415ca45be339a78cd8f1b01d3c170c3abe79f132299b15b7b570db0c" May 17 00:10:12.294893 containerd[1482]: 2025-05-17 00:10:12.238 [INFO][5430] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5ddd748a415ca45be339a78cd8f1b01d3c170c3abe79f132299b15b7b570db0c" iface="eth0" netns="" May 17 00:10:12.294893 containerd[1482]: 2025-05-17 00:10:12.238 [INFO][5430] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5ddd748a415ca45be339a78cd8f1b01d3c170c3abe79f132299b15b7b570db0c" May 17 00:10:12.294893 containerd[1482]: 2025-05-17 00:10:12.238 [INFO][5430] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5ddd748a415ca45be339a78cd8f1b01d3c170c3abe79f132299b15b7b570db0c" May 17 00:10:12.294893 containerd[1482]: 2025-05-17 00:10:12.272 [INFO][5437] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5ddd748a415ca45be339a78cd8f1b01d3c170c3abe79f132299b15b7b570db0c" HandleID="k8s-pod-network.5ddd748a415ca45be339a78cd8f1b01d3c170c3abe79f132299b15b7b570db0c" Workload="ci--4081--3--3--n--e61ddff57a-k8s-coredns--7c65d6cfc9--cvkpr-eth0" May 17 00:10:12.294893 containerd[1482]: 2025-05-17 00:10:12.273 [INFO][5437] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:10:12.294893 containerd[1482]: 2025-05-17 00:10:12.273 [INFO][5437] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:10:12.294893 containerd[1482]: 2025-05-17 00:10:12.289 [WARNING][5437] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5ddd748a415ca45be339a78cd8f1b01d3c170c3abe79f132299b15b7b570db0c" HandleID="k8s-pod-network.5ddd748a415ca45be339a78cd8f1b01d3c170c3abe79f132299b15b7b570db0c" Workload="ci--4081--3--3--n--e61ddff57a-k8s-coredns--7c65d6cfc9--cvkpr-eth0" May 17 00:10:12.294893 containerd[1482]: 2025-05-17 00:10:12.289 [INFO][5437] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5ddd748a415ca45be339a78cd8f1b01d3c170c3abe79f132299b15b7b570db0c" HandleID="k8s-pod-network.5ddd748a415ca45be339a78cd8f1b01d3c170c3abe79f132299b15b7b570db0c" Workload="ci--4081--3--3--n--e61ddff57a-k8s-coredns--7c65d6cfc9--cvkpr-eth0" May 17 00:10:12.294893 containerd[1482]: 2025-05-17 00:10:12.291 [INFO][5437] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:10:12.294893 containerd[1482]: 2025-05-17 00:10:12.293 [INFO][5430] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5ddd748a415ca45be339a78cd8f1b01d3c170c3abe79f132299b15b7b570db0c" May 17 00:10:12.294893 containerd[1482]: time="2025-05-17T00:10:12.294820391Z" level=info msg="TearDown network for sandbox \"5ddd748a415ca45be339a78cd8f1b01d3c170c3abe79f132299b15b7b570db0c\" successfully" May 17 00:10:12.294893 containerd[1482]: time="2025-05-17T00:10:12.294848511Z" level=info msg="StopPodSandbox for \"5ddd748a415ca45be339a78cd8f1b01d3c170c3abe79f132299b15b7b570db0c\" returns successfully" May 17 00:10:12.296384 containerd[1482]: time="2025-05-17T00:10:12.296336784Z" level=info msg="RemovePodSandbox for \"5ddd748a415ca45be339a78cd8f1b01d3c170c3abe79f132299b15b7b570db0c\"" May 17 00:10:12.296384 containerd[1482]: time="2025-05-17T00:10:12.296386465Z" level=info msg="Forcibly stopping sandbox \"5ddd748a415ca45be339a78cd8f1b01d3c170c3abe79f132299b15b7b570db0c\"" May 17 00:10:12.397827 containerd[1482]: 2025-05-17 00:10:12.346 [WARNING][5451] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5ddd748a415ca45be339a78cd8f1b01d3c170c3abe79f132299b15b7b570db0c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--e61ddff57a-k8s-coredns--7c65d6cfc9--cvkpr-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"65842dcb-5f73-429f-9e8d-0092e98ecc3e", ResourceVersion:"943", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 9, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-e61ddff57a", ContainerID:"ff2a257f8c304b4e91d641e251d31a9716426dea10ca1b94df828529592b303a", Pod:"coredns-7c65d6cfc9-cvkpr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.22.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif0f4c78ed31", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:10:12.397827 containerd[1482]: 2025-05-17 00:10:12.347 [INFO][5451] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5ddd748a415ca45be339a78cd8f1b01d3c170c3abe79f132299b15b7b570db0c" May 17 00:10:12.397827 containerd[1482]: 2025-05-17 00:10:12.347 [INFO][5451] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5ddd748a415ca45be339a78cd8f1b01d3c170c3abe79f132299b15b7b570db0c" iface="eth0" netns="" May 17 00:10:12.397827 containerd[1482]: 2025-05-17 00:10:12.347 [INFO][5451] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5ddd748a415ca45be339a78cd8f1b01d3c170c3abe79f132299b15b7b570db0c" May 17 00:10:12.397827 containerd[1482]: 2025-05-17 00:10:12.347 [INFO][5451] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5ddd748a415ca45be339a78cd8f1b01d3c170c3abe79f132299b15b7b570db0c" May 17 00:10:12.397827 containerd[1482]: 2025-05-17 00:10:12.373 [INFO][5458] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5ddd748a415ca45be339a78cd8f1b01d3c170c3abe79f132299b15b7b570db0c" HandleID="k8s-pod-network.5ddd748a415ca45be339a78cd8f1b01d3c170c3abe79f132299b15b7b570db0c" Workload="ci--4081--3--3--n--e61ddff57a-k8s-coredns--7c65d6cfc9--cvkpr-eth0" May 17 00:10:12.397827 containerd[1482]: 2025-05-17 00:10:12.373 [INFO][5458] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:10:12.397827 containerd[1482]: 2025-05-17 00:10:12.373 [INFO][5458] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:10:12.397827 containerd[1482]: 2025-05-17 00:10:12.388 [WARNING][5458] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5ddd748a415ca45be339a78cd8f1b01d3c170c3abe79f132299b15b7b570db0c" HandleID="k8s-pod-network.5ddd748a415ca45be339a78cd8f1b01d3c170c3abe79f132299b15b7b570db0c" Workload="ci--4081--3--3--n--e61ddff57a-k8s-coredns--7c65d6cfc9--cvkpr-eth0" May 17 00:10:12.397827 containerd[1482]: 2025-05-17 00:10:12.388 [INFO][5458] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5ddd748a415ca45be339a78cd8f1b01d3c170c3abe79f132299b15b7b570db0c" HandleID="k8s-pod-network.5ddd748a415ca45be339a78cd8f1b01d3c170c3abe79f132299b15b7b570db0c" Workload="ci--4081--3--3--n--e61ddff57a-k8s-coredns--7c65d6cfc9--cvkpr-eth0" May 17 00:10:12.397827 containerd[1482]: 2025-05-17 00:10:12.391 [INFO][5458] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:10:12.397827 containerd[1482]: 2025-05-17 00:10:12.393 [INFO][5451] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5ddd748a415ca45be339a78cd8f1b01d3c170c3abe79f132299b15b7b570db0c" May 17 00:10:12.397827 containerd[1482]: time="2025-05-17T00:10:12.395927050Z" level=info msg="TearDown network for sandbox \"5ddd748a415ca45be339a78cd8f1b01d3c170c3abe79f132299b15b7b570db0c\" successfully" May 17 00:10:12.401846 containerd[1482]: time="2025-05-17T00:10:12.401625696Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5ddd748a415ca45be339a78cd8f1b01d3c170c3abe79f132299b15b7b570db0c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:10:12.401846 containerd[1482]: time="2025-05-17T00:10:12.401702937Z" level=info msg="RemovePodSandbox \"5ddd748a415ca45be339a78cd8f1b01d3c170c3abe79f132299b15b7b570db0c\" returns successfully" May 17 00:10:12.402810 containerd[1482]: time="2025-05-17T00:10:12.402463074Z" level=info msg="StopPodSandbox for \"b8d19c2f1a2287dae8c07d25aeda4f0e4d19383d3ae94d2775f9fcd14dbc2f3b\"" May 17 00:10:12.527186 containerd[1482]: 2025-05-17 00:10:12.457 [WARNING][5472] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b8d19c2f1a2287dae8c07d25aeda4f0e4d19383d3ae94d2775f9fcd14dbc2f3b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--e61ddff57a-k8s-csi--node--driver--hhzhx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"35348305-3030-4a37-b275-4a2201a5f384", ResourceVersion:"1048", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 9, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"68bf44dd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-e61ddff57a", ContainerID:"11ea744b6a81a7c34ee26e1f9d5f37e5c5a189809d0335a885f8c253b9becf69", Pod:"csi-node-driver-hhzhx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.22.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali7a3cccafb70", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:10:12.527186 containerd[1482]: 2025-05-17 00:10:12.458 [INFO][5472] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b8d19c2f1a2287dae8c07d25aeda4f0e4d19383d3ae94d2775f9fcd14dbc2f3b" May 17 00:10:12.527186 containerd[1482]: 2025-05-17 00:10:12.458 [INFO][5472] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b8d19c2f1a2287dae8c07d25aeda4f0e4d19383d3ae94d2775f9fcd14dbc2f3b" iface="eth0" netns="" May 17 00:10:12.527186 containerd[1482]: 2025-05-17 00:10:12.458 [INFO][5472] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b8d19c2f1a2287dae8c07d25aeda4f0e4d19383d3ae94d2775f9fcd14dbc2f3b" May 17 00:10:12.527186 containerd[1482]: 2025-05-17 00:10:12.458 [INFO][5472] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b8d19c2f1a2287dae8c07d25aeda4f0e4d19383d3ae94d2775f9fcd14dbc2f3b" May 17 00:10:12.527186 containerd[1482]: 2025-05-17 00:10:12.507 [INFO][5479] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b8d19c2f1a2287dae8c07d25aeda4f0e4d19383d3ae94d2775f9fcd14dbc2f3b" HandleID="k8s-pod-network.b8d19c2f1a2287dae8c07d25aeda4f0e4d19383d3ae94d2775f9fcd14dbc2f3b" Workload="ci--4081--3--3--n--e61ddff57a-k8s-csi--node--driver--hhzhx-eth0" May 17 00:10:12.527186 containerd[1482]: 2025-05-17 00:10:12.508 [INFO][5479] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:10:12.527186 containerd[1482]: 2025-05-17 00:10:12.508 [INFO][5479] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:10:12.527186 containerd[1482]: 2025-05-17 00:10:12.519 [WARNING][5479] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b8d19c2f1a2287dae8c07d25aeda4f0e4d19383d3ae94d2775f9fcd14dbc2f3b" HandleID="k8s-pod-network.b8d19c2f1a2287dae8c07d25aeda4f0e4d19383d3ae94d2775f9fcd14dbc2f3b" Workload="ci--4081--3--3--n--e61ddff57a-k8s-csi--node--driver--hhzhx-eth0" May 17 00:10:12.527186 containerd[1482]: 2025-05-17 00:10:12.519 [INFO][5479] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b8d19c2f1a2287dae8c07d25aeda4f0e4d19383d3ae94d2775f9fcd14dbc2f3b" HandleID="k8s-pod-network.b8d19c2f1a2287dae8c07d25aeda4f0e4d19383d3ae94d2775f9fcd14dbc2f3b" Workload="ci--4081--3--3--n--e61ddff57a-k8s-csi--node--driver--hhzhx-eth0" May 17 00:10:12.527186 containerd[1482]: 2025-05-17 00:10:12.521 [INFO][5479] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:10:12.527186 containerd[1482]: 2025-05-17 00:10:12.524 [INFO][5472] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b8d19c2f1a2287dae8c07d25aeda4f0e4d19383d3ae94d2775f9fcd14dbc2f3b" May 17 00:10:12.528357 containerd[1482]: time="2025-05-17T00:10:12.527317415Z" level=info msg="TearDown network for sandbox \"b8d19c2f1a2287dae8c07d25aeda4f0e4d19383d3ae94d2775f9fcd14dbc2f3b\" successfully" May 17 00:10:12.528357 containerd[1482]: time="2025-05-17T00:10:12.527828906Z" level=info msg="StopPodSandbox for \"b8d19c2f1a2287dae8c07d25aeda4f0e4d19383d3ae94d2775f9fcd14dbc2f3b\" returns successfully" May 17 00:10:12.531172 containerd[1482]: time="2025-05-17T00:10:12.531078777Z" level=info msg="RemovePodSandbox for \"b8d19c2f1a2287dae8c07d25aeda4f0e4d19383d3ae94d2775f9fcd14dbc2f3b\"" May 17 00:10:12.531572 containerd[1482]: time="2025-05-17T00:10:12.531392184Z" level=info msg="Forcibly stopping sandbox \"b8d19c2f1a2287dae8c07d25aeda4f0e4d19383d3ae94d2775f9fcd14dbc2f3b\"" May 17 00:10:12.697633 containerd[1482]: 2025-05-17 00:10:12.606 [WARNING][5493] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b8d19c2f1a2287dae8c07d25aeda4f0e4d19383d3ae94d2775f9fcd14dbc2f3b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--e61ddff57a-k8s-csi--node--driver--hhzhx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"35348305-3030-4a37-b275-4a2201a5f384", ResourceVersion:"1048", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 9, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"68bf44dd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-e61ddff57a", ContainerID:"11ea744b6a81a7c34ee26e1f9d5f37e5c5a189809d0335a885f8c253b9becf69", Pod:"csi-node-driver-hhzhx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.22.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali7a3cccafb70", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:10:12.697633 containerd[1482]: 2025-05-17 00:10:12.607 [INFO][5493] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b8d19c2f1a2287dae8c07d25aeda4f0e4d19383d3ae94d2775f9fcd14dbc2f3b" May 17 00:10:12.697633 containerd[1482]: 2025-05-17 00:10:12.608 [INFO][5493] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b8d19c2f1a2287dae8c07d25aeda4f0e4d19383d3ae94d2775f9fcd14dbc2f3b" iface="eth0" netns="" May 17 00:10:12.697633 containerd[1482]: 2025-05-17 00:10:12.608 [INFO][5493] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b8d19c2f1a2287dae8c07d25aeda4f0e4d19383d3ae94d2775f9fcd14dbc2f3b" May 17 00:10:12.697633 containerd[1482]: 2025-05-17 00:10:12.608 [INFO][5493] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b8d19c2f1a2287dae8c07d25aeda4f0e4d19383d3ae94d2775f9fcd14dbc2f3b" May 17 00:10:12.697633 containerd[1482]: 2025-05-17 00:10:12.671 [INFO][5500] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b8d19c2f1a2287dae8c07d25aeda4f0e4d19383d3ae94d2775f9fcd14dbc2f3b" HandleID="k8s-pod-network.b8d19c2f1a2287dae8c07d25aeda4f0e4d19383d3ae94d2775f9fcd14dbc2f3b" Workload="ci--4081--3--3--n--e61ddff57a-k8s-csi--node--driver--hhzhx-eth0" May 17 00:10:12.697633 containerd[1482]: 2025-05-17 00:10:12.671 [INFO][5500] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:10:12.697633 containerd[1482]: 2025-05-17 00:10:12.671 [INFO][5500] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:10:12.697633 containerd[1482]: 2025-05-17 00:10:12.690 [WARNING][5500] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b8d19c2f1a2287dae8c07d25aeda4f0e4d19383d3ae94d2775f9fcd14dbc2f3b" HandleID="k8s-pod-network.b8d19c2f1a2287dae8c07d25aeda4f0e4d19383d3ae94d2775f9fcd14dbc2f3b" Workload="ci--4081--3--3--n--e61ddff57a-k8s-csi--node--driver--hhzhx-eth0" May 17 00:10:12.697633 containerd[1482]: 2025-05-17 00:10:12.690 [INFO][5500] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b8d19c2f1a2287dae8c07d25aeda4f0e4d19383d3ae94d2775f9fcd14dbc2f3b" HandleID="k8s-pod-network.b8d19c2f1a2287dae8c07d25aeda4f0e4d19383d3ae94d2775f9fcd14dbc2f3b" Workload="ci--4081--3--3--n--e61ddff57a-k8s-csi--node--driver--hhzhx-eth0" May 17 00:10:12.697633 containerd[1482]: 2025-05-17 00:10:12.693 [INFO][5500] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:10:12.697633 containerd[1482]: 2025-05-17 00:10:12.695 [INFO][5493] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b8d19c2f1a2287dae8c07d25aeda4f0e4d19383d3ae94d2775f9fcd14dbc2f3b" May 17 00:10:12.700872 containerd[1482]: time="2025-05-17T00:10:12.700383734Z" level=info msg="TearDown network for sandbox \"b8d19c2f1a2287dae8c07d25aeda4f0e4d19383d3ae94d2775f9fcd14dbc2f3b\" successfully" May 17 00:10:12.705430 containerd[1482]: time="2025-05-17T00:10:12.705119758Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b8d19c2f1a2287dae8c07d25aeda4f0e4d19383d3ae94d2775f9fcd14dbc2f3b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:10:12.705430 containerd[1482]: time="2025-05-17T00:10:12.705203280Z" level=info msg="RemovePodSandbox \"b8d19c2f1a2287dae8c07d25aeda4f0e4d19383d3ae94d2775f9fcd14dbc2f3b\" returns successfully" May 17 00:10:12.707849 containerd[1482]: time="2025-05-17T00:10:12.705748372Z" level=info msg="StopPodSandbox for \"8cef55272004fa7f71d47a60184f91417c07e611093881b643bc5e0cdce08666\"" May 17 00:10:12.815806 containerd[1482]: 2025-05-17 00:10:12.762 [WARNING][5514] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8cef55272004fa7f71d47a60184f91417c07e611093881b643bc5e0cdce08666" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--e61ddff57a-k8s-coredns--7c65d6cfc9--r5wvd-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"fd2d0fe1-ef42-47d9-a89f-b98fdf94081f", ResourceVersion:"924", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 9, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-e61ddff57a", ContainerID:"b564ddcdd62285a6973e20c258e57c8edb7887d2d96372cb8d816605437a36c8", Pod:"coredns-7c65d6cfc9-r5wvd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.22.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic2abcc195e9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:10:12.815806 containerd[1482]: 2025-05-17 00:10:12.762 [INFO][5514] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8cef55272004fa7f71d47a60184f91417c07e611093881b643bc5e0cdce08666" May 17 00:10:12.815806 containerd[1482]: 2025-05-17 00:10:12.762 [INFO][5514] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8cef55272004fa7f71d47a60184f91417c07e611093881b643bc5e0cdce08666" iface="eth0" netns="" May 17 00:10:12.815806 containerd[1482]: 2025-05-17 00:10:12.762 [INFO][5514] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8cef55272004fa7f71d47a60184f91417c07e611093881b643bc5e0cdce08666" May 17 00:10:12.815806 containerd[1482]: 2025-05-17 00:10:12.762 [INFO][5514] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8cef55272004fa7f71d47a60184f91417c07e611093881b643bc5e0cdce08666" May 17 00:10:12.815806 containerd[1482]: 2025-05-17 00:10:12.793 [INFO][5521] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8cef55272004fa7f71d47a60184f91417c07e611093881b643bc5e0cdce08666" HandleID="k8s-pod-network.8cef55272004fa7f71d47a60184f91417c07e611093881b643bc5e0cdce08666" Workload="ci--4081--3--3--n--e61ddff57a-k8s-coredns--7c65d6cfc9--r5wvd-eth0" May 17 00:10:12.815806 containerd[1482]: 2025-05-17 00:10:12.793 [INFO][5521] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:10:12.815806 containerd[1482]: 2025-05-17 00:10:12.793 [INFO][5521] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:10:12.815806 containerd[1482]: 2025-05-17 00:10:12.809 [WARNING][5521] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="8cef55272004fa7f71d47a60184f91417c07e611093881b643bc5e0cdce08666" HandleID="k8s-pod-network.8cef55272004fa7f71d47a60184f91417c07e611093881b643bc5e0cdce08666" Workload="ci--4081--3--3--n--e61ddff57a-k8s-coredns--7c65d6cfc9--r5wvd-eth0" May 17 00:10:12.815806 containerd[1482]: 2025-05-17 00:10:12.809 [INFO][5521] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8cef55272004fa7f71d47a60184f91417c07e611093881b643bc5e0cdce08666" HandleID="k8s-pod-network.8cef55272004fa7f71d47a60184f91417c07e611093881b643bc5e0cdce08666" Workload="ci--4081--3--3--n--e61ddff57a-k8s-coredns--7c65d6cfc9--r5wvd-eth0" May 17 00:10:12.815806 containerd[1482]: 2025-05-17 00:10:12.812 [INFO][5521] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:10:12.815806 containerd[1482]: 2025-05-17 00:10:12.814 [INFO][5514] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8cef55272004fa7f71d47a60184f91417c07e611093881b643bc5e0cdce08666" May 17 00:10:12.816664 containerd[1482]: time="2025-05-17T00:10:12.816627726Z" level=info msg="TearDown network for sandbox \"8cef55272004fa7f71d47a60184f91417c07e611093881b643bc5e0cdce08666\" successfully" May 17 00:10:12.816664 containerd[1482]: time="2025-05-17T00:10:12.816665287Z" level=info msg="StopPodSandbox for \"8cef55272004fa7f71d47a60184f91417c07e611093881b643bc5e0cdce08666\" returns successfully" May 17 00:10:12.817333 containerd[1482]: time="2025-05-17T00:10:12.817300901Z" level=info msg="RemovePodSandbox for \"8cef55272004fa7f71d47a60184f91417c07e611093881b643bc5e0cdce08666\"" May 17 00:10:12.817454 containerd[1482]: time="2025-05-17T00:10:12.817342502Z" level=info msg="Forcibly stopping sandbox \"8cef55272004fa7f71d47a60184f91417c07e611093881b643bc5e0cdce08666\"" May 17 00:10:12.920022 containerd[1482]: 2025-05-17 00:10:12.876 [WARNING][5535] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8cef55272004fa7f71d47a60184f91417c07e611093881b643bc5e0cdce08666" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--e61ddff57a-k8s-coredns--7c65d6cfc9--r5wvd-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"fd2d0fe1-ef42-47d9-a89f-b98fdf94081f", ResourceVersion:"924", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 9, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-e61ddff57a", ContainerID:"b564ddcdd62285a6973e20c258e57c8edb7887d2d96372cb8d816605437a36c8", Pod:"coredns-7c65d6cfc9-r5wvd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.22.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic2abcc195e9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:10:12.920022 containerd[1482]: 2025-05-17 00:10:12.876 [INFO][5535] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8cef55272004fa7f71d47a60184f91417c07e611093881b643bc5e0cdce08666" May 17 00:10:12.920022 containerd[1482]: 2025-05-17 00:10:12.876 [INFO][5535] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8cef55272004fa7f71d47a60184f91417c07e611093881b643bc5e0cdce08666" iface="eth0" netns="" May 17 00:10:12.920022 containerd[1482]: 2025-05-17 00:10:12.876 [INFO][5535] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8cef55272004fa7f71d47a60184f91417c07e611093881b643bc5e0cdce08666" May 17 00:10:12.920022 containerd[1482]: 2025-05-17 00:10:12.876 [INFO][5535] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8cef55272004fa7f71d47a60184f91417c07e611093881b643bc5e0cdce08666" May 17 00:10:12.920022 containerd[1482]: 2025-05-17 00:10:12.901 [INFO][5542] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8cef55272004fa7f71d47a60184f91417c07e611093881b643bc5e0cdce08666" HandleID="k8s-pod-network.8cef55272004fa7f71d47a60184f91417c07e611093881b643bc5e0cdce08666" Workload="ci--4081--3--3--n--e61ddff57a-k8s-coredns--7c65d6cfc9--r5wvd-eth0" May 17 00:10:12.920022 containerd[1482]: 2025-05-17 00:10:12.901 [INFO][5542] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:10:12.920022 containerd[1482]: 2025-05-17 00:10:12.901 [INFO][5542] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:10:12.920022 containerd[1482]: 2025-05-17 00:10:12.912 [WARNING][5542] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="8cef55272004fa7f71d47a60184f91417c07e611093881b643bc5e0cdce08666" HandleID="k8s-pod-network.8cef55272004fa7f71d47a60184f91417c07e611093881b643bc5e0cdce08666" Workload="ci--4081--3--3--n--e61ddff57a-k8s-coredns--7c65d6cfc9--r5wvd-eth0" May 17 00:10:12.920022 containerd[1482]: 2025-05-17 00:10:12.912 [INFO][5542] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8cef55272004fa7f71d47a60184f91417c07e611093881b643bc5e0cdce08666" HandleID="k8s-pod-network.8cef55272004fa7f71d47a60184f91417c07e611093881b643bc5e0cdce08666" Workload="ci--4081--3--3--n--e61ddff57a-k8s-coredns--7c65d6cfc9--r5wvd-eth0" May 17 00:10:12.920022 containerd[1482]: 2025-05-17 00:10:12.915 [INFO][5542] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:10:12.920022 containerd[1482]: 2025-05-17 00:10:12.917 [INFO][5535] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8cef55272004fa7f71d47a60184f91417c07e611093881b643bc5e0cdce08666" May 17 00:10:12.920437 containerd[1482]: time="2025-05-17T00:10:12.920075437Z" level=info msg="TearDown network for sandbox \"8cef55272004fa7f71d47a60184f91417c07e611093881b643bc5e0cdce08666\" successfully" May 17 00:10:12.925143 containerd[1482]: time="2025-05-17T00:10:12.924564856Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8cef55272004fa7f71d47a60184f91417c07e611093881b643bc5e0cdce08666\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:10:12.925143 containerd[1482]: time="2025-05-17T00:10:12.924717339Z" level=info msg="RemovePodSandbox \"8cef55272004fa7f71d47a60184f91417c07e611093881b643bc5e0cdce08666\" returns successfully" May 17 00:10:12.925829 containerd[1482]: time="2025-05-17T00:10:12.925745922Z" level=info msg="StopPodSandbox for \"1470589ad3d85bed50809e5c1d62ce47ce50fd9f34811031c6e14f5e6f9ac753\"" May 17 00:10:13.024734 containerd[1482]: 2025-05-17 00:10:12.969 [WARNING][5556] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1470589ad3d85bed50809e5c1d62ce47ce50fd9f34811031c6e14f5e6f9ac753" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--e61ddff57a-k8s-calico--apiserver--75d5dfcff5--dhnhf-eth0", GenerateName:"calico-apiserver-75d5dfcff5-", Namespace:"calico-apiserver", SelfLink:"", UID:"96303885-efee-4645-a557-34a808ca80dd", ResourceVersion:"986", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 9, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"75d5dfcff5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-e61ddff57a", ContainerID:"5040c502408a5c3c469fff1f9e22c61b1c36fe2d19574300e62dc419109b5e70", Pod:"calico-apiserver-75d5dfcff5-dhnhf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.22.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8a9886a35df", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:10:13.024734 containerd[1482]: 2025-05-17 00:10:12.969 [INFO][5556] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1470589ad3d85bed50809e5c1d62ce47ce50fd9f34811031c6e14f5e6f9ac753" May 17 00:10:13.024734 containerd[1482]: 2025-05-17 00:10:12.969 [INFO][5556] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1470589ad3d85bed50809e5c1d62ce47ce50fd9f34811031c6e14f5e6f9ac753" iface="eth0" netns="" May 17 00:10:13.024734 containerd[1482]: 2025-05-17 00:10:12.969 [INFO][5556] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1470589ad3d85bed50809e5c1d62ce47ce50fd9f34811031c6e14f5e6f9ac753" May 17 00:10:13.024734 containerd[1482]: 2025-05-17 00:10:12.969 [INFO][5556] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1470589ad3d85bed50809e5c1d62ce47ce50fd9f34811031c6e14f5e6f9ac753" May 17 00:10:13.024734 containerd[1482]: 2025-05-17 00:10:13.005 [INFO][5564] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1470589ad3d85bed50809e5c1d62ce47ce50fd9f34811031c6e14f5e6f9ac753" HandleID="k8s-pod-network.1470589ad3d85bed50809e5c1d62ce47ce50fd9f34811031c6e14f5e6f9ac753" Workload="ci--4081--3--3--n--e61ddff57a-k8s-calico--apiserver--75d5dfcff5--dhnhf-eth0" May 17 00:10:13.024734 containerd[1482]: 2025-05-17 00:10:13.006 [INFO][5564] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:10:13.024734 containerd[1482]: 2025-05-17 00:10:13.006 [INFO][5564] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:10:13.024734 containerd[1482]: 2025-05-17 00:10:13.018 [WARNING][5564] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1470589ad3d85bed50809e5c1d62ce47ce50fd9f34811031c6e14f5e6f9ac753" HandleID="k8s-pod-network.1470589ad3d85bed50809e5c1d62ce47ce50fd9f34811031c6e14f5e6f9ac753" Workload="ci--4081--3--3--n--e61ddff57a-k8s-calico--apiserver--75d5dfcff5--dhnhf-eth0" May 17 00:10:13.024734 containerd[1482]: 2025-05-17 00:10:13.018 [INFO][5564] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1470589ad3d85bed50809e5c1d62ce47ce50fd9f34811031c6e14f5e6f9ac753" HandleID="k8s-pod-network.1470589ad3d85bed50809e5c1d62ce47ce50fd9f34811031c6e14f5e6f9ac753" Workload="ci--4081--3--3--n--e61ddff57a-k8s-calico--apiserver--75d5dfcff5--dhnhf-eth0" May 17 00:10:13.024734 containerd[1482]: 2025-05-17 00:10:13.020 [INFO][5564] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:10:13.024734 containerd[1482]: 2025-05-17 00:10:13.022 [INFO][5556] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1470589ad3d85bed50809e5c1d62ce47ce50fd9f34811031c6e14f5e6f9ac753" May 17 00:10:13.025527 containerd[1482]: time="2025-05-17T00:10:13.024713378Z" level=info msg="TearDown network for sandbox \"1470589ad3d85bed50809e5c1d62ce47ce50fd9f34811031c6e14f5e6f9ac753\" successfully" May 17 00:10:13.025527 containerd[1482]: time="2025-05-17T00:10:13.024814820Z" level=info msg="StopPodSandbox for \"1470589ad3d85bed50809e5c1d62ce47ce50fd9f34811031c6e14f5e6f9ac753\" returns successfully" May 17 00:10:13.028301 containerd[1482]: time="2025-05-17T00:10:13.028261977Z" level=info msg="RemovePodSandbox for \"1470589ad3d85bed50809e5c1d62ce47ce50fd9f34811031c6e14f5e6f9ac753\"" May 17 00:10:13.028425 containerd[1482]: time="2025-05-17T00:10:13.028303418Z" level=info msg="Forcibly stopping sandbox \"1470589ad3d85bed50809e5c1d62ce47ce50fd9f34811031c6e14f5e6f9ac753\"" May 17 00:10:13.135916 containerd[1482]: 2025-05-17 00:10:13.084 [WARNING][5578] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1470589ad3d85bed50809e5c1d62ce47ce50fd9f34811031c6e14f5e6f9ac753" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--e61ddff57a-k8s-calico--apiserver--75d5dfcff5--dhnhf-eth0", GenerateName:"calico-apiserver-75d5dfcff5-", Namespace:"calico-apiserver", SelfLink:"", UID:"96303885-efee-4645-a557-34a808ca80dd", ResourceVersion:"986", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 9, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"75d5dfcff5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-e61ddff57a", ContainerID:"5040c502408a5c3c469fff1f9e22c61b1c36fe2d19574300e62dc419109b5e70", Pod:"calico-apiserver-75d5dfcff5-dhnhf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.22.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8a9886a35df", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:10:13.135916 containerd[1482]: 2025-05-17 00:10:13.085 [INFO][5578] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1470589ad3d85bed50809e5c1d62ce47ce50fd9f34811031c6e14f5e6f9ac753" May 17 00:10:13.135916 containerd[1482]: 2025-05-17 00:10:13.085 [INFO][5578] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1470589ad3d85bed50809e5c1d62ce47ce50fd9f34811031c6e14f5e6f9ac753" iface="eth0" netns="" May 17 00:10:13.135916 containerd[1482]: 2025-05-17 00:10:13.085 [INFO][5578] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1470589ad3d85bed50809e5c1d62ce47ce50fd9f34811031c6e14f5e6f9ac753" May 17 00:10:13.135916 containerd[1482]: 2025-05-17 00:10:13.085 [INFO][5578] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1470589ad3d85bed50809e5c1d62ce47ce50fd9f34811031c6e14f5e6f9ac753" May 17 00:10:13.135916 containerd[1482]: 2025-05-17 00:10:13.115 [INFO][5585] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1470589ad3d85bed50809e5c1d62ce47ce50fd9f34811031c6e14f5e6f9ac753" HandleID="k8s-pod-network.1470589ad3d85bed50809e5c1d62ce47ce50fd9f34811031c6e14f5e6f9ac753" Workload="ci--4081--3--3--n--e61ddff57a-k8s-calico--apiserver--75d5dfcff5--dhnhf-eth0" May 17 00:10:13.135916 containerd[1482]: 2025-05-17 00:10:13.115 [INFO][5585] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:10:13.135916 containerd[1482]: 2025-05-17 00:10:13.115 [INFO][5585] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:10:13.135916 containerd[1482]: 2025-05-17 00:10:13.129 [WARNING][5585] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1470589ad3d85bed50809e5c1d62ce47ce50fd9f34811031c6e14f5e6f9ac753" HandleID="k8s-pod-network.1470589ad3d85bed50809e5c1d62ce47ce50fd9f34811031c6e14f5e6f9ac753" Workload="ci--4081--3--3--n--e61ddff57a-k8s-calico--apiserver--75d5dfcff5--dhnhf-eth0" May 17 00:10:13.135916 containerd[1482]: 2025-05-17 00:10:13.129 [INFO][5585] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1470589ad3d85bed50809e5c1d62ce47ce50fd9f34811031c6e14f5e6f9ac753" HandleID="k8s-pod-network.1470589ad3d85bed50809e5c1d62ce47ce50fd9f34811031c6e14f5e6f9ac753" Workload="ci--4081--3--3--n--e61ddff57a-k8s-calico--apiserver--75d5dfcff5--dhnhf-eth0" May 17 00:10:13.135916 containerd[1482]: 2025-05-17 00:10:13.132 [INFO][5585] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:10:13.135916 containerd[1482]: 2025-05-17 00:10:13.133 [INFO][5578] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1470589ad3d85bed50809e5c1d62ce47ce50fd9f34811031c6e14f5e6f9ac753" May 17 00:10:13.135916 containerd[1482]: time="2025-05-17T00:10:13.135915917Z" level=info msg="TearDown network for sandbox \"1470589ad3d85bed50809e5c1d62ce47ce50fd9f34811031c6e14f5e6f9ac753\" successfully" May 17 00:10:13.141862 containerd[1482]: time="2025-05-17T00:10:13.141754806Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1470589ad3d85bed50809e5c1d62ce47ce50fd9f34811031c6e14f5e6f9ac753\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:10:13.142494 containerd[1482]: time="2025-05-17T00:10:13.141885489Z" level=info msg="RemovePodSandbox \"1470589ad3d85bed50809e5c1d62ce47ce50fd9f34811031c6e14f5e6f9ac753\" returns successfully" May 17 00:10:13.935222 systemd[1]: run-containerd-runc-k8s.io-1a167093409b134776f3b18cce3f7eab0acbe6d5aa45db92c1b7d79ff8c91d29-runc.AbkHQr.mount: Deactivated successfully. 
May 17 00:10:18.100128 containerd[1482]: time="2025-05-17T00:10:18.100079140Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 17 00:10:18.118673 kubelet[2692]: I0517 00:10:18.118588 2692 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-hhzhx" podStartSLOduration=34.206595386 podStartE2EDuration="44.118566002s" podCreationTimestamp="2025-05-17 00:09:34 +0000 UTC" firstStartedPulling="2025-05-17 00:09:59.717599381 +0000 UTC m=+48.763798744" lastFinishedPulling="2025-05-17 00:10:09.629569997 +0000 UTC m=+58.675769360" observedRunningTime="2025-05-17 00:10:10.535990573 +0000 UTC m=+59.582189936" watchObservedRunningTime="2025-05-17 00:10:18.118566002 +0000 UTC m=+67.164765365" May 17 00:10:18.335212 containerd[1482]: time="2025-05-17T00:10:18.334417889Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:10:18.336263 containerd[1482]: time="2025-05-17T00:10:18.336140728Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:10:18.336263 containerd[1482]: time="2025-05-17T00:10:18.336230650Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 17 00:10:18.337155 kubelet[2692]: E0517 00:10:18.336519 2692 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:10:18.337155 kubelet[2692]: E0517 00:10:18.336571 2692 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:10:18.337155 kubelet[2692]: E0517 00:10:18.336690 2692 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ttgj2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-8f77d7b6c-522bq_calico-system(c2a53ade-ef79-46ee-b2b9-636e3cc942be): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:10:18.337917 kubelet[2692]: E0517 00:10:18.337857 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-8f77d7b6c-522bq" podUID="c2a53ade-ef79-46ee-b2b9-636e3cc942be" May 17 00:10:21.101631 kubelet[2692]: E0517 00:10:21.101549 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-66d874b469-vd68m" podUID="0d2cf33f-bbcd-48f5-a11a-d546875d4c3f" May 17 00:10:28.545676 kubelet[2692]: I0517 00:10:28.545468 2692 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 00:10:30.805581 kubelet[2692]: I0517 00:10:30.804922 2692 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 00:10:32.101788 containerd[1482]: time="2025-05-17T00:10:32.101702871Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 17 00:10:32.329282 containerd[1482]: time="2025-05-17T00:10:32.329209683Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:10:32.330720 containerd[1482]: time="2025-05-17T00:10:32.330638558Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:10:32.331929 containerd[1482]: time="2025-05-17T00:10:32.330886324Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 17 00:10:32.332076 kubelet[2692]: E0517 00:10:32.331152 2692 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:10:32.332076 kubelet[2692]: E0517 00:10:32.331215 2692 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:10:32.332076 kubelet[2692]: E0517 00:10:32.331398 2692 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:d2c11851cda5488fbb5d694e1e602685,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xdj9s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-66d874b469-vd68m_calico-system(0d2cf33f-bbcd-48f5-a11a-d546875d4c3f): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:10:32.335636 containerd[1482]: time="2025-05-17T00:10:32.335358872Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 17 00:10:32.557930 containerd[1482]: time="2025-05-17T00:10:32.557869202Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:10:32.559236 containerd[1482]: time="2025-05-17T00:10:32.559177594Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:10:32.559421 containerd[1482]: time="2025-05-17T00:10:32.559212275Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 17 00:10:32.559639 kubelet[2692]: E0517 00:10:32.559583 2692 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected 
status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:10:32.559639 kubelet[2692]: E0517 00:10:32.559661 2692 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:10:32.559942 kubelet[2692]: E0517 00:10:32.559845 2692 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xdj9s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-66d874b469-vd68m_calico-system(0d2cf33f-bbcd-48f5-a11a-d546875d4c3f): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:10:32.561565 kubelet[2692]: E0517 00:10:32.561465 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to 
fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-66d874b469-vd68m" podUID="0d2cf33f-bbcd-48f5-a11a-d546875d4c3f" May 17 00:10:33.102004 kubelet[2692]: E0517 00:10:33.101950 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-522bq" podUID="c2a53ade-ef79-46ee-b2b9-636e3cc942be" May 17 00:10:47.100504 kubelet[2692]: E0517 00:10:47.100421 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-66d874b469-vd68m" podUID="0d2cf33f-bbcd-48f5-a11a-d546875d4c3f" May 17 00:10:48.101584 containerd[1482]: time="2025-05-17T00:10:48.101522807Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 17 00:10:48.338862 containerd[1482]: time="2025-05-17T00:10:48.338600183Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:10:48.340523 containerd[1482]: time="2025-05-17T00:10:48.340455950Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:10:48.340905 containerd[1482]: time="2025-05-17T00:10:48.340484030Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 17 00:10:48.341283 kubelet[2692]: E0517 00:10:48.341047 2692 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:10:48.341283 kubelet[2692]: E0517 00:10:48.341107 2692 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference 
\"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:10:48.341283 kubelet[2692]: E0517 00:10:48.341235 2692 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ttgj2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-8f77d7b6c-522bq_calico-system(c2a53ade-ef79-46ee-b2b9-636e3cc942be): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:10:48.343316 kubelet[2692]: E0517 
00:10:48.342920 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-8f77d7b6c-522bq" podUID="c2a53ade-ef79-46ee-b2b9-636e3cc942be" May 17 00:11:01.102202 kubelet[2692]: E0517 00:11:01.100631 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-522bq" podUID="c2a53ade-ef79-46ee-b2b9-636e3cc942be" May 17 00:11:01.104231 kubelet[2692]: E0517 00:11:01.102900 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-66d874b469-vd68m" podUID="0d2cf33f-bbcd-48f5-a11a-d546875d4c3f" May 17 00:11:14.099689 kubelet[2692]: E0517 00:11:14.099386 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-522bq" podUID="c2a53ade-ef79-46ee-b2b9-636e3cc942be" May 17 00:11:15.099730 containerd[1482]: time="2025-05-17T00:11:15.099634330Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 17 00:11:15.334089 containerd[1482]: time="2025-05-17T00:11:15.333938657Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:11:15.335557 containerd[1482]: time="2025-05-17T00:11:15.335490058Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:11:15.335730 containerd[1482]: time="2025-05-17T00:11:15.335643295Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 17 00:11:15.335976 kubelet[2692]: E0517 00:11:15.335917 2692 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 
17 00:11:15.336749 kubelet[2692]: E0517 00:11:15.335988 2692 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:11:15.336749 kubelet[2692]: E0517 00:11:15.336128 2692 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:d2c11851cda5488fbb5d694e1e602685,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xdj9s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-66d874b469-vd68m_calico-system(0d2cf33f-bbcd-48f5-a11a-d546875d4c3f): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:11:15.338500 containerd[1482]: time="2025-05-17T00:11:15.338287950Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 17 00:11:15.582305 containerd[1482]: time="2025-05-17T00:11:15.582044244Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:11:15.584663 containerd[1482]: time="2025-05-17T00:11:15.584457225Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:11:15.584663 containerd[1482]: time="2025-05-17T00:11:15.584618821Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 17 00:11:15.585817 kubelet[2692]: E0517 00:11:15.585515 2692 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:11:15.585817 kubelet[2692]: E0517 00:11:15.585622 2692 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:11:15.586426 kubelet[2692]: E0517 00:11:15.585840 2692 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xdj9s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-66d874b469-vd68m_calico-system(0d2cf33f-bbcd-48f5-a11a-d546875d4c3f): ErrImagePull: failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:11:15.587885 kubelet[2692]: E0517 00:11:15.587602 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-66d874b469-vd68m" podUID="0d2cf33f-bbcd-48f5-a11a-d546875d4c3f" May 17 00:11:25.098848 kubelet[2692]: E0517 00:11:25.097886 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-522bq" podUID="c2a53ade-ef79-46ee-b2b9-636e3cc942be" May 17 00:11:27.103036 kubelet[2692]: E0517 00:11:27.102951 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-66d874b469-vd68m" podUID="0d2cf33f-bbcd-48f5-a11a-d546875d4c3f" May 17 00:11:40.100624 containerd[1482]: time="2025-05-17T00:11:40.100134984Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 17 00:11:40.325785 containerd[1482]: time="2025-05-17T00:11:40.325588643Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:11:40.328507 containerd[1482]: time="2025-05-17T00:11:40.328353656Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:11:40.328996 containerd[1482]: time="2025-05-17T00:11:40.328615733Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 17 00:11:40.329183 kubelet[2692]: 
E0517 00:11:40.328905 2692 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:11:40.329183 kubelet[2692]: E0517 00:11:40.328988 2692 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:11:40.330192 kubelet[2692]: E0517 00:11:40.329170 2692 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ttgj2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-8f77d7b6c-522bq_calico-system(c2a53ade-ef79-46ee-b2b9-636e3cc942be): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:11:40.330578 kubelet[2692]: E0517 00:11:40.330511 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-8f77d7b6c-522bq" podUID="c2a53ade-ef79-46ee-b2b9-636e3cc942be" May 17 00:11:41.100708 kubelet[2692]: E0517 00:11:41.100630 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-66d874b469-vd68m" podUID="0d2cf33f-bbcd-48f5-a11a-d546875d4c3f" May 17 00:11:53.099215 kubelet[2692]: E0517 00:11:53.099069 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-522bq" podUID="c2a53ade-ef79-46ee-b2b9-636e3cc942be" May 17 00:11:54.099490 kubelet[2692]: E0517 00:11:54.099174 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-66d874b469-vd68m" podUID="0d2cf33f-bbcd-48f5-a11a-d546875d4c3f" May 17 00:12:06.101317 kubelet[2692]: E0517 00:12:06.101220 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-66d874b469-vd68m" podUID="0d2cf33f-bbcd-48f5-a11a-d546875d4c3f" May 17 00:12:08.099697 kubelet[2692]: E0517 00:12:08.099565 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-522bq" podUID="c2a53ade-ef79-46ee-b2b9-636e3cc942be" May 17 00:12:21.100667 kubelet[2692]: E0517 00:12:21.100131 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-66d874b469-vd68m" podUID="0d2cf33f-bbcd-48f5-a11a-d546875d4c3f" May 17 00:12:22.097965 kubelet[2692]: E0517 00:12:22.097878 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-522bq" podUID="c2a53ade-ef79-46ee-b2b9-636e3cc942be" May 17 00:12:33.099278 kubelet[2692]: E0517 00:12:33.098818 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-522bq" podUID="c2a53ade-ef79-46ee-b2b9-636e3cc942be" May 17 00:12:33.628771 systemd[1]: run-containerd-runc-k8s.io-1a167093409b134776f3b18cce3f7eab0acbe6d5aa45db92c1b7d79ff8c91d29-runc.8Dh96t.mount: Deactivated successfully. 
May 17 00:12:35.101385 kubelet[2692]: E0517 00:12:35.101293 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-66d874b469-vd68m" podUID="0d2cf33f-bbcd-48f5-a11a-d546875d4c3f" May 17 00:12:45.099864 kubelet[2692]: E0517 00:12:45.099470 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-522bq" podUID="c2a53ade-ef79-46ee-b2b9-636e3cc942be" May 17 00:12:48.099029 containerd[1482]: time="2025-05-17T00:12:48.098876863Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 17 00:12:48.325189 containerd[1482]: time="2025-05-17T00:12:48.324740936Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:12:48.326776 containerd[1482]: time="2025-05-17T00:12:48.326698437Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:12:48.326973 containerd[1482]: time="2025-05-17T00:12:48.326879199Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 17 00:12:48.327224 kubelet[2692]: E0517 00:12:48.327130 2692 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:12:48.327701 kubelet[2692]: E0517 00:12:48.327219 2692 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:12:48.327701 kubelet[2692]: E0517 00:12:48.327368 2692 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:d2c11851cda5488fbb5d694e1e602685,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xdj9s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-66d874b469-vd68m_calico-system(0d2cf33f-bbcd-48f5-a11a-d546875d4c3f): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:12:48.330124 containerd[1482]: time="2025-05-17T00:12:48.329848911Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 17 00:12:48.566296 containerd[1482]: time="2025-05-17T00:12:48.566179215Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:12:48.567968 containerd[1482]: time="2025-05-17T00:12:48.567843673Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:12:48.567968 containerd[1482]: time="2025-05-17T00:12:48.567912874Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 17 00:12:48.568271 kubelet[2692]: E0517 00:12:48.568203 2692 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected 
status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:12:48.568391 kubelet[2692]: E0517 00:12:48.568291 2692 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:12:48.568543 kubelet[2692]: E0517 00:12:48.568464 2692 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xdj9s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-66d874b469-vd68m_calico-system(0d2cf33f-bbcd-48f5-a11a-d546875d4c3f): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:12:48.570980 kubelet[2692]: E0517 00:12:48.570907 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to 
fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-66d874b469-vd68m" podUID="0d2cf33f-bbcd-48f5-a11a-d546875d4c3f" May 17 00:12:59.099594 kubelet[2692]: E0517 00:12:59.099436 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-522bq" podUID="c2a53ade-ef79-46ee-b2b9-636e3cc942be" May 17 00:13:02.099571 kubelet[2692]: E0517 00:13:02.099506 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-66d874b469-vd68m" podUID="0d2cf33f-bbcd-48f5-a11a-d546875d4c3f" May 17 00:13:14.099318 containerd[1482]: time="2025-05-17T00:13:14.099156250Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 17 00:13:14.328054 containerd[1482]: time="2025-05-17T00:13:14.327960688Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:13:14.329959 containerd[1482]: time="2025-05-17T00:13:14.329755953Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:13:14.329959 containerd[1482]: time="2025-05-17T00:13:14.329890155Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 17 00:13:14.330508 kubelet[2692]: E0517 00:13:14.330167 2692 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:13:14.330508 kubelet[2692]: E0517 00:13:14.330251 2692 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference 
\"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:13:14.331340 kubelet[2692]: E0517 00:13:14.330476 2692 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ttgj2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-8f77d7b6c-522bq_calico-system(c2a53ade-ef79-46ee-b2b9-636e3cc942be): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:13:14.332176 kubelet[2692]: E0517 
00:13:14.332116 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-8f77d7b6c-522bq" podUID="c2a53ade-ef79-46ee-b2b9-636e3cc942be" May 17 00:13:16.100067 kubelet[2692]: E0517 00:13:16.100015 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-66d874b469-vd68m" podUID="0d2cf33f-bbcd-48f5-a11a-d546875d4c3f" May 17 00:13:26.099510 kubelet[2692]: E0517 00:13:26.099130 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-522bq" podUID="c2a53ade-ef79-46ee-b2b9-636e3cc942be" May 17 00:13:30.099445 kubelet[2692]: E0517 00:13:30.099300 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-66d874b469-vd68m" podUID="0d2cf33f-bbcd-48f5-a11a-d546875d4c3f" May 17 00:13:41.100083 kubelet[2692]: E0517 00:13:41.099726 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-522bq" podUID="c2a53ade-ef79-46ee-b2b9-636e3cc942be" May 17 00:13:42.100705 kubelet[2692]: E0517 00:13:42.100571 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-66d874b469-vd68m" podUID="0d2cf33f-bbcd-48f5-a11a-d546875d4c3f" May 17 00:13:52.098854 kubelet[2692]: E0517 00:13:52.098733 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-522bq" podUID="c2a53ade-ef79-46ee-b2b9-636e3cc942be" May 17 00:13:56.101677 kubelet[2692]: E0517 00:13:56.101504 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: 
\"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-66d874b469-vd68m" podUID="0d2cf33f-bbcd-48f5-a11a-d546875d4c3f" May 17 00:14:01.581186 systemd[1]: Started sshd@9-188.245.126.139:22-139.178.68.195:54950.service - OpenSSH per-connection server daemon (139.178.68.195:54950). May 17 00:14:02.579685 sshd[6095]: Accepted publickey for core from 139.178.68.195 port 54950 ssh2: RSA SHA256:3DH0lUPdHwyQJn3I0ENA7R+xFRfXGfTtJFJ5l1PYReI May 17 00:14:02.582877 sshd[6095]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:14:02.588649 systemd-logind[1461]: New session 8 of user core. May 17 00:14:02.596318 systemd[1]: Started session-8.scope - Session 8 of User core. May 17 00:14:03.362351 sshd[6095]: pam_unix(sshd:session): session closed for user core May 17 00:14:03.367479 systemd-logind[1461]: Session 8 logged out. Waiting for processes to exit. May 17 00:14:03.367751 systemd[1]: sshd@9-188.245.126.139:22-139.178.68.195:54950.service: Deactivated successfully. May 17 00:14:03.371691 systemd[1]: session-8.scope: Deactivated successfully. May 17 00:14:03.373013 systemd-logind[1461]: Removed session 8. May 17 00:14:05.100160 kubelet[2692]: E0517 00:14:05.099830 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-522bq" podUID="c2a53ade-ef79-46ee-b2b9-636e3cc942be" May 17 00:14:08.539237 systemd[1]: Started sshd@10-188.245.126.139:22-139.178.68.195:47678.service - OpenSSH per-connection server daemon (139.178.68.195:47678). May 17 00:14:09.540833 sshd[6134]: Accepted publickey for core from 139.178.68.195 port 47678 ssh2: RSA SHA256:3DH0lUPdHwyQJn3I0ENA7R+xFRfXGfTtJFJ5l1PYReI May 17 00:14:09.541869 sshd[6134]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:14:09.550848 systemd-logind[1461]: New session 9 of user core. May 17 00:14:09.556351 systemd[1]: Started session-9.scope - Session 9 of User core. May 17 00:14:10.323558 sshd[6134]: pam_unix(sshd:session): session closed for user core May 17 00:14:10.330738 systemd[1]: sshd@10-188.245.126.139:22-139.178.68.195:47678.service: Deactivated successfully. May 17 00:14:10.336092 systemd[1]: session-9.scope: Deactivated successfully. May 17 00:14:10.337483 systemd-logind[1461]: Session 9 logged out. Waiting for processes to exit. May 17 00:14:10.342559 systemd-logind[1461]: Removed session 9. May 17 00:14:11.100733 kubelet[2692]: E0517 00:14:11.100443 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-66d874b469-vd68m" podUID="0d2cf33f-bbcd-48f5-a11a-d546875d4c3f" May 17 00:14:15.510588 systemd[1]: Started sshd@11-188.245.126.139:22-139.178.68.195:38008.service - OpenSSH per-connection server daemon (139.178.68.195:38008). 
May 17 00:14:16.502007 sshd[6170]: Accepted publickey for core from 139.178.68.195 port 38008 ssh2: RSA SHA256:3DH0lUPdHwyQJn3I0ENA7R+xFRfXGfTtJFJ5l1PYReI May 17 00:14:16.504585 sshd[6170]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:14:16.510633 systemd-logind[1461]: New session 10 of user core. May 17 00:14:16.516207 systemd[1]: Started session-10.scope - Session 10 of User core. May 17 00:14:17.103739 kubelet[2692]: E0517 00:14:17.103699 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-522bq" podUID="c2a53ade-ef79-46ee-b2b9-636e3cc942be" May 17 00:14:17.278934 sshd[6170]: pam_unix(sshd:session): session closed for user core May 17 00:14:17.283521 systemd[1]: sshd@11-188.245.126.139:22-139.178.68.195:38008.service: Deactivated successfully. May 17 00:14:17.288239 systemd[1]: session-10.scope: Deactivated successfully. May 17 00:14:17.289400 systemd-logind[1461]: Session 10 logged out. Waiting for processes to exit. May 17 00:14:17.290832 systemd-logind[1461]: Removed session 10. May 17 00:14:17.458450 systemd[1]: Started sshd@12-188.245.126.139:22-139.178.68.195:38010.service - OpenSSH per-connection server daemon (139.178.68.195:38010). May 17 00:14:18.449982 sshd[6185]: Accepted publickey for core from 139.178.68.195 port 38010 ssh2: RSA SHA256:3DH0lUPdHwyQJn3I0ENA7R+xFRfXGfTtJFJ5l1PYReI May 17 00:14:18.452587 sshd[6185]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:14:18.459420 systemd-logind[1461]: New session 11 of user core. May 17 00:14:18.464003 systemd[1]: Started session-11.scope - Session 11 of User core. May 17 00:14:19.319315 sshd[6185]: pam_unix(sshd:session): session closed for user core May 17 00:14:19.325006 systemd[1]: sshd@12-188.245.126.139:22-139.178.68.195:38010.service: Deactivated successfully. May 17 00:14:19.328246 systemd[1]: session-11.scope: Deactivated successfully. May 17 00:14:19.329592 systemd-logind[1461]: Session 11 logged out. Waiting for processes to exit. May 17 00:14:19.331061 systemd-logind[1461]: Removed session 11. May 17 00:14:19.492279 systemd[1]: Started sshd@13-188.245.126.139:22-139.178.68.195:38022.service - OpenSSH per-connection server daemon (139.178.68.195:38022). May 17 00:14:20.473695 sshd[6200]: Accepted publickey for core from 139.178.68.195 port 38022 ssh2: RSA SHA256:3DH0lUPdHwyQJn3I0ENA7R+xFRfXGfTtJFJ5l1PYReI May 17 00:14:20.474526 sshd[6200]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:14:20.481024 systemd-logind[1461]: New session 12 of user core. May 17 00:14:20.489167 systemd[1]: Started session-12.scope - Session 12 of User core. May 17 00:14:21.231102 sshd[6200]: pam_unix(sshd:session): session closed for user core May 17 00:14:21.237393 systemd[1]: sshd@13-188.245.126.139:22-139.178.68.195:38022.service: Deactivated successfully. May 17 00:14:21.239676 systemd[1]: session-12.scope: Deactivated successfully. May 17 00:14:21.240885 systemd-logind[1461]: Session 12 logged out. Waiting for processes to exit. May 17 00:14:21.242532 systemd-logind[1461]: Removed session 12. 
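The alternation in the kubelet entries above is its image backoff at work: ErrImagePull marks a real pull attempt failing, while the ImagePullBackOff lines in between only report that the next attempt is being delayed. The delay grows as a capped exponential; the 10-second base and 5-minute ceiling in the sketch below are kubelet's assumed defaults, not values recorded in this journal.

    # Capped exponential backoff as assumed above (10s base, 300s cap);
    # kubelet resets the sequence only after a pull finally succeeds.
    def backoff_delays(base: float = 10.0, cap: float = 300.0):
        delay = base
        while True:
            yield min(delay, cap)
            delay *= 2

    gen = backoff_delays()
    print([next(gen) for _ in range(6)])  # [10.0, 20.0, 40.0, 80.0, 160.0, 300.0]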
May 17 00:14:26.102955 kubelet[2692]: E0517 00:14:26.102780 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-66d874b469-vd68m" podUID="0d2cf33f-bbcd-48f5-a11a-d546875d4c3f" May 17 00:14:26.413072 systemd[1]: Started sshd@14-188.245.126.139:22-139.178.68.195:60348.service - OpenSSH per-connection server daemon (139.178.68.195:60348). May 17 00:14:27.410489 sshd[6214]: Accepted publickey for core from 139.178.68.195 port 60348 ssh2: RSA SHA256:3DH0lUPdHwyQJn3I0ENA7R+xFRfXGfTtJFJ5l1PYReI May 17 00:14:27.413501 sshd[6214]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:14:27.422885 systemd-logind[1461]: New session 13 of user core. May 17 00:14:27.426528 systemd[1]: Started session-13.scope - Session 13 of User core. May 17 00:14:28.174244 sshd[6214]: pam_unix(sshd:session): session closed for user core May 17 00:14:28.178949 systemd[1]: sshd@14-188.245.126.139:22-139.178.68.195:60348.service: Deactivated successfully. May 17 00:14:28.181701 systemd[1]: session-13.scope: Deactivated successfully. May 17 00:14:28.183822 systemd-logind[1461]: Session 13 logged out. Waiting for processes to exit. May 17 00:14:28.185256 systemd-logind[1461]: Removed session 13. May 17 00:14:28.357560 systemd[1]: Started sshd@15-188.245.126.139:22-139.178.68.195:60356.service - OpenSSH per-connection server daemon (139.178.68.195:60356). May 17 00:14:29.348358 sshd[6227]: Accepted publickey for core from 139.178.68.195 port 60356 ssh2: RSA SHA256:3DH0lUPdHwyQJn3I0ENA7R+xFRfXGfTtJFJ5l1PYReI May 17 00:14:29.351096 sshd[6227]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:14:29.358576 systemd-logind[1461]: New session 14 of user core. May 17 00:14:29.365279 systemd[1]: Started session-14.scope - Session 14 of User core. May 17 00:14:30.247746 sshd[6227]: pam_unix(sshd:session): session closed for user core May 17 00:14:30.255409 systemd[1]: sshd@15-188.245.126.139:22-139.178.68.195:60356.service: Deactivated successfully. May 17 00:14:30.260389 systemd[1]: session-14.scope: Deactivated successfully. May 17 00:14:30.262512 systemd-logind[1461]: Session 14 logged out. Waiting for processes to exit. May 17 00:14:30.264172 systemd-logind[1461]: Removed session 14. May 17 00:14:30.428699 systemd[1]: Started sshd@16-188.245.126.139:22-139.178.68.195:60370.service - OpenSSH per-connection server daemon (139.178.68.195:60370). May 17 00:14:31.429413 sshd[6238]: Accepted publickey for core from 139.178.68.195 port 60370 ssh2: RSA SHA256:3DH0lUPdHwyQJn3I0ENA7R+xFRfXGfTtJFJ5l1PYReI May 17 00:14:31.431492 sshd[6238]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:14:31.439921 systemd-logind[1461]: New session 15 of user core. May 17 00:14:31.444484 systemd[1]: Started session-15.scope - Session 15 of User core. 
May 17 00:14:32.098536 kubelet[2692]: E0517 00:14:32.098441 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-522bq" podUID="c2a53ade-ef79-46ee-b2b9-636e3cc942be" May 17 00:14:34.372047 sshd[6238]: pam_unix(sshd:session): session closed for user core May 17 00:14:34.377684 systemd[1]: sshd@16-188.245.126.139:22-139.178.68.195:60370.service: Deactivated successfully. May 17 00:14:34.381155 systemd[1]: session-15.scope: Deactivated successfully. May 17 00:14:34.383856 systemd-logind[1461]: Session 15 logged out. Waiting for processes to exit. May 17 00:14:34.385608 systemd-logind[1461]: Removed session 15. May 17 00:14:34.545155 systemd[1]: Started sshd@17-188.245.126.139:22-139.178.68.195:47416.service - OpenSSH per-connection server daemon (139.178.68.195:47416). May 17 00:14:35.546358 sshd[6277]: Accepted publickey for core from 139.178.68.195 port 47416 ssh2: RSA SHA256:3DH0lUPdHwyQJn3I0ENA7R+xFRfXGfTtJFJ5l1PYReI May 17 00:14:35.548803 sshd[6277]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:14:35.556698 systemd-logind[1461]: New session 16 of user core. May 17 00:14:35.563605 systemd[1]: Started session-16.scope - Session 16 of User core. May 17 00:14:36.505519 sshd[6277]: pam_unix(sshd:session): session closed for user core May 17 00:14:36.512522 systemd[1]: sshd@17-188.245.126.139:22-139.178.68.195:47416.service: Deactivated successfully. May 17 00:14:36.514550 systemd[1]: session-16.scope: Deactivated successfully. May 17 00:14:36.518237 systemd-logind[1461]: Session 16 logged out. Waiting for processes to exit. May 17 00:14:36.520166 systemd-logind[1461]: Removed session 16. May 17 00:14:36.680969 systemd[1]: Started sshd@18-188.245.126.139:22-139.178.68.195:47426.service - OpenSSH per-connection server daemon (139.178.68.195:47426). May 17 00:14:37.678408 sshd[6332]: Accepted publickey for core from 139.178.68.195 port 47426 ssh2: RSA SHA256:3DH0lUPdHwyQJn3I0ENA7R+xFRfXGfTtJFJ5l1PYReI May 17 00:14:37.680498 sshd[6332]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:14:37.686624 systemd-logind[1461]: New session 17 of user core. May 17 00:14:37.691079 systemd[1]: Started session-17.scope - Session 17 of User core. May 17 00:14:38.464238 sshd[6332]: pam_unix(sshd:session): session closed for user core May 17 00:14:38.468496 systemd-logind[1461]: Session 17 logged out. Waiting for processes to exit. May 17 00:14:38.468935 systemd[1]: sshd@18-188.245.126.139:22-139.178.68.195:47426.service: Deactivated successfully. May 17 00:14:38.471750 systemd[1]: session-17.scope: Deactivated successfully. May 17 00:14:38.473932 systemd-logind[1461]: Removed session 17. 
May 17 00:14:41.104035 kubelet[2692]: E0517 00:14:41.103565 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-66d874b469-vd68m" podUID="0d2cf33f-bbcd-48f5-a11a-d546875d4c3f" May 17 00:14:43.099080 kubelet[2692]: E0517 00:14:43.099008 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-522bq" podUID="c2a53ade-ef79-46ee-b2b9-636e3cc942be" May 17 00:14:43.639142 systemd[1]: Started sshd@19-188.245.126.139:22-139.178.68.195:47434.service - OpenSSH per-connection server daemon (139.178.68.195:47434). May 17 00:14:44.629202 sshd[6347]: Accepted publickey for core from 139.178.68.195 port 47434 ssh2: RSA SHA256:3DH0lUPdHwyQJn3I0ENA7R+xFRfXGfTtJFJ5l1PYReI May 17 00:14:44.631568 sshd[6347]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:14:44.637019 systemd-logind[1461]: New session 18 of user core. May 17 00:14:44.642025 systemd[1]: Started session-18.scope - Session 18 of User core. May 17 00:14:45.383819 sshd[6347]: pam_unix(sshd:session): session closed for user core May 17 00:14:45.388831 systemd[1]: sshd@19-188.245.126.139:22-139.178.68.195:47434.service: Deactivated successfully. May 17 00:14:45.390743 systemd[1]: session-18.scope: Deactivated successfully. May 17 00:14:45.392123 systemd-logind[1461]: Session 18 logged out. Waiting for processes to exit. May 17 00:14:45.393994 systemd-logind[1461]: Removed session 18. May 17 00:14:50.559083 systemd[1]: Started sshd@20-188.245.126.139:22-139.178.68.195:44176.service - OpenSSH per-connection server daemon (139.178.68.195:44176). May 17 00:14:51.560103 sshd[6382]: Accepted publickey for core from 139.178.68.195 port 44176 ssh2: RSA SHA256:3DH0lUPdHwyQJn3I0ENA7R+xFRfXGfTtJFJ5l1PYReI May 17 00:14:51.562611 sshd[6382]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:14:51.569044 systemd-logind[1461]: New session 19 of user core. May 17 00:14:51.577040 systemd[1]: Started session-19.scope - Session 19 of User core. May 17 00:14:52.325396 sshd[6382]: pam_unix(sshd:session): session closed for user core May 17 00:14:52.331848 systemd[1]: sshd@20-188.245.126.139:22-139.178.68.195:44176.service: Deactivated successfully. May 17 00:14:52.335195 systemd[1]: session-19.scope: Deactivated successfully. May 17 00:14:52.337661 systemd-logind[1461]: Session 19 logged out. Waiting for processes to exit. May 17 00:14:52.339032 systemd-logind[1461]: Removed session 19. 
May 17 00:14:54.099610 kubelet[2692]: E0517 00:14:54.099516 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-66d874b469-vd68m" podUID="0d2cf33f-bbcd-48f5-a11a-d546875d4c3f" May 17 00:14:57.099476 kubelet[2692]: E0517 00:14:57.099037 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-522bq" podUID="c2a53ade-ef79-46ee-b2b9-636e3cc942be" May 17 00:15:07.571801 kubelet[2692]: E0517 00:15:07.560691 2692 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:49106->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{goldmane-8f77d7b6c-522bq.184027f8387c92ac calico-system 1247 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:calico-system,Name:goldmane-8f77d7b6c-522bq,UID:c2a53ade-ef79-46ee-b2b9-636e3cc942be,APIVersion:v1,ResourceVersion:789,FieldPath:spec.containers{goldmane},},Reason:BackOff,Message:Back-off pulling image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\",Source:EventSource{Component:kubelet,Host:ci-4081-3-3-n-e61ddff57a,},FirstTimestamp:2025-05-17 00:10:02 +0000 UTC,LastTimestamp:2025-05-17 00:14:57.098995827 +0000 UTC m=+346.145195190,Count:19,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-3-n-e61ddff57a,}" May 17 00:15:07.799964 systemd[1]: cri-containerd-5ea77333faf87acabffc0fc4f9acff8d8ab7e7689df0ac299455dbd5401212cf.scope: Deactivated successfully. May 17 00:15:07.801553 systemd[1]: cri-containerd-5ea77333faf87acabffc0fc4f9acff8d8ab7e7689df0ac299455dbd5401212cf.scope: Consumed 24.131s CPU time. May 17 00:15:07.825920 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5ea77333faf87acabffc0fc4f9acff8d8ab7e7689df0ac299455dbd5401212cf-rootfs.mount: Deactivated successfully. 
May 17 00:15:07.826242 containerd[1482]: time="2025-05-17T00:15:07.826182213Z" level=info msg="shim disconnected" id=5ea77333faf87acabffc0fc4f9acff8d8ab7e7689df0ac299455dbd5401212cf namespace=k8s.io May 17 00:15:07.827059 containerd[1482]: time="2025-05-17T00:15:07.826274095Z" level=warning msg="cleaning up after shim disconnected" id=5ea77333faf87acabffc0fc4f9acff8d8ab7e7689df0ac299455dbd5401212cf namespace=k8s.io May 17 00:15:07.827059 containerd[1482]: time="2025-05-17T00:15:07.826285935Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 17 00:15:08.099011 kubelet[2692]: E0517 00:15:08.098665 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-66d874b469-vd68m" podUID="0d2cf33f-bbcd-48f5-a11a-d546875d4c3f" May 17 00:15:08.267745 kubelet[2692]: E0517 00:15:08.267177 2692 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:49300->10.0.0.2:2379: read: connection timed out" May 17 00:15:08.274136 systemd[1]: cri-containerd-c195f1da29771002b3c0e423468773757537820b96cd46d4e8097503eb7380fd.scope: Deactivated successfully. May 17 00:15:08.276137 systemd[1]: cri-containerd-c195f1da29771002b3c0e423468773757537820b96cd46d4e8097503eb7380fd.scope: Consumed 2.730s CPU time, 15.4M memory peak, 0B memory swap peak. May 17 00:15:08.299915 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c195f1da29771002b3c0e423468773757537820b96cd46d4e8097503eb7380fd-rootfs.mount: Deactivated successfully. May 17 00:15:08.308150 containerd[1482]: time="2025-05-17T00:15:08.307838844Z" level=info msg="shim disconnected" id=c195f1da29771002b3c0e423468773757537820b96cd46d4e8097503eb7380fd namespace=k8s.io May 17 00:15:08.308150 containerd[1482]: time="2025-05-17T00:15:08.308148330Z" level=warning msg="cleaning up after shim disconnected" id=c195f1da29771002b3c0e423468773757537820b96cd46d4e8097503eb7380fd namespace=k8s.io May 17 00:15:08.308451 containerd[1482]: time="2025-05-17T00:15:08.308162531Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 17 00:15:08.324912 containerd[1482]: time="2025-05-17T00:15:08.324843464Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:15:08Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io May 17 00:15:08.360410 systemd[1]: cri-containerd-30d9e051209ed3fc4d70818d3b1aa26d81f59527d073e80890b6fa9afe398140.scope: Deactivated successfully. May 17 00:15:08.361155 systemd[1]: cri-containerd-30d9e051209ed3fc4d70818d3b1aa26d81f59527d073e80890b6fa9afe398140.scope: Consumed 6.476s CPU time, 17.6M memory peak, 0B memory swap peak. May 17 00:15:08.390401 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-30d9e051209ed3fc4d70818d3b1aa26d81f59527d073e80890b6fa9afe398140-rootfs.mount: Deactivated successfully. 
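Both kubelet failures just above point at one dependency: 10.0.0.2:2379 is the etcd client endpoint, and the event post and the node-lease update each died on a read timeout against it; the kube-scheduler, tigera-operator, and kube-controller-manager containers are then recreated below at Attempt:1. A hypothetical reachability probe follows, using the address and port taken from the failed reads in the log; a bare TCP connect proves reachability only, not etcd health, but it separates network trouble from a slow or overloaded etcd.

    # Probe the etcd client endpoint from the failed reads
    # (10.0.0.3 -> 10.0.0.2:2379 in the journal). Connecting only shows
    # the port is reachable; it cannot distinguish a healthy etcd from
    # one that is timing out under load.
    import socket

    try:
        with socket.create_connection(("10.0.0.2", 2379), timeout=5):
            print("tcp connect to etcd client port ok")
    except OSError as exc:
        print(f"etcd client port unreachable: {exc}")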
May 17 00:15:08.396267 kubelet[2692]: I0517 00:15:08.396226 2692 scope.go:117] "RemoveContainer" containerID="c195f1da29771002b3c0e423468773757537820b96cd46d4e8097503eb7380fd" May 17 00:15:08.399438 kubelet[2692]: I0517 00:15:08.399389 2692 scope.go:117] "RemoveContainer" containerID="5ea77333faf87acabffc0fc4f9acff8d8ab7e7689df0ac299455dbd5401212cf" May 17 00:15:08.400046 containerd[1482]: time="2025-05-17T00:15:08.399612600Z" level=info msg="CreateContainer within sandbox \"c1f09a6b1468910c29c41a2ba2731c5222ac5386fff8f7c93db4335a39487c60\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" May 17 00:15:08.402717 containerd[1482]: time="2025-05-17T00:15:08.402460537Z" level=info msg="CreateContainer within sandbox \"7b073911728d0a300ba7dc9982b85c3bf71dd6def0499491e191182dd757ea99\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" May 17 00:15:08.420865 containerd[1482]: time="2025-05-17T00:15:08.420509338Z" level=info msg="CreateContainer within sandbox \"7b073911728d0a300ba7dc9982b85c3bf71dd6def0499491e191182dd757ea99\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"9b07cfc161183feb2d3451a73fbb509e67ee8354b50a89c8659b519212e693ff\"" May 17 00:15:08.421262 containerd[1482]: time="2025-05-17T00:15:08.421238032Z" level=info msg="StartContainer for \"9b07cfc161183feb2d3451a73fbb509e67ee8354b50a89c8659b519212e693ff\"" May 17 00:15:08.428836 containerd[1482]: time="2025-05-17T00:15:08.428663141Z" level=info msg="CreateContainer within sandbox \"c1f09a6b1468910c29c41a2ba2731c5222ac5386fff8f7c93db4335a39487c60\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"250e8b6de3e6a15545c6929c37ece1b4cc8f2d505c452e71981afc49e3a9a7ab\"" May 17 00:15:08.430472 containerd[1482]: time="2025-05-17T00:15:08.430398496Z" level=info msg="StartContainer for \"250e8b6de3e6a15545c6929c37ece1b4cc8f2d505c452e71981afc49e3a9a7ab\"" May 17 00:15:08.431222 containerd[1482]: time="2025-05-17T00:15:08.431110430Z" level=info msg="shim disconnected" id=30d9e051209ed3fc4d70818d3b1aa26d81f59527d073e80890b6fa9afe398140 namespace=k8s.io May 17 00:15:08.431410 containerd[1482]: time="2025-05-17T00:15:08.431158471Z" level=warning msg="cleaning up after shim disconnected" id=30d9e051209ed3fc4d70818d3b1aa26d81f59527d073e80890b6fa9afe398140 namespace=k8s.io May 17 00:15:08.431410 containerd[1482]: time="2025-05-17T00:15:08.431268713Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 17 00:15:08.464584 systemd[1]: Started cri-containerd-9b07cfc161183feb2d3451a73fbb509e67ee8354b50a89c8659b519212e693ff.scope - libcontainer container 9b07cfc161183feb2d3451a73fbb509e67ee8354b50a89c8659b519212e693ff. May 17 00:15:08.488994 systemd[1]: Started cri-containerd-250e8b6de3e6a15545c6929c37ece1b4cc8f2d505c452e71981afc49e3a9a7ab.scope - libcontainer container 250e8b6de3e6a15545c6929c37ece1b4cc8f2d505c452e71981afc49e3a9a7ab. 
May 17 00:15:08.529678 containerd[1482]: time="2025-05-17T00:15:08.529589360Z" level=info msg="StartContainer for \"9b07cfc161183feb2d3451a73fbb509e67ee8354b50a89c8659b519212e693ff\" returns successfully" May 17 00:15:08.549186 containerd[1482]: time="2025-05-17T00:15:08.548745863Z" level=info msg="StartContainer for \"250e8b6de3e6a15545c6929c37ece1b4cc8f2d505c452e71981afc49e3a9a7ab\" returns successfully" May 17 00:15:09.405402 kubelet[2692]: I0517 00:15:09.405367 2692 scope.go:117] "RemoveContainer" containerID="30d9e051209ed3fc4d70818d3b1aa26d81f59527d073e80890b6fa9afe398140" May 17 00:15:09.408755 containerd[1482]: time="2025-05-17T00:15:09.408711432Z" level=info msg="CreateContainer within sandbox \"28f7da874c647bd4dbbdf9388053f2af900f882482cad813a241ecdc370b8bcc\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" May 17 00:15:09.429073 containerd[1482]: time="2025-05-17T00:15:09.429019438Z" level=info msg="CreateContainer within sandbox \"28f7da874c647bd4dbbdf9388053f2af900f882482cad813a241ecdc370b8bcc\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"530dbb928e98957b21afefc08125d386de630c263ede1bcf8051adbeb9788977\"" May 17 00:15:09.429617 containerd[1482]: time="2025-05-17T00:15:09.429583530Z" level=info msg="StartContainer for \"530dbb928e98957b21afefc08125d386de630c263ede1bcf8051adbeb9788977\"" May 17 00:15:09.489976 systemd[1]: Started cri-containerd-530dbb928e98957b21afefc08125d386de630c263ede1bcf8051adbeb9788977.scope - libcontainer container 530dbb928e98957b21afefc08125d386de630c263ede1bcf8051adbeb9788977. May 17 00:15:09.538085 containerd[1482]: time="2025-05-17T00:15:09.538016701Z" level=info msg="StartContainer for \"530dbb928e98957b21afefc08125d386de630c263ede1bcf8051adbeb9788977\" returns successfully" May 17 00:15:10.098442 kubelet[2692]: E0517 00:15:10.098373 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-522bq" podUID="c2a53ade-ef79-46ee-b2b9-636e3cc942be"
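With a journal this repetitive, a tally makes the stuck pulls obvious at a glance. Below is a throwaway helper, assuming the log was saved to a file named journal.txt (a hypothetical name), that counts Back-off entries per image reference.

    # Count "Back-off pulling image" occurrences per image reference;
    # the regex tolerates the varying backslash-escaping in the journal.
    import re
    from collections import Counter

    pattern = re.compile(r'Back-off pulling image \\+"([^"\\]+)')
    counts = Counter()
    with open("journal.txt", encoding="utf-8") as fh:
        for line in fh:
            counts.update(pattern.findall(line))

    for image, hits in counts.most_common():
        print(f"{hits:4d}  {image}")  # e.g. ghcr.io/flatcar/calico/goldmane:v3.30.0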