May 17 00:06:41.900234 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] May 17 00:06:41.900261 kernel: Linux version 6.6.90-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri May 16 22:39:35 -00 2025 May 17 00:06:41.900272 kernel: KASLR enabled May 17 00:06:41.900278 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II May 17 00:06:41.900284 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x1390c1018 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b43d18 May 17 00:06:41.900289 kernel: random: crng init done May 17 00:06:41.900297 kernel: ACPI: Early table checksum verification disabled May 17 00:06:41.900303 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS ) May 17 00:06:41.900309 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013) May 17 00:06:41.900317 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) May 17 00:06:41.900323 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001) May 17 00:06:41.900329 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001) May 17 00:06:41.900335 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) May 17 00:06:41.900341 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) May 17 00:06:41.900349 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) May 17 00:06:41.900357 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) May 17 00:06:41.900364 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) May 17 00:06:41.900370 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) May 17 00:06:41.900377 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013) May 17 00:06:41.900383 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600 May 17 00:06:41.900390 kernel: NUMA: Failed to initialise from firmware May 17 00:06:41.900396 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff] May 17 00:06:41.900403 kernel: NUMA: NODE_DATA [mem 0x13966e800-0x139673fff] May 17 00:06:41.900409 kernel: Zone ranges: May 17 00:06:41.900415 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] May 17 00:06:41.900423 kernel: DMA32 empty May 17 00:06:41.900429 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff] May 17 00:06:41.902579 kernel: Movable zone start for each node May 17 00:06:41.902599 kernel: Early memory node ranges May 17 00:06:41.902605 kernel: node 0: [mem 0x0000000040000000-0x000000013676ffff] May 17 00:06:41.902612 kernel: node 0: [mem 0x0000000136770000-0x0000000136b3ffff] May 17 00:06:41.902619 kernel: node 0: [mem 0x0000000136b40000-0x0000000139e1ffff] May 17 00:06:41.902625 kernel: node 0: [mem 0x0000000139e20000-0x0000000139eaffff] May 17 00:06:41.902631 kernel: node 0: [mem 0x0000000139eb0000-0x0000000139ebffff] May 17 00:06:41.902638 kernel: node 0: [mem 0x0000000139ec0000-0x0000000139fdffff] May 17 00:06:41.902644 kernel: node 0: [mem 0x0000000139fe0000-0x0000000139ffffff] May 17 00:06:41.902651 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff] May 17 00:06:41.902663 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges May 17 00:06:41.902670 kernel: psci: probing for conduit method from ACPI. 
May 17 00:06:41.902677 kernel: psci: PSCIv1.1 detected in firmware. May 17 00:06:41.902686 kernel: psci: Using standard PSCI v0.2 function IDs May 17 00:06:41.902693 kernel: psci: Trusted OS migration not required May 17 00:06:41.902700 kernel: psci: SMC Calling Convention v1.1 May 17 00:06:41.902708 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) May 17 00:06:41.902715 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976 May 17 00:06:41.902722 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096 May 17 00:06:41.902729 kernel: pcpu-alloc: [0] 0 [0] 1 May 17 00:06:41.902736 kernel: Detected PIPT I-cache on CPU0 May 17 00:06:41.902742 kernel: CPU features: detected: GIC system register CPU interface May 17 00:06:41.902749 kernel: CPU features: detected: Hardware dirty bit management May 17 00:06:41.902756 kernel: CPU features: detected: Spectre-v4 May 17 00:06:41.902763 kernel: CPU features: detected: Spectre-BHB May 17 00:06:41.902770 kernel: CPU features: kernel page table isolation forced ON by KASLR May 17 00:06:41.902778 kernel: CPU features: detected: Kernel page table isolation (KPTI) May 17 00:06:41.902785 kernel: CPU features: detected: ARM erratum 1418040 May 17 00:06:41.902792 kernel: CPU features: detected: SSBS not fully self-synchronizing May 17 00:06:41.902799 kernel: alternatives: applying boot alternatives May 17 00:06:41.902808 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=3554ca41327a0c5ba7e4ac1b3147487d73f35805806dcb20264133a9c301eb5d May 17 00:06:41.902815 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 17 00:06:41.902822 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) May 17 00:06:41.902829 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 17 00:06:41.902836 kernel: Fallback order for Node 0: 0 May 17 00:06:41.902843 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1008000 May 17 00:06:41.902849 kernel: Policy zone: Normal May 17 00:06:41.902858 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 17 00:06:41.902865 kernel: software IO TLB: area num 2. May 17 00:06:41.902872 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB) May 17 00:06:41.902879 kernel: Memory: 3882868K/4096000K available (10240K kernel code, 2186K rwdata, 8104K rodata, 39424K init, 897K bss, 213132K reserved, 0K cma-reserved) May 17 00:06:41.902886 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 May 17 00:06:41.902893 kernel: rcu: Preemptible hierarchical RCU implementation. May 17 00:06:41.902900 kernel: rcu: RCU event tracing is enabled. May 17 00:06:41.902907 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. May 17 00:06:41.902914 kernel: Trampoline variant of Tasks RCU enabled. May 17 00:06:41.902921 kernel: Tracing variant of Tasks RCU enabled. May 17 00:06:41.902928 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
May 17 00:06:41.902936 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 May 17 00:06:41.902943 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 May 17 00:06:41.902950 kernel: GICv3: 256 SPIs implemented May 17 00:06:41.902957 kernel: GICv3: 0 Extended SPIs implemented May 17 00:06:41.902963 kernel: Root IRQ handler: gic_handle_irq May 17 00:06:41.902970 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI May 17 00:06:41.902977 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 May 17 00:06:41.902984 kernel: ITS [mem 0x08080000-0x0809ffff] May 17 00:06:41.902991 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1) May 17 00:06:41.902998 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1) May 17 00:06:41.903005 kernel: GICv3: using LPI property table @0x00000001000e0000 May 17 00:06:41.903012 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000 May 17 00:06:41.903020 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. May 17 00:06:41.903027 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:06:41.903034 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). May 17 00:06:41.903041 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns May 17 00:06:41.903048 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns May 17 00:06:41.903334 kernel: Console: colour dummy device 80x25 May 17 00:06:41.903345 kernel: ACPI: Core revision 20230628 May 17 00:06:41.903354 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) May 17 00:06:41.903361 kernel: pid_max: default: 32768 minimum: 301 May 17 00:06:41.903369 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity May 17 00:06:41.903384 kernel: landlock: Up and running. May 17 00:06:41.903392 kernel: SELinux: Initializing. May 17 00:06:41.903401 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 17 00:06:41.903410 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 17 00:06:41.903418 kernel: ACPI PPTT: PPTT table found, but unable to locate core 1 (1) May 17 00:06:41.903426 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. May 17 00:06:41.903447 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. May 17 00:06:41.903457 kernel: rcu: Hierarchical SRCU implementation. May 17 00:06:41.903464 kernel: rcu: Max phase no-delay instances is 400. May 17 00:06:41.903476 kernel: Platform MSI: ITS@0x8080000 domain created May 17 00:06:41.903486 kernel: PCI/MSI: ITS@0x8080000 domain created May 17 00:06:41.903494 kernel: Remapping and enabling EFI services. May 17 00:06:41.903501 kernel: smp: Bringing up secondary CPUs ... 
May 17 00:06:41.903509 kernel: Detected PIPT I-cache on CPU1 May 17 00:06:41.903516 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 May 17 00:06:41.903524 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000 May 17 00:06:41.903531 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:06:41.903539 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] May 17 00:06:41.903549 kernel: smp: Brought up 1 node, 2 CPUs May 17 00:06:41.903557 kernel: SMP: Total of 2 processors activated. May 17 00:06:41.903564 kernel: CPU features: detected: 32-bit EL0 Support May 17 00:06:41.903578 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence May 17 00:06:41.903587 kernel: CPU features: detected: Common not Private translations May 17 00:06:41.903595 kernel: CPU features: detected: CRC32 instructions May 17 00:06:41.903603 kernel: CPU features: detected: Enhanced Virtualization Traps May 17 00:06:41.903611 kernel: CPU features: detected: RCpc load-acquire (LDAPR) May 17 00:06:41.903619 kernel: CPU features: detected: LSE atomic instructions May 17 00:06:41.903627 kernel: CPU features: detected: Privileged Access Never May 17 00:06:41.903635 kernel: CPU features: detected: RAS Extension Support May 17 00:06:41.903644 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) May 17 00:06:41.903652 kernel: CPU: All CPU(s) started at EL1 May 17 00:06:41.903660 kernel: alternatives: applying system-wide alternatives May 17 00:06:41.903668 kernel: devtmpfs: initialized May 17 00:06:41.903676 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 17 00:06:41.903684 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) May 17 00:06:41.903693 kernel: pinctrl core: initialized pinctrl subsystem May 17 00:06:41.903701 kernel: SMBIOS 3.0.0 present. May 17 00:06:41.903709 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017 May 17 00:06:41.903717 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 17 00:06:41.903725 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations May 17 00:06:41.903733 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations May 17 00:06:41.903741 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations May 17 00:06:41.903749 kernel: audit: initializing netlink subsys (disabled) May 17 00:06:41.903757 kernel: audit: type=2000 audit(0.015:1): state=initialized audit_enabled=0 res=1 May 17 00:06:41.903766 kernel: thermal_sys: Registered thermal governor 'step_wise' May 17 00:06:41.903774 kernel: cpuidle: using governor menu May 17 00:06:41.903782 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
May 17 00:06:41.903790 kernel: ASID allocator initialised with 32768 entries May 17 00:06:41.903798 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 17 00:06:41.903806 kernel: Serial: AMBA PL011 UART driver May 17 00:06:41.903814 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL May 17 00:06:41.903822 kernel: Modules: 0 pages in range for non-PLT usage May 17 00:06:41.903830 kernel: Modules: 509024 pages in range for PLT usage May 17 00:06:41.903839 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages May 17 00:06:41.903847 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page May 17 00:06:41.903855 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages May 17 00:06:41.903863 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page May 17 00:06:41.903871 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages May 17 00:06:41.903879 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page May 17 00:06:41.903886 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages May 17 00:06:41.903894 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page May 17 00:06:41.903902 kernel: ACPI: Added _OSI(Module Device) May 17 00:06:41.903912 kernel: ACPI: Added _OSI(Processor Device) May 17 00:06:41.903919 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 17 00:06:41.903927 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 17 00:06:41.903935 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 17 00:06:41.903943 kernel: ACPI: Interpreter enabled May 17 00:06:41.903950 kernel: ACPI: Using GIC for interrupt routing May 17 00:06:41.903958 kernel: ACPI: MCFG table detected, 1 entries May 17 00:06:41.903966 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA May 17 00:06:41.903974 kernel: printk: console [ttyAMA0] enabled May 17 00:06:41.903982 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) May 17 00:06:41.904175 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 17 00:06:41.904258 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] May 17 00:06:41.904331 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] May 17 00:06:41.904400 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 May 17 00:06:41.906602 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] May 17 00:06:41.906628 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] May 17 00:06:41.906643 kernel: PCI host bridge to bus 0000:00 May 17 00:06:41.906727 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] May 17 00:06:41.906788 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] May 17 00:06:41.906850 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] May 17 00:06:41.906908 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] May 17 00:06:41.906992 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 May 17 00:06:41.907099 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000 May 17 00:06:41.907180 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff] May 17 00:06:41.907249 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref] May 17 00:06:41.907324 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 May 17 00:06:41.907397 kernel: pci 
0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff] May 17 00:06:41.907489 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 May 17 00:06:41.907560 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff] May 17 00:06:41.907638 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 May 17 00:06:41.907706 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff] May 17 00:06:41.907781 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 May 17 00:06:41.907849 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff] May 17 00:06:41.907922 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 May 17 00:06:41.907999 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff] May 17 00:06:41.908131 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 May 17 00:06:41.908209 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff] May 17 00:06:41.908285 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 May 17 00:06:41.908352 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff] May 17 00:06:41.908433 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 May 17 00:06:41.908775 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff] May 17 00:06:41.908861 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 May 17 00:06:41.908930 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff] May 17 00:06:41.909006 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002 May 17 00:06:41.909095 kernel: pci 0000:00:04.0: reg 0x10: [io 0x0000-0x0007] May 17 00:06:41.909195 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000 May 17 00:06:41.909280 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff] May 17 00:06:41.909362 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] May 17 00:06:41.909452 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref] May 17 00:06:41.909536 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330 May 17 00:06:41.909638 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit] May 17 00:06:41.909718 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000 May 17 00:06:41.909790 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff] May 17 00:06:41.909859 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref] May 17 00:06:41.909936 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00 May 17 00:06:41.910011 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref] May 17 00:06:41.910145 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00 May 17 00:06:41.910220 kernel: pci 0000:05:00.0: reg 0x14: [mem 0x10800000-0x10800fff] May 17 00:06:41.910373 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref] May 17 00:06:41.910862 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000 May 17 00:06:41.910950 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff] May 17 00:06:41.913599 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref] May 17 00:06:41.913697 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000 May 17 00:06:41.913771 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff] May 17 00:06:41.913840 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref] May 17 00:06:41.913909 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref] May 17 00:06:41.913983 kernel: pci 0000:00:02.0: bridge 
window [io 0x1000-0x0fff] to [bus 01] add_size 1000 May 17 00:06:41.914074 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000 May 17 00:06:41.914147 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000 May 17 00:06:41.914220 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 May 17 00:06:41.914289 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 May 17 00:06:41.914356 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000 May 17 00:06:41.914427 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 May 17 00:06:41.916415 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000 May 17 00:06:41.916529 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000 May 17 00:06:41.916610 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 May 17 00:06:41.916683 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000 May 17 00:06:41.916751 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 May 17 00:06:41.916822 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000 May 17 00:06:41.916888 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000 May 17 00:06:41.916953 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff] to [bus 05] add_size 100000 add_align 100000 May 17 00:06:41.917023 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 May 17 00:06:41.919836 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000 May 17 00:06:41.919922 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000 May 17 00:06:41.919994 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 May 17 00:06:41.920077 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000 May 17 00:06:41.920148 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000 May 17 00:06:41.920219 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 May 17 00:06:41.920285 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000 May 17 00:06:41.920363 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000 May 17 00:06:41.920451 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 May 17 00:06:41.920523 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000 May 17 00:06:41.920590 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000 May 17 00:06:41.920660 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 
0x10000000-0x101fffff] May 17 00:06:41.920728 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x8000000000-0x80001fffff 64bit pref] May 17 00:06:41.920798 kernel: pci 0000:00:02.1: BAR 14: assigned [mem 0x10200000-0x103fffff] May 17 00:06:41.920871 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref] May 17 00:06:41.920940 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff] May 17 00:06:41.921007 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref] May 17 00:06:41.921133 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff] May 17 00:06:41.921209 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref] May 17 00:06:41.921283 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff] May 17 00:06:41.921349 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref] May 17 00:06:41.921423 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff] May 17 00:06:41.921589 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref] May 17 00:06:41.921667 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff] May 17 00:06:41.921736 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref] May 17 00:06:41.921803 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff] May 17 00:06:41.921870 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref] May 17 00:06:41.921939 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff] May 17 00:06:41.922017 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref] May 17 00:06:41.922106 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref] May 17 00:06:41.922177 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff] May 17 00:06:41.922245 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff] May 17 00:06:41.922312 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff] May 17 00:06:41.922380 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff] May 17 00:06:41.922465 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff] May 17 00:06:41.922535 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff] May 17 00:06:41.922610 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff] May 17 00:06:41.922680 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff] May 17 00:06:41.922749 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff] May 17 00:06:41.922818 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff] May 17 00:06:41.922886 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff] May 17 00:06:41.922952 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff] May 17 00:06:41.923019 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff] May 17 00:06:41.923107 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff] May 17 00:06:41.923182 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff] May 17 00:06:41.923250 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff] May 17 00:06:41.923319 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff] May 17 00:06:41.923386 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff] May 17 00:06:41.923479 kernel: pci 0000:00:03.0: BAR 13: assigned [io 
0x9000-0x9fff] May 17 00:06:41.923557 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007] May 17 00:06:41.923637 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x10000000-0x1007ffff pref] May 17 00:06:41.923710 kernel: pci 0000:01:00.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] May 17 00:06:41.923787 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff] May 17 00:06:41.923854 kernel: pci 0000:00:02.0: PCI bridge to [bus 01] May 17 00:06:41.923921 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff] May 17 00:06:41.923988 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff] May 17 00:06:41.924097 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref] May 17 00:06:41.924188 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit] May 17 00:06:41.924265 kernel: pci 0000:00:02.1: PCI bridge to [bus 02] May 17 00:06:41.924333 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff] May 17 00:06:41.924400 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff] May 17 00:06:41.924517 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref] May 17 00:06:41.924600 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref] May 17 00:06:41.924671 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff] May 17 00:06:41.924744 kernel: pci 0000:00:02.2: PCI bridge to [bus 03] May 17 00:06:41.924812 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff] May 17 00:06:41.924878 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff] May 17 00:06:41.924945 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref] May 17 00:06:41.925021 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref] May 17 00:06:41.925105 kernel: pci 0000:00:02.3: PCI bridge to [bus 04] May 17 00:06:41.925177 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff] May 17 00:06:41.925245 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff] May 17 00:06:41.925316 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref] May 17 00:06:41.925391 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref] May 17 00:06:41.925709 kernel: pci 0000:05:00.0: BAR 1: assigned [mem 0x10800000-0x10800fff] May 17 00:06:41.925790 kernel: pci 0000:00:02.4: PCI bridge to [bus 05] May 17 00:06:41.925859 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff] May 17 00:06:41.925924 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff] May 17 00:06:41.925988 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref] May 17 00:06:41.926080 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref] May 17 00:06:41.926161 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff] May 17 00:06:41.926228 kernel: pci 0000:00:02.5: PCI bridge to [bus 06] May 17 00:06:41.926292 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff] May 17 00:06:41.926358 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff] May 17 00:06:41.926423 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref] May 17 00:06:41.926535 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref] May 17 00:06:41.926606 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref] May 17 00:06:41.926680 kernel: pci 
0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff] May 17 00:06:41.926745 kernel: pci 0000:00:02.6: PCI bridge to [bus 07] May 17 00:06:41.926816 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff] May 17 00:06:41.926880 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff] May 17 00:06:41.926946 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref] May 17 00:06:41.927013 kernel: pci 0000:00:02.7: PCI bridge to [bus 08] May 17 00:06:41.927119 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff] May 17 00:06:41.927190 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff] May 17 00:06:41.927260 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref] May 17 00:06:41.927328 kernel: pci 0000:00:03.0: PCI bridge to [bus 09] May 17 00:06:41.927398 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff] May 17 00:06:41.927552 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff] May 17 00:06:41.927623 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref] May 17 00:06:41.927691 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] May 17 00:06:41.927750 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] May 17 00:06:41.927808 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] May 17 00:06:41.927884 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff] May 17 00:06:41.927946 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff] May 17 00:06:41.928005 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref] May 17 00:06:41.928098 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff] May 17 00:06:41.928163 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff] May 17 00:06:41.928222 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref] May 17 00:06:41.928291 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff] May 17 00:06:41.928359 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff] May 17 00:06:41.928431 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref] May 17 00:06:41.928603 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff] May 17 00:06:41.928672 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff] May 17 00:06:41.928733 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref] May 17 00:06:41.928801 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff] May 17 00:06:41.928869 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff] May 17 00:06:41.928929 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref] May 17 00:06:41.928999 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff] May 17 00:06:41.929075 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff] May 17 00:06:41.929148 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref] May 17 00:06:41.929218 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff] May 17 00:06:41.929279 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff] May 17 00:06:41.929340 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref] May 17 00:06:41.929408 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff] May 17 00:06:41.929610 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff] May 17 00:06:41.929681 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref] May 17 00:06:41.929760 kernel: 
pci_bus 0000:09: resource 0 [io 0x9000-0x9fff] May 17 00:06:41.929822 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff] May 17 00:06:41.929882 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref] May 17 00:06:41.929892 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 May 17 00:06:41.929903 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 May 17 00:06:41.929911 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 May 17 00:06:41.929919 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 May 17 00:06:41.929927 kernel: iommu: Default domain type: Translated May 17 00:06:41.929936 kernel: iommu: DMA domain TLB invalidation policy: strict mode May 17 00:06:41.929944 kernel: efivars: Registered efivars operations May 17 00:06:41.929952 kernel: vgaarb: loaded May 17 00:06:41.929960 kernel: clocksource: Switched to clocksource arch_sys_counter May 17 00:06:41.929967 kernel: VFS: Disk quotas dquot_6.6.0 May 17 00:06:41.929976 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 17 00:06:41.929984 kernel: pnp: PnP ACPI init May 17 00:06:41.930115 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved May 17 00:06:41.930135 kernel: pnp: PnP ACPI: found 1 devices May 17 00:06:41.930143 kernel: NET: Registered PF_INET protocol family May 17 00:06:41.930151 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) May 17 00:06:41.930159 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) May 17 00:06:41.930167 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 17 00:06:41.930175 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) May 17 00:06:41.930184 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) May 17 00:06:41.930192 kernel: TCP: Hash tables configured (established 32768 bind 32768) May 17 00:06:41.930200 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) May 17 00:06:41.930210 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) May 17 00:06:41.930218 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 17 00:06:41.930301 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002) May 17 00:06:41.930313 kernel: PCI: CLS 0 bytes, default 64 May 17 00:06:41.930321 kernel: kvm [1]: HYP mode not available May 17 00:06:41.930329 kernel: Initialise system trusted keyrings May 17 00:06:41.930337 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 May 17 00:06:41.930345 kernel: Key type asymmetric registered May 17 00:06:41.930353 kernel: Asymmetric key parser 'x509' registered May 17 00:06:41.930363 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) May 17 00:06:41.930371 kernel: io scheduler mq-deadline registered May 17 00:06:41.930379 kernel: io scheduler kyber registered May 17 00:06:41.930387 kernel: io scheduler bfq registered May 17 00:06:41.930395 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 May 17 00:06:41.930572 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50 May 17 00:06:41.930648 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50 May 17 00:06:41.930715 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ May 17 00:06:41.930790 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51 May 17 00:06:41.930856 kernel: pcieport 
0000:00:02.1: AER: enabled with IRQ 51 May 17 00:06:41.930921 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ May 17 00:06:41.930989 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52 May 17 00:06:41.931070 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52 May 17 00:06:41.931140 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ May 17 00:06:41.931214 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53 May 17 00:06:41.931282 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53 May 17 00:06:41.931347 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ May 17 00:06:41.931415 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54 May 17 00:06:41.931499 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54 May 17 00:06:41.931568 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ May 17 00:06:41.931640 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55 May 17 00:06:41.931706 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55 May 17 00:06:41.931772 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ May 17 00:06:41.931843 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56 May 17 00:06:41.931911 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56 May 17 00:06:41.931988 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ May 17 00:06:41.932074 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57 May 17 00:06:41.932146 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57 May 17 00:06:41.932213 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ May 17 00:06:41.932224 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 May 17 00:06:41.932290 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58 May 17 00:06:41.932359 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58 May 17 00:06:41.932429 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ May 17 00:06:41.932452 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 May 17 00:06:41.932473 kernel: ACPI: button: Power Button [PWRB] May 17 00:06:41.932482 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 May 17 00:06:41.932562 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002) May 17 00:06:41.932644 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002) May 17 00:06:41.932656 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 17 00:06:41.932664 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 May 17 00:06:41.932736 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001) May 17 00:06:41.932748 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A May 17 00:06:41.932756 kernel: thunder_xcv, ver 1.0 May 17 00:06:41.932764 kernel: thunder_bgx, ver 1.0 May 17 00:06:41.932772 kernel: nicpf, ver 1.0 May 17 00:06:41.932783 kernel: nicvf, ver 
1.0 May 17 00:06:41.932862 kernel: rtc-efi rtc-efi.0: registered as rtc0 May 17 00:06:41.932927 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-17T00:06:41 UTC (1747440401) May 17 00:06:41.932940 kernel: hid: raw HID events driver (C) Jiri Kosina May 17 00:06:41.932948 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available May 17 00:06:41.932956 kernel: watchdog: Delayed init of the lockup detector failed: -19 May 17 00:06:41.932964 kernel: watchdog: Hard watchdog permanently disabled May 17 00:06:41.932972 kernel: NET: Registered PF_INET6 protocol family May 17 00:06:41.932979 kernel: Segment Routing with IPv6 May 17 00:06:41.932987 kernel: In-situ OAM (IOAM) with IPv6 May 17 00:06:41.932995 kernel: NET: Registered PF_PACKET protocol family May 17 00:06:41.933003 kernel: Key type dns_resolver registered May 17 00:06:41.933012 kernel: registered taskstats version 1 May 17 00:06:41.933020 kernel: Loading compiled-in X.509 certificates May 17 00:06:41.933028 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.90-flatcar: 02f7129968574a1ae76b1ee42e7674ea1c42071b' May 17 00:06:41.933036 kernel: Key type .fscrypt registered May 17 00:06:41.933043 kernel: Key type fscrypt-provisioning registered May 17 00:06:41.933094 kernel: ima: No TPM chip found, activating TPM-bypass! May 17 00:06:41.933104 kernel: ima: Allocated hash algorithm: sha1 May 17 00:06:41.933112 kernel: ima: No architecture policies found May 17 00:06:41.933120 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) May 17 00:06:41.933131 kernel: clk: Disabling unused clocks May 17 00:06:41.933139 kernel: Freeing unused kernel memory: 39424K May 17 00:06:41.933147 kernel: Run /init as init process May 17 00:06:41.933155 kernel: with arguments: May 17 00:06:41.933163 kernel: /init May 17 00:06:41.933170 kernel: with environment: May 17 00:06:41.933178 kernel: HOME=/ May 17 00:06:41.933186 kernel: TERM=linux May 17 00:06:41.933193 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 17 00:06:41.933205 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) May 17 00:06:41.933215 systemd[1]: Detected virtualization kvm. May 17 00:06:41.933224 systemd[1]: Detected architecture arm64. May 17 00:06:41.933235 systemd[1]: Running in initrd. May 17 00:06:41.933243 systemd[1]: No hostname configured, using default hostname. May 17 00:06:41.933251 systemd[1]: Hostname set to . May 17 00:06:41.933260 systemd[1]: Initializing machine ID from VM UUID. May 17 00:06:41.933270 systemd[1]: Queued start job for default target initrd.target. May 17 00:06:41.933278 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 17 00:06:41.933287 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 17 00:06:41.933295 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 17 00:06:41.933304 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 17 00:06:41.933312 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... 
May 17 00:06:41.933320 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 17 00:06:41.933331 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 17 00:06:41.933340 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 17 00:06:41.933348 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 17 00:06:41.933357 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 17 00:06:41.933365 systemd[1]: Reached target paths.target - Path Units. May 17 00:06:41.933374 systemd[1]: Reached target slices.target - Slice Units. May 17 00:06:41.933382 systemd[1]: Reached target swap.target - Swaps. May 17 00:06:41.933390 systemd[1]: Reached target timers.target - Timer Units. May 17 00:06:41.933400 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. May 17 00:06:41.933409 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 17 00:06:41.933417 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 17 00:06:41.933426 systemd[1]: Listening on systemd-journald.socket - Journal Socket. May 17 00:06:41.934554 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 17 00:06:41.934586 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 17 00:06:41.934596 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 17 00:06:41.934611 systemd[1]: Reached target sockets.target - Socket Units. May 17 00:06:41.934620 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 17 00:06:41.934631 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 17 00:06:41.934639 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 17 00:06:41.934648 systemd[1]: Starting systemd-fsck-usr.service... May 17 00:06:41.934656 systemd[1]: Starting systemd-journald.service - Journal Service... May 17 00:06:41.934664 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 17 00:06:41.934673 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 17 00:06:41.934681 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 17 00:06:41.934723 systemd-journald[236]: Collecting audit messages is disabled. May 17 00:06:41.934747 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 17 00:06:41.934756 systemd[1]: Finished systemd-fsck-usr.service. May 17 00:06:41.934767 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 17 00:06:41.934775 kernel: Bridge firewalling registered May 17 00:06:41.934784 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 17 00:06:41.934792 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 17 00:06:41.934801 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 17 00:06:41.934809 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 17 00:06:41.934818 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
May 17 00:06:41.934828 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 17 00:06:41.934838 systemd-journald[236]: Journal started May 17 00:06:41.934857 systemd-journald[236]: Runtime Journal (/run/log/journal/4d496235578948428fdd418f1d60d61e) is 8.0M, max 76.6M, 68.6M free. May 17 00:06:41.939937 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 17 00:06:41.894813 systemd-modules-load[237]: Inserted module 'overlay' May 17 00:06:41.912993 systemd-modules-load[237]: Inserted module 'br_netfilter' May 17 00:06:41.942177 systemd[1]: Started systemd-journald.service - Journal Service. May 17 00:06:41.947861 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 17 00:06:41.957115 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 17 00:06:41.959961 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 17 00:06:41.961788 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 17 00:06:41.966702 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 17 00:06:41.969685 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 17 00:06:41.976690 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 17 00:06:41.980541 dracut-cmdline[270]: dracut-dracut-053 May 17 00:06:41.982248 dracut-cmdline[270]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=3554ca41327a0c5ba7e4ac1b3147487d73f35805806dcb20264133a9c301eb5d May 17 00:06:42.013968 systemd-resolved[278]: Positive Trust Anchors: May 17 00:06:42.014619 systemd-resolved[278]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 17 00:06:42.014654 systemd-resolved[278]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 17 00:06:42.025579 systemd-resolved[278]: Defaulting to hostname 'linux'. May 17 00:06:42.027195 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 17 00:06:42.027892 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 17 00:06:42.083494 kernel: SCSI subsystem initialized May 17 00:06:42.088481 kernel: Loading iSCSI transport class v2.0-870. May 17 00:06:42.098480 kernel: iscsi: registered transport (tcp) May 17 00:06:42.112462 kernel: iscsi: registered transport (qla4xxx) May 17 00:06:42.112529 kernel: QLogic iSCSI HBA Driver May 17 00:06:42.161273 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 17 00:06:42.166641 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... 
May 17 00:06:42.186574 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 17 00:06:42.186711 kernel: device-mapper: uevent: version 1.0.3 May 17 00:06:42.186738 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com May 17 00:06:42.239521 kernel: raid6: neonx8 gen() 15676 MB/s May 17 00:06:42.256492 kernel: raid6: neonx4 gen() 15578 MB/s May 17 00:06:42.273519 kernel: raid6: neonx2 gen() 13122 MB/s May 17 00:06:42.290510 kernel: raid6: neonx1 gen() 10423 MB/s May 17 00:06:42.307487 kernel: raid6: int64x8 gen() 6912 MB/s May 17 00:06:42.324505 kernel: raid6: int64x4 gen() 7313 MB/s May 17 00:06:42.341484 kernel: raid6: int64x2 gen() 6105 MB/s May 17 00:06:42.358590 kernel: raid6: int64x1 gen() 5025 MB/s May 17 00:06:42.358701 kernel: raid6: using algorithm neonx8 gen() 15676 MB/s May 17 00:06:42.375501 kernel: raid6: .... xor() 11861 MB/s, rmw enabled May 17 00:06:42.375568 kernel: raid6: using neon recovery algorithm May 17 00:06:42.380660 kernel: xor: measuring software checksum speed May 17 00:06:42.380711 kernel: 8regs : 19745 MB/sec May 17 00:06:42.380737 kernel: 32regs : 19664 MB/sec May 17 00:06:42.381482 kernel: arm64_neon : 27034 MB/sec May 17 00:06:42.381516 kernel: xor: using function: arm64_neon (27034 MB/sec) May 17 00:06:42.432524 kernel: Btrfs loaded, zoned=no, fsverity=no May 17 00:06:42.447494 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 17 00:06:42.454629 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 17 00:06:42.468561 systemd-udevd[456]: Using default interface naming scheme 'v255'. May 17 00:06:42.471986 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 17 00:06:42.481113 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 17 00:06:42.496708 dracut-pre-trigger[464]: rd.md=0: removing MD RAID activation May 17 00:06:42.538231 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 17 00:06:42.545693 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 17 00:06:42.595701 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 17 00:06:42.603850 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 17 00:06:42.627403 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 17 00:06:42.628548 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 17 00:06:42.630954 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 17 00:06:42.631911 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 17 00:06:42.636659 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 17 00:06:42.662379 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 17 00:06:42.698487 kernel: scsi host0: Virtio SCSI HBA May 17 00:06:42.710509 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5 May 17 00:06:42.711468 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 May 17 00:06:42.721791 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 17 00:06:42.721917 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
May 17 00:06:42.726624 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 17 00:06:42.728696 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 17 00:06:42.728863 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 17 00:06:42.729516 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 17 00:06:42.735694 kernel: ACPI: bus type USB registered May 17 00:06:42.735733 kernel: usbcore: registered new interface driver usbfs May 17 00:06:42.735744 kernel: usbcore: registered new interface driver hub May 17 00:06:42.735753 kernel: usbcore: registered new device driver usb May 17 00:06:42.737705 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 17 00:06:42.750786 kernel: sr 0:0:0:0: Power-on or device reset occurred May 17 00:06:42.750973 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray May 17 00:06:42.751547 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 May 17 00:06:42.752470 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0 May 17 00:06:42.761671 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 17 00:06:42.767609 kernel: sd 0:0:0:1: Power-on or device reset occurred May 17 00:06:42.767806 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) May 17 00:06:42.769477 kernel: sd 0:0:0:1: [sda] Write Protect is off May 17 00:06:42.769644 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08 May 17 00:06:42.769741 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA May 17 00:06:42.770668 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 17 00:06:42.775517 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 17 00:06:42.775539 kernel: GPT:17805311 != 80003071 May 17 00:06:42.775549 kernel: GPT:Alternate GPT header not at the end of the disk. May 17 00:06:42.775559 kernel: GPT:17805311 != 80003071 May 17 00:06:42.775568 kernel: GPT: Use GNU Parted to correct GPT errors. May 17 00:06:42.775585 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 17 00:06:42.775595 kernel: sd 0:0:0:1: [sda] Attached SCSI disk May 17 00:06:42.792891 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller May 17 00:06:42.793119 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 May 17 00:06:42.795463 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 May 17 00:06:42.805035 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 17 00:06:42.813717 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller May 17 00:06:42.814075 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 May 17 00:06:42.814339 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed May 17 00:06:42.823790 kernel: hub 1-0:1.0: USB hub found May 17 00:06:42.823983 kernel: hub 1-0:1.0: 4 ports detected May 17 00:06:42.825662 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. 
May 17 00:06:42.825719 kernel: BTRFS: device fsid 4797bc80-d55e-4b4a-8ede-cb88964b0162 devid 1 transid 43 /dev/sda3 scanned by (udev-worker) (510) May 17 00:06:42.827456 kernel: hub 2-0:1.0: USB hub found May 17 00:06:42.827630 kernel: hub 2-0:1.0: 4 ports detected May 17 00:06:42.830021 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (507) May 17 00:06:42.832655 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. May 17 00:06:42.838964 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. May 17 00:06:42.852750 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. May 17 00:06:42.856939 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. May 17 00:06:42.857941 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. May 17 00:06:42.864702 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 17 00:06:42.872692 disk-uuid[571]: Primary Header is updated. May 17 00:06:42.872692 disk-uuid[571]: Secondary Entries is updated. May 17 00:06:42.872692 disk-uuid[571]: Secondary Header is updated. May 17 00:06:42.886483 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 17 00:06:43.063650 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd May 17 00:06:43.200511 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1 May 17 00:06:43.200599 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 May 17 00:06:43.201852 kernel: usbcore: registered new interface driver usbhid May 17 00:06:43.201913 kernel: usbhid: USB HID core driver May 17 00:06:43.306481 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd May 17 00:06:43.433467 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2 May 17 00:06:43.486458 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0 May 17 00:06:43.899246 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 17 00:06:43.899306 disk-uuid[572]: The operation has completed successfully. May 17 00:06:43.953176 systemd[1]: disk-uuid.service: Deactivated successfully. May 17 00:06:43.953300 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 17 00:06:43.967631 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 17 00:06:43.980123 sh[589]: Success May 17 00:06:43.993470 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" May 17 00:06:44.059521 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 17 00:06:44.068629 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 17 00:06:44.069352 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
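verity-setup above puts /usr behind dm-verity: every block read from /dev/mapper/usr is checked against a sha256 hash tree (accelerated here by the ARMv8 crypto extensions, "sha256-ce"), so a modified block yields an I/O error instead of silently wrong data. A toy sketch of the hash-tree idea only; the real on-disk dm-verity format adds a superblock, salting, and per-path verification:

    import hashlib

    BLOCK = 4096

    def leaf_hashes(data):
        # digest every data block; these form the bottom level of the tree
        return [hashlib.sha256(data[i:i + BLOCK]).digest()
                for i in range(0, len(data), BLOCK)]

    def root_hash(level):
        # fold each block's worth of digests into the next level until one remains
        while len(level) > 1:
            packed = b"".join(level)
            level = [hashlib.sha256(packed[i:i + BLOCK]).digest()
                     for i in range(0, len(packed), BLOCK)]
        return level[0]

    device = bytes(BLOCK * 64)               # stand-in for the /usr partition contents
    expected = root_hash(leaf_hashes(device))

    def read_block(n):
        # real dm-verity re-hashes only the tree path for block n; recomputing
        # the whole root keeps this sketch short
        if root_hash(leaf_hashes(device)) != expected:
            raise IOError("verity: hash mismatch")
        return device[n * BLOCK:(n + 1) * BLOCK]

    read_block(3)
    print("root hash:", expected.hex())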
May 17 00:06:44.086602 kernel: BTRFS info (device dm-0): first mount of filesystem 4797bc80-d55e-4b4a-8ede-cb88964b0162 May 17 00:06:44.086715 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm May 17 00:06:44.086743 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead May 17 00:06:44.089506 kernel: BTRFS info (device dm-0): disabling log replay at mount time May 17 00:06:44.089625 kernel: BTRFS info (device dm-0): using free space tree May 17 00:06:44.097478 kernel: BTRFS info (device dm-0): enabling ssd optimizations May 17 00:06:44.100013 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 17 00:06:44.100725 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 17 00:06:44.106702 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 17 00:06:44.110159 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... May 17 00:06:44.126184 kernel: BTRFS info (device sda6): first mount of filesystem 28a3b64b-9ec4-4fbe-928b-f7ea14288ccf May 17 00:06:44.126244 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm May 17 00:06:44.126265 kernel: BTRFS info (device sda6): using free space tree May 17 00:06:44.132546 kernel: BTRFS info (device sda6): enabling ssd optimizations May 17 00:06:44.132611 kernel: BTRFS info (device sda6): auto enabling async discard May 17 00:06:44.142116 systemd[1]: mnt-oem.mount: Deactivated successfully. May 17 00:06:44.143481 kernel: BTRFS info (device sda6): last unmount of filesystem 28a3b64b-9ec4-4fbe-928b-f7ea14288ccf May 17 00:06:44.152025 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 17 00:06:44.159731 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 17 00:06:44.215136 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 17 00:06:44.223800 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 17 00:06:44.254939 systemd-networkd[771]: lo: Link UP May 17 00:06:44.254957 systemd-networkd[771]: lo: Gained carrier May 17 00:06:44.256991 systemd-networkd[771]: Enumeration completed May 17 00:06:44.257154 systemd[1]: Started systemd-networkd.service - Network Configuration. May 17 00:06:44.258477 systemd-networkd[771]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 17 00:06:44.258480 systemd-networkd[771]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 17 00:06:44.262749 ignition[691]: Ignition 2.19.0 May 17 00:06:44.259704 systemd[1]: Reached target network.target - Network. May 17 00:06:44.262756 ignition[691]: Stage: fetch-offline May 17 00:06:44.260763 systemd-networkd[771]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 17 00:06:44.262793 ignition[691]: no configs at "/usr/lib/ignition/base.d" May 17 00:06:44.260766 systemd-networkd[771]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. 
May 17 00:06:44.262801 ignition[691]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" May 17 00:06:44.261308 systemd-networkd[771]: eth0: Link UP May 17 00:06:44.262960 ignition[691]: parsed url from cmdline: "" May 17 00:06:44.261311 systemd-networkd[771]: eth0: Gained carrier May 17 00:06:44.262963 ignition[691]: no config URL provided May 17 00:06:44.261319 systemd-networkd[771]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 17 00:06:44.262968 ignition[691]: reading system config file "/usr/lib/ignition/user.ign" May 17 00:06:44.264771 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 17 00:06:44.262975 ignition[691]: no config at "/usr/lib/ignition/user.ign" May 17 00:06:44.265961 systemd-networkd[771]: eth1: Link UP May 17 00:06:44.262981 ignition[691]: failed to fetch config: resource requires networking May 17 00:06:44.265965 systemd-networkd[771]: eth1: Gained carrier May 17 00:06:44.263216 ignition[691]: Ignition finished successfully May 17 00:06:44.265974 systemd-networkd[771]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 17 00:06:44.272688 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... May 17 00:06:44.299794 ignition[779]: Ignition 2.19.0 May 17 00:06:44.299814 ignition[779]: Stage: fetch May 17 00:06:44.300548 systemd-networkd[771]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 May 17 00:06:44.300209 ignition[779]: no configs at "/usr/lib/ignition/base.d" May 17 00:06:44.300231 ignition[779]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" May 17 00:06:44.301645 ignition[779]: parsed url from cmdline: "" May 17 00:06:44.301656 ignition[779]: no config URL provided May 17 00:06:44.301669 ignition[779]: reading system config file "/usr/lib/ignition/user.ign" May 17 00:06:44.301693 ignition[779]: no config at "/usr/lib/ignition/user.ign" May 17 00:06:44.301733 ignition[779]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 May 17 00:06:44.303137 ignition[779]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable May 17 00:06:44.327551 systemd-networkd[771]: eth0: DHCPv4 address 168.119.99.67/32, gateway 172.31.1.1 acquired from 172.31.1.1 May 17 00:06:44.504130 ignition[779]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2 May 17 00:06:44.513613 ignition[779]: GET result: OK May 17 00:06:44.513786 ignition[779]: parsing config with SHA512: 0a423b602b92e2725285b209ea7421a8129c0f1801c8fc2ce6872234efbea8558d413d63d913e405b9aa6fd506560ca79bc6a63445577428a562353e3903a197 May 17 00:06:44.522066 unknown[779]: fetched base config from "system" May 17 00:06:44.522479 ignition[779]: fetch: fetch complete May 17 00:06:44.522077 unknown[779]: fetched base config from "system" May 17 00:06:44.522484 ignition[779]: fetch: fetch passed May 17 00:06:44.522084 unknown[779]: fetched user config from "hetzner" May 17 00:06:44.522524 ignition[779]: Ignition finished successfully May 17 00:06:44.528523 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). May 17 00:06:44.534611 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
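The fetch stage above boils down to an HTTP GET against the metadata service, retried until networking is up: attempt #1 fails with "network is unreachable", attempt #2 succeeds once DHCP has configured the interfaces, and Ignition then logs the SHA512 of the config it retrieved. A stripped-down sketch of that loop; it only succeeds when run from a Hetzner instance, and the retry policy here is illustrative:

    import hashlib
    import time
    import urllib.request

    USERDATA_URL = "http://169.254.169.254/hetzner/v1/userdata"

    def fetch_userdata(url=USERDATA_URL, attempts=5, delay=2.0):
        for attempt in range(1, attempts + 1):
            try:
                with urllib.request.urlopen(url, timeout=5) as resp:
                    return resp.read()
            except OSError as err:  # URLError, timeouts, unreachable network
                print(f"GET error: {err}: attempt #{attempt}")
                time.sleep(delay)
        raise RuntimeError("userdata not reachable after retries")

    body = fetch_userdata()
    print("parsing config with SHA512:", hashlib.sha512(body).hexdigest())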
May 17 00:06:44.550148 ignition[786]: Ignition 2.19.0 May 17 00:06:44.550159 ignition[786]: Stage: kargs May 17 00:06:44.550358 ignition[786]: no configs at "/usr/lib/ignition/base.d" May 17 00:06:44.552700 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 17 00:06:44.550368 ignition[786]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" May 17 00:06:44.551363 ignition[786]: kargs: kargs passed May 17 00:06:44.551422 ignition[786]: Ignition finished successfully May 17 00:06:44.559625 systemd[1]: Starting ignition-disks.service - Ignition (disks)... May 17 00:06:44.572303 ignition[792]: Ignition 2.19.0 May 17 00:06:44.572313 ignition[792]: Stage: disks May 17 00:06:44.572552 ignition[792]: no configs at "/usr/lib/ignition/base.d" May 17 00:06:44.572562 ignition[792]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" May 17 00:06:44.573551 ignition[792]: disks: disks passed May 17 00:06:44.574964 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 17 00:06:44.573605 ignition[792]: Ignition finished successfully May 17 00:06:44.576341 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 17 00:06:44.577550 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 17 00:06:44.578526 systemd[1]: Reached target local-fs.target - Local File Systems. May 17 00:06:44.579510 systemd[1]: Reached target sysinit.target - System Initialization. May 17 00:06:44.580572 systemd[1]: Reached target basic.target - Basic System. May 17 00:06:44.593400 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 17 00:06:44.612594 systemd-fsck[800]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks May 17 00:06:44.615586 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 17 00:06:44.621781 systemd[1]: Mounting sysroot.mount - /sysroot... May 17 00:06:44.672462 kernel: EXT4-fs (sda9): mounted filesystem 50a777b7-c00f-4923-84ce-1c186fc0fd3b r/w with ordered data mode. Quota mode: none. May 17 00:06:44.673388 systemd[1]: Mounted sysroot.mount - /sysroot. May 17 00:06:44.674427 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 17 00:06:44.686726 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 17 00:06:44.692694 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 17 00:06:44.696236 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... May 17 00:06:44.699152 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 17 00:06:44.705413 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (808) May 17 00:06:44.699211 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 17 00:06:44.708482 kernel: BTRFS info (device sda6): first mount of filesystem 28a3b64b-9ec4-4fbe-928b-f7ea14288ccf May 17 00:06:44.708526 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm May 17 00:06:44.708539 kernel: BTRFS info (device sda6): using free space tree May 17 00:06:44.707686 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 17 00:06:44.709953 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
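The fsck summary above ("clean, 14/1628000 files, 120691/1617920 blocks") is inodes-used/inodes-total and blocks-used/blocks-total for the freshly checked ROOT filesystem; spelled out:

    inodes_used, inodes_total = 14, 1628000
    blocks_used, blocks_total = 120691, 1617920
    print(f"inodes in use: {inodes_used / inodes_total:.4%}")  # ~0.0009%
    print(f"blocks in use: {blocks_used / blocks_total:.2%}")  # ~7.46%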
May 17 00:06:44.717133 kernel: BTRFS info (device sda6): enabling ssd optimizations May 17 00:06:44.717193 kernel: BTRFS info (device sda6): auto enabling async discard May 17 00:06:44.719734 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 17 00:06:44.763053 coreos-metadata[810]: May 17 00:06:44.762 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 May 17 00:06:44.765625 coreos-metadata[810]: May 17 00:06:44.765 INFO Fetch successful May 17 00:06:44.767594 coreos-metadata[810]: May 17 00:06:44.766 INFO wrote hostname ci-4081-3-3-n-3b0dbcbd78 to /sysroot/etc/hostname May 17 00:06:44.768448 initrd-setup-root[837]: cut: /sysroot/etc/passwd: No such file or directory May 17 00:06:44.770607 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. May 17 00:06:44.775626 initrd-setup-root[845]: cut: /sysroot/etc/group: No such file or directory May 17 00:06:44.781164 initrd-setup-root[852]: cut: /sysroot/etc/shadow: No such file or directory May 17 00:06:44.786803 initrd-setup-root[859]: cut: /sysroot/etc/gshadow: No such file or directory May 17 00:06:44.897593 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 17 00:06:44.907642 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 17 00:06:44.912716 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 17 00:06:44.921507 kernel: BTRFS info (device sda6): last unmount of filesystem 28a3b64b-9ec4-4fbe-928b-f7ea14288ccf May 17 00:06:44.946873 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 17 00:06:44.950030 ignition[927]: INFO : Ignition 2.19.0 May 17 00:06:44.950030 ignition[927]: INFO : Stage: mount May 17 00:06:44.951184 ignition[927]: INFO : no configs at "/usr/lib/ignition/base.d" May 17 00:06:44.951184 ignition[927]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" May 17 00:06:44.953359 ignition[927]: INFO : mount: mount passed May 17 00:06:44.953359 ignition[927]: INFO : Ignition finished successfully May 17 00:06:44.954229 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 17 00:06:44.958688 systemd[1]: Starting ignition-files.service - Ignition (files)... May 17 00:06:45.086670 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 17 00:06:45.097912 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 17 00:06:45.107484 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (939) May 17 00:06:45.108741 kernel: BTRFS info (device sda6): first mount of filesystem 28a3b64b-9ec4-4fbe-928b-f7ea14288ccf May 17 00:06:45.108788 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm May 17 00:06:45.108812 kernel: BTRFS info (device sda6): using free space tree May 17 00:06:45.111670 kernel: BTRFS info (device sda6): enabling ssd optimizations May 17 00:06:45.111735 kernel: BTRFS info (device sda6): auto enabling async discard May 17 00:06:45.115105 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
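flatcar-metadata-hostname above has one job in the initrd: fetch the machine's hostname from the metadata service and write it into the sysroot so the booted system comes up with the right name. A minimal sketch using the endpoint and target path shown in the log; the agent's error handling and other duties are omitted:

    import urllib.request

    HOSTNAME_URL = "http://169.254.169.254/hetzner/v1/metadata/hostname"

    def write_hostname(sysroot="/sysroot"):
        with urllib.request.urlopen(HOSTNAME_URL, timeout=5) as resp:
            hostname = resp.read().decode().strip()
        with open(f"{sysroot}/etc/hostname", "w", encoding="utf-8") as f:
            f.write(hostname + "\n")
        print(f"wrote hostname {hostname} to {sysroot}/etc/hostname")

    write_hostname()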
May 17 00:06:45.137919 ignition[956]: INFO : Ignition 2.19.0 May 17 00:06:45.137919 ignition[956]: INFO : Stage: files May 17 00:06:45.139059 ignition[956]: INFO : no configs at "/usr/lib/ignition/base.d" May 17 00:06:45.139059 ignition[956]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" May 17 00:06:45.141101 ignition[956]: DEBUG : files: compiled without relabeling support, skipping May 17 00:06:45.141101 ignition[956]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 17 00:06:45.141101 ignition[956]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 17 00:06:45.143977 ignition[956]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 17 00:06:45.145142 ignition[956]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 17 00:06:45.145142 ignition[956]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 17 00:06:45.144419 unknown[956]: wrote ssh authorized keys file for user: core May 17 00:06:45.147341 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" May 17 00:06:45.147341 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 May 17 00:06:45.230371 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 17 00:06:45.828952 systemd-networkd[771]: eth1: Gained IPv6LL May 17 00:06:46.009262 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" May 17 00:06:46.010705 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" May 17 00:06:46.010705 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" May 17 00:06:46.010705 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" May 17 00:06:46.010705 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" May 17 00:06:46.010705 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 17 00:06:46.010705 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 17 00:06:46.010705 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 17 00:06:46.010705 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 17 00:06:46.010705 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" May 17 00:06:46.010705 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 17 00:06:46.010705 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" May 17 00:06:46.010705 ignition[956]: INFO : files: 
createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" May 17 00:06:46.010705 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" May 17 00:06:46.010705 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1 May 17 00:06:46.021070 systemd-networkd[771]: eth0: Gained IPv6LL May 17 00:06:46.754884 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK May 17 00:06:47.791744 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" May 17 00:06:47.791744 ignition[956]: INFO : files: op(b): [started] processing unit "prepare-helm.service" May 17 00:06:47.795353 ignition[956]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 17 00:06:47.795353 ignition[956]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 17 00:06:47.795353 ignition[956]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" May 17 00:06:47.795353 ignition[956]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" May 17 00:06:47.795353 ignition[956]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" May 17 00:06:47.795353 ignition[956]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" May 17 00:06:47.795353 ignition[956]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" May 17 00:06:47.795353 ignition[956]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service" May 17 00:06:47.795353 ignition[956]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service" May 17 00:06:47.795353 ignition[956]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json" May 17 00:06:47.795353 ignition[956]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json" May 17 00:06:47.795353 ignition[956]: INFO : files: files passed May 17 00:06:47.795353 ignition[956]: INFO : Ignition finished successfully May 17 00:06:47.798886 systemd[1]: Finished ignition-files.service - Ignition (files). May 17 00:06:47.809346 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 17 00:06:47.811804 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 17 00:06:47.813977 systemd[1]: ignition-quench.service: Deactivated successfully. May 17 00:06:47.814656 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
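Besides plain files, the files stage above installs a unit (prepare-helm.service), a drop-in (00-custom-metadata.conf under coreos-metadata.service.d) and a preset, then records a result file. A sketch of those writes under the sysroot; the unit and drop-in paths come from the log, while their bodies and the preset file name are placeholders the log does not show:

    import os

    SYSROOT = "/sysroot"

    def write_file(rel_path, body):
        path = os.path.join(SYSROOT, rel_path)
        os.makedirs(os.path.dirname(path), exist_ok=True)
        with open(path, "w", encoding="utf-8") as f:
            f.write(body)

    # op(c): the unit itself (real contents come from the Ignition config)
    write_file("etc/systemd/system/prepare-helm.service",
               "[Unit]\nDescription=placeholder\n")

    # op(e): the drop-in for coreos-metadata.service
    write_file("etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf",
               "[Service]\n# placeholder\n")

    # op(f): preset enabling the unit on first boot (file name assumed)
    write_file("etc/systemd/system-preset/20-ignition.preset",
               "enable prepare-helm.service\n")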
May 17 00:06:47.825788 initrd-setup-root-after-ignition[984]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 17 00:06:47.825788 initrd-setup-root-after-ignition[984]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 17 00:06:47.828961 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 17 00:06:47.830850 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 17 00:06:47.831832 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 17 00:06:47.837747 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 17 00:06:47.871619 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 17 00:06:47.872625 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 17 00:06:47.874099 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 17 00:06:47.875032 systemd[1]: Reached target initrd.target - Initrd Default Target. May 17 00:06:47.876084 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 17 00:06:47.881678 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 17 00:06:47.896176 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 17 00:06:47.903793 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 17 00:06:47.914919 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 17 00:06:47.915717 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 17 00:06:47.917537 systemd[1]: Stopped target timers.target - Timer Units. May 17 00:06:47.919363 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 17 00:06:47.919507 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 17 00:06:47.920948 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 17 00:06:47.921608 systemd[1]: Stopped target basic.target - Basic System. May 17 00:06:47.922649 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 17 00:06:47.923661 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 17 00:06:47.924641 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 17 00:06:47.925731 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 17 00:06:47.926786 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 17 00:06:47.927989 systemd[1]: Stopped target sysinit.target - System Initialization. May 17 00:06:47.929033 systemd[1]: Stopped target local-fs.target - Local File Systems. May 17 00:06:47.930102 systemd[1]: Stopped target swap.target - Swaps. May 17 00:06:47.930950 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 17 00:06:47.931094 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 17 00:06:47.932328 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 17 00:06:47.933015 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 17 00:06:47.934066 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 17 00:06:47.934537 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
May 17 00:06:47.935308 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 17 00:06:47.935428 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 17 00:06:47.936909 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 17 00:06:47.937067 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 17 00:06:47.938171 systemd[1]: ignition-files.service: Deactivated successfully. May 17 00:06:47.938261 systemd[1]: Stopped ignition-files.service - Ignition (files). May 17 00:06:47.939429 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. May 17 00:06:47.939553 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. May 17 00:06:47.949833 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 17 00:06:47.955750 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 17 00:06:47.956512 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 17 00:06:47.956687 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 17 00:06:47.961258 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 17 00:06:47.961371 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 17 00:06:47.967774 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 17 00:06:47.969503 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 17 00:06:47.977937 ignition[1008]: INFO : Ignition 2.19.0 May 17 00:06:47.977937 ignition[1008]: INFO : Stage: umount May 17 00:06:47.977937 ignition[1008]: INFO : no configs at "/usr/lib/ignition/base.d" May 17 00:06:47.977937 ignition[1008]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" May 17 00:06:47.977937 ignition[1008]: INFO : umount: umount passed May 17 00:06:47.982264 ignition[1008]: INFO : Ignition finished successfully May 17 00:06:47.979846 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 17 00:06:47.980714 systemd[1]: ignition-mount.service: Deactivated successfully. May 17 00:06:47.982501 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 17 00:06:47.984750 systemd[1]: ignition-disks.service: Deactivated successfully. May 17 00:06:47.984832 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 17 00:06:47.985554 systemd[1]: ignition-kargs.service: Deactivated successfully. May 17 00:06:47.985597 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 17 00:06:47.986701 systemd[1]: ignition-fetch.service: Deactivated successfully. May 17 00:06:47.986740 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). May 17 00:06:47.987607 systemd[1]: Stopped target network.target - Network. May 17 00:06:47.988404 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 17 00:06:47.988485 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 17 00:06:47.990141 systemd[1]: Stopped target paths.target - Path Units. May 17 00:06:47.990858 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 17 00:06:47.992631 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 17 00:06:47.993615 systemd[1]: Stopped target slices.target - Slice Units. May 17 00:06:47.994409 systemd[1]: Stopped target sockets.target - Socket Units. 
May 17 00:06:47.995384 systemd[1]: iscsid.socket: Deactivated successfully. May 17 00:06:47.995428 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 17 00:06:47.996555 systemd[1]: iscsiuio.socket: Deactivated successfully. May 17 00:06:47.996593 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 17 00:06:47.997374 systemd[1]: ignition-setup.service: Deactivated successfully. May 17 00:06:47.997426 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 17 00:06:47.998332 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 17 00:06:47.998374 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 17 00:06:47.999344 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 17 00:06:48.000096 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 17 00:06:48.003923 systemd[1]: sysroot-boot.service: Deactivated successfully. May 17 00:06:48.004028 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 17 00:06:48.005172 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 17 00:06:48.005260 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 17 00:06:48.005489 systemd-networkd[771]: eth1: DHCPv6 lease lost May 17 00:06:48.007789 systemd[1]: systemd-resolved.service: Deactivated successfully. May 17 00:06:48.007893 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 17 00:06:48.010224 systemd-networkd[771]: eth0: DHCPv6 lease lost May 17 00:06:48.010781 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 17 00:06:48.010840 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 17 00:06:48.012304 systemd[1]: systemd-networkd.service: Deactivated successfully. May 17 00:06:48.012402 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 17 00:06:48.015171 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 17 00:06:48.015235 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 17 00:06:48.021675 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 17 00:06:48.022160 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 17 00:06:48.022227 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 17 00:06:48.023263 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 17 00:06:48.023306 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 17 00:06:48.025284 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 17 00:06:48.025329 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 17 00:06:48.026115 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 17 00:06:48.039191 systemd[1]: network-cleanup.service: Deactivated successfully. May 17 00:06:48.039306 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 17 00:06:48.050419 systemd[1]: systemd-udevd.service: Deactivated successfully. May 17 00:06:48.050786 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 17 00:06:48.053429 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 17 00:06:48.053523 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 17 00:06:48.054613 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. 
May 17 00:06:48.054653 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 17 00:06:48.056319 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 17 00:06:48.056380 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 17 00:06:48.059621 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 17 00:06:48.059679 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 17 00:06:48.061358 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 17 00:06:48.061402 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 17 00:06:48.068665 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 17 00:06:48.069222 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 17 00:06:48.069279 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 17 00:06:48.071134 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. May 17 00:06:48.071179 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 17 00:06:48.072950 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 17 00:06:48.072993 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 17 00:06:48.073660 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 17 00:06:48.073702 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 17 00:06:48.080257 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 17 00:06:48.080378 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 17 00:06:48.081913 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 17 00:06:48.083552 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 17 00:06:48.105674 systemd[1]: Switching root. May 17 00:06:48.138396 systemd-journald[236]: Journal stopped May 17 00:06:49.107675 systemd-journald[236]: Received SIGTERM from PID 1 (systemd). May 17 00:06:49.107742 kernel: SELinux: policy capability network_peer_controls=1 May 17 00:06:49.107759 kernel: SELinux: policy capability open_perms=1 May 17 00:06:49.107772 kernel: SELinux: policy capability extended_socket_class=1 May 17 00:06:49.107786 kernel: SELinux: policy capability always_check_network=0 May 17 00:06:49.107796 kernel: SELinux: policy capability cgroup_seclabel=1 May 17 00:06:49.107809 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 17 00:06:49.107818 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 17 00:06:49.107832 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 17 00:06:49.107842 kernel: audit: type=1403 audit(1747440408.301:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 17 00:06:49.107852 systemd[1]: Successfully loaded SELinux policy in 38.166ms. May 17 00:06:49.107869 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.214ms. May 17 00:06:49.107880 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) May 17 00:06:49.107891 systemd[1]: Detected virtualization kvm. 
May 17 00:06:49.107902 systemd[1]: Detected architecture arm64. May 17 00:06:49.107912 systemd[1]: Detected first boot. May 17 00:06:49.107927 systemd[1]: Hostname set to <ci-4081-3-3-n-3b0dbcbd78>. May 17 00:06:49.107938 systemd[1]: Initializing machine ID from VM UUID. May 17 00:06:49.107949 zram_generator::config[1052]: No configuration found. May 17 00:06:49.107960 systemd[1]: Populated /etc with preset unit settings. May 17 00:06:49.107971 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 17 00:06:49.107982 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 17 00:06:49.108002 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 17 00:06:49.108016 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 17 00:06:49.108030 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 17 00:06:49.108041 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 17 00:06:49.108051 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 17 00:06:49.108061 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 17 00:06:49.108072 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 17 00:06:49.108082 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 17 00:06:49.108093 systemd[1]: Created slice user.slice - User and Session Slice. May 17 00:06:49.108103 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 17 00:06:49.108114 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 17 00:06:49.108126 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 17 00:06:49.108137 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 17 00:06:49.108147 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 17 00:06:49.108159 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 17 00:06:49.108170 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... May 17 00:06:49.108180 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 17 00:06:49.108190 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 17 00:06:49.108203 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 17 00:06:49.108214 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 17 00:06:49.108224 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 17 00:06:49.108235 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 17 00:06:49.108247 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 17 00:06:49.108259 systemd[1]: Reached target slices.target - Slice Units. May 17 00:06:49.108270 systemd[1]: Reached target swap.target - Swaps. May 17 00:06:49.108282 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 17 00:06:49.108295 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 17 00:06:49.108306 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 17 00:06:49.108318 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 17 00:06:49.108333 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 17 00:06:49.108344 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 17 00:06:49.108355 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 17 00:06:49.108366 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 17 00:06:49.108376 systemd[1]: Mounting media.mount - External Media Directory... May 17 00:06:49.108387 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 17 00:06:49.108400 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 17 00:06:49.108412 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 17 00:06:49.108425 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 17 00:06:49.108465 systemd[1]: Reached target machines.target - Containers. May 17 00:06:49.108483 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 17 00:06:49.108497 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 17 00:06:49.108509 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 17 00:06:49.108521 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 17 00:06:49.108534 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 17 00:06:49.108546 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 17 00:06:49.108557 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 17 00:06:49.108569 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 17 00:06:49.108580 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 17 00:06:49.108593 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 17 00:06:49.108606 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 17 00:06:49.108618 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 17 00:06:49.108629 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 17 00:06:49.108641 systemd[1]: Stopped systemd-fsck-usr.service. May 17 00:06:49.108652 systemd[1]: Starting systemd-journald.service - Journal Service... May 17 00:06:49.108663 kernel: loop: module loaded May 17 00:06:49.108674 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 17 00:06:49.108688 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 17 00:06:49.108700 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 17 00:06:49.108714 kernel: ACPI: bus type drm_connector registered May 17 00:06:49.108724 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 17 00:06:49.108735 systemd[1]: verity-setup.service: Deactivated successfully. May 17 00:06:49.108746 systemd[1]: Stopped verity-setup.service. 
May 17 00:06:49.108758 kernel: fuse: init (API version 7.39) May 17 00:06:49.108771 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 17 00:06:49.108783 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 17 00:06:49.108795 systemd[1]: Mounted media.mount - External Media Directory. May 17 00:06:49.108807 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 17 00:06:49.108818 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 17 00:06:49.108830 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 17 00:06:49.108843 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 17 00:06:49.108855 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 17 00:06:49.108871 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 17 00:06:49.108884 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 00:06:49.108897 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 17 00:06:49.108909 systemd[1]: modprobe@drm.service: Deactivated successfully. May 17 00:06:49.108922 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 17 00:06:49.108936 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 00:06:49.108949 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 17 00:06:49.108962 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 17 00:06:49.108973 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 17 00:06:49.109016 systemd-journald[1122]: Collecting audit messages is disabled. May 17 00:06:49.109042 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 00:06:49.109054 systemd-journald[1122]: Journal started May 17 00:06:49.109079 systemd-journald[1122]: Runtime Journal (/run/log/journal/4d496235578948428fdd418f1d60d61e) is 8.0M, max 76.6M, 68.6M free. May 17 00:06:48.816912 systemd[1]: Queued start job for default target multi-user.target. May 17 00:06:48.839141 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. May 17 00:06:48.839810 systemd[1]: systemd-journald.service: Deactivated successfully. May 17 00:06:49.110630 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 17 00:06:49.112853 systemd[1]: Started systemd-journald.service - Journal Service. May 17 00:06:49.115476 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 17 00:06:49.116617 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 17 00:06:49.117895 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 17 00:06:49.130422 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 17 00:06:49.137854 systemd[1]: Reached target network-pre.target - Preparation for Network. May 17 00:06:49.145677 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 17 00:06:49.151683 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 17 00:06:49.154557 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 17 00:06:49.154599 systemd[1]: Reached target local-fs.target - Local File Systems. 
May 17 00:06:49.157700 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). May 17 00:06:49.161652 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 17 00:06:49.167693 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 17 00:06:49.170788 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 17 00:06:49.172607 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 17 00:06:49.176859 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 17 00:06:49.177625 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 17 00:06:49.182802 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 17 00:06:49.183505 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 17 00:06:49.187286 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 17 00:06:49.190825 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 17 00:06:49.193162 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 17 00:06:49.198510 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 17 00:06:49.200096 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 17 00:06:49.201389 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 17 00:06:49.202815 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 17 00:06:49.205779 systemd-journald[1122]: Time spent on flushing to /var/log/journal/4d496235578948428fdd418f1d60d61e is 82.200ms for 1129 entries. May 17 00:06:49.205779 systemd-journald[1122]: System Journal (/var/log/journal/4d496235578948428fdd418f1d60d61e) is 8.0M, max 584.8M, 576.8M free. May 17 00:06:49.316196 systemd-journald[1122]: Received client request to flush runtime journal. May 17 00:06:49.316260 kernel: loop0: detected capacity change from 0 to 8 May 17 00:06:49.316288 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 17 00:06:49.316302 kernel: loop1: detected capacity change from 0 to 211168 May 17 00:06:49.217603 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... May 17 00:06:49.259847 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 17 00:06:49.260782 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 17 00:06:49.275716 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... May 17 00:06:49.276699 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 17 00:06:49.287202 udevadm[1172]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. May 17 00:06:49.291773 systemd-tmpfiles[1167]: ACLs are not supported, ignoring. May 17 00:06:49.291783 systemd-tmpfiles[1167]: ACLs are not supported, ignoring. May 17 00:06:49.303464 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
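The journald accounting above is self-consistent: used plus free equals the cap for both the runtime journal (8.0M + 68.6M = 76.6M) and the persistent one (8.0M + 576.8M = 584.8M), and the flush burst works out to roughly 13,700 entries per second:

    runtime_used, runtime_max, runtime_free = 8.0, 76.6, 68.6
    system_used, system_max, system_free = 8.0, 584.8, 576.8
    assert round(runtime_max - runtime_used, 1) == runtime_free
    assert round(system_max - system_used, 1) == system_free

    entries, seconds = 1129, 0.0822    # "82.200ms for 1129 entries"
    print(f"flush rate: {entries / seconds:.0f} entries/s")  # ~13735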
May 17 00:06:49.312680 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 17 00:06:49.319772 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 17 00:06:49.331475 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 17 00:06:49.333490 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. May 17 00:06:49.353082 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 17 00:06:49.357508 kernel: loop2: detected capacity change from 0 to 114328 May 17 00:06:49.358033 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 17 00:06:49.387682 systemd-tmpfiles[1189]: ACLs are not supported, ignoring. May 17 00:06:49.387701 systemd-tmpfiles[1189]: ACLs are not supported, ignoring. May 17 00:06:49.391960 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 17 00:06:49.401638 kernel: loop3: detected capacity change from 0 to 114432 May 17 00:06:49.436393 kernel: loop4: detected capacity change from 0 to 8 May 17 00:06:49.441466 kernel: loop5: detected capacity change from 0 to 211168 May 17 00:06:49.466476 kernel: loop6: detected capacity change from 0 to 114328 May 17 00:06:49.487472 kernel: loop7: detected capacity change from 0 to 114432 May 17 00:06:49.505308 (sd-merge)[1194]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. May 17 00:06:49.506483 (sd-merge)[1194]: Merged extensions into '/usr'. May 17 00:06:49.510973 systemd[1]: Reloading requested from client PID 1166 ('systemd-sysext') (unit systemd-sysext.service)... May 17 00:06:49.511026 systemd[1]: Reloading... May 17 00:06:49.613161 zram_generator::config[1218]: No configuration found. May 17 00:06:49.690475 ldconfig[1161]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 17 00:06:49.764806 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:06:49.822923 systemd[1]: Reloading finished in 311 ms. May 17 00:06:49.847534 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 17 00:06:49.848859 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 17 00:06:49.859751 systemd[1]: Starting ensure-sysext.service... May 17 00:06:49.862897 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 17 00:06:49.871365 systemd[1]: Reloading requested from client PID 1258 ('systemctl') (unit ensure-sysext.service)... May 17 00:06:49.871525 systemd[1]: Reloading... May 17 00:06:49.912078 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 17 00:06:49.912364 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 17 00:06:49.913088 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 17 00:06:49.913318 systemd-tmpfiles[1259]: ACLs are not supported, ignoring. May 17 00:06:49.913365 systemd-tmpfiles[1259]: ACLs are not supported, ignoring. May 17 00:06:49.921951 systemd-tmpfiles[1259]: Detected autofs mount point /boot during canonicalization of boot. 
May 17 00:06:49.921968 systemd-tmpfiles[1259]: Skipping /boot May 17 00:06:49.935184 systemd-tmpfiles[1259]: Detected autofs mount point /boot during canonicalization of boot. May 17 00:06:49.935200 systemd-tmpfiles[1259]: Skipping /boot May 17 00:06:49.964472 zram_generator::config[1285]: No configuration found. May 17 00:06:50.096268 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:06:50.153702 systemd[1]: Reloading finished in 281 ms. May 17 00:06:50.174380 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 17 00:06:50.175735 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 17 00:06:50.191651 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... May 17 00:06:50.199265 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 17 00:06:50.204350 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 17 00:06:50.210357 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 17 00:06:50.219101 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 17 00:06:50.228756 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 17 00:06:50.233918 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 17 00:06:50.239044 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 17 00:06:50.244423 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 17 00:06:50.249241 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 17 00:06:50.250456 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 17 00:06:50.259752 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 17 00:06:50.261217 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 17 00:06:50.274208 systemd-udevd[1330]: Using default interface naming scheme 'v255'. May 17 00:06:50.277503 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 17 00:06:50.282133 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 17 00:06:50.286782 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 17 00:06:50.287057 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 17 00:06:50.291916 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 17 00:06:50.295128 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 17 00:06:50.296143 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 17 00:06:50.302087 systemd[1]: Finished ensure-sysext.service. May 17 00:06:50.304822 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 00:06:50.304956 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
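A few records up, (sd-merge) attached the extension images (the loopN capacity changes) and merged 'containerd-flatcar', 'docker-flatcar', 'kubernetes' and 'oem-hetzner' into /usr, which is what triggered the daemon reloads. systemd-sysext implements the merge as a read-only overlayfs stack on /usr; a conceptual sketch with illustrative mount points (the real tool discovers images under /var/lib/extensions and friends and manages the hierarchy itself):

    import subprocess

    def merge_extensions(extension_usr_dirs, base="/usr"):
        # overlayfs lists lowerdirs highest-priority first; with no upperdir
        # the resulting mount is read-only, matching sysext semantics
        lowerdir = ":".join(extension_usr_dirs + [base])
        subprocess.run(
            ["mount", "-t", "overlay", "overlay",
             "-o", f"lowerdir={lowerdir}", base],
            check=True)

    merge_extensions([
        "/run/extensions/containerd-flatcar/usr",   # illustrative paths
        "/run/extensions/docker-flatcar/usr",
        "/run/extensions/kubernetes/usr",
        "/run/extensions/oem-hetzner/usr",
    ])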
May 17 00:06:50.308084 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 17 00:06:50.312901 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 17 00:06:50.320125 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... May 17 00:06:50.321086 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 17 00:06:50.328032 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 17 00:06:50.329833 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 00:06:50.330026 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 17 00:06:50.331047 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 00:06:50.331187 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 17 00:06:50.332048 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 17 00:06:50.351882 systemd[1]: modprobe@drm.service: Deactivated successfully. May 17 00:06:50.352119 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 17 00:06:50.365513 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 17 00:06:50.366908 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 17 00:06:50.375540 augenrules[1374]: No rules May 17 00:06:50.376870 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. May 17 00:06:50.389547 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 17 00:06:50.446618 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. May 17 00:06:50.482883 systemd-networkd[1361]: lo: Link UP May 17 00:06:50.483790 systemd-networkd[1361]: lo: Gained carrier May 17 00:06:50.485366 systemd-networkd[1361]: Enumeration completed May 17 00:06:50.486227 systemd[1]: Started systemd-networkd.service - Network Configuration. May 17 00:06:50.486382 systemd-networkd[1361]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 17 00:06:50.486449 systemd-networkd[1361]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 17 00:06:50.487488 systemd-networkd[1361]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 17 00:06:50.487571 systemd-networkd[1361]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. May 17 00:06:50.488375 systemd-networkd[1361]: eth0: Link UP May 17 00:06:50.488947 systemd-networkd[1361]: eth0: Gained carrier May 17 00:06:50.489089 systemd-networkd[1361]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 17 00:06:50.499857 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
May 17 00:06:50.503266 systemd-networkd[1361]: eth1: Link UP May 17 00:06:50.503394 systemd-networkd[1361]: eth1: Gained carrier May 17 00:06:50.503483 systemd-networkd[1361]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 17 00:06:50.521448 systemd-networkd[1361]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 17 00:06:50.535671 systemd-networkd[1361]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 May 17 00:06:50.552696 systemd-networkd[1361]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 17 00:06:50.562868 systemd-networkd[1361]: eth0: DHCPv4 address 168.119.99.67/32, gateway 172.31.1.1 acquired from 172.31.1.1 May 17 00:06:50.577455 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 17 00:06:50.578229 systemd[1]: Reached target time-set.target - System Time Set. May 17 00:06:50.586218 systemd-resolved[1328]: Positive Trust Anchors: May 17 00:06:50.586240 systemd-resolved[1328]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 17 00:06:50.586273 systemd-resolved[1328]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 17 00:06:50.591479 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 43 scanned by (udev-worker) (1382) May 17 00:06:50.596269 systemd-resolved[1328]: Using system hostname 'ci-4081-3-3-n-3b0dbcbd78'. May 17 00:06:50.600301 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 17 00:06:50.602683 systemd[1]: Reached target network.target - Network. May 17 00:06:50.603628 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 17 00:06:50.657464 kernel: mousedev: PS/2 mouse device common for all mice May 17 00:06:50.672674 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. May 17 00:06:50.672822 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 17 00:06:50.682805 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 17 00:06:50.692453 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 17 00:06:50.696142 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 17 00:06:50.697700 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 17 00:06:50.697733 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 17 00:06:50.698104 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
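Both NICs matched the catch-all /usr/lib/systemd/network/zz-default.network and picked up /32 DHCPv4 leases with on-link gateways (eth0: 168.119.99.67 via 172.31.1.1, eth1: 10.0.0.3 via 10.0.0.1), and systemd-resolved seeded the root DNSSEC trust anchor and derived the hostname. A sketch of how this state could be verified at runtime; standard networkctl/resolvectl invocations, not commands from the log:

    # Which .network file matched, plus addresses, gateway and DNS per link
    networkctl status eth0

    # Link list with operational state (carrier/routable)
    networkctl list

    # Resolver state, including per-link DNS servers and DNSSEC setting
    resolvectl status

The repeated "potentially unpredictable interface name" notice is networkd warning that matching a kernel-assigned name like eth0 can change across reboots; pinning [Match] on MACAddress in a custom .network file would silence it.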
May 17 00:06:50.698678 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 17 00:06:50.702717 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 00:06:50.703463 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 17 00:06:50.706917 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 17 00:06:50.708934 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 00:06:50.709897 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 17 00:06:50.710749 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 17 00:06:50.734103 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 17 00:06:50.739951 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. May 17 00:06:50.754362 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 17 00:06:50.762086 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0 May 17 00:06:50.762287 kernel: [drm] features: -virgl +edid -resource_blob -host_visible May 17 00:06:50.762324 kernel: [drm] features: -context_init May 17 00:06:50.762336 kernel: [drm] number of scanouts: 1 May 17 00:06:50.762350 kernel: [drm] number of cap sets: 0 May 17 00:06:50.765462 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0 May 17 00:06:50.773966 kernel: Console: switching to colour frame buffer device 160x50 May 17 00:06:50.778475 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device May 17 00:06:50.790929 systemd-timesyncd[1356]: Contacted time server 178.215.228.24:123 (0.flatcar.pool.ntp.org). May 17 00:06:50.791006 systemd-timesyncd[1356]: Initial clock synchronization to Sat 2025-05-17 00:06:50.924504 UTC. May 17 00:06:50.791949 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 17 00:06:50.797938 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 17 00:06:50.798599 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 17 00:06:50.810793 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 17 00:06:50.866632 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 17 00:06:50.912701 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. May 17 00:06:50.920789 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... May 17 00:06:50.935453 lvm[1440]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 17 00:06:50.962270 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. May 17 00:06:50.964761 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 17 00:06:50.966675 systemd[1]: Reached target sysinit.target - System Initialization. May 17 00:06:50.967819 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 17 00:06:50.968756 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 17 00:06:50.969789 systemd[1]: Started logrotate.timer - Daily rotation of log files. 
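A few entries back, systemd-timesyncd reached 178.215.228.24:123 (0.flatcar.pool.ntp.org) and stepped the clock forward by roughly 130 ms (local 00:06:50.791 → synchronized 00:06:50.924), which is why timestamps jump slightly at that point. Sync state could be confirmed later with timedatectl; standard invocations, not from the log:

    # NTP server, poll interval, offset and jitter
    timedatectl timesync-status

    # Summary including "System clock synchronized: yes" and "NTP service: active"
    timedatectl status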
May 17 00:06:50.970581 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 17 00:06:50.971407 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 17 00:06:50.972084 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 17 00:06:50.972124 systemd[1]: Reached target paths.target - Path Units. May 17 00:06:50.973214 systemd[1]: Reached target timers.target - Timer Units. May 17 00:06:50.975071 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 17 00:06:50.977291 systemd[1]: Starting docker.socket - Docker Socket for the API... May 17 00:06:50.985899 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 17 00:06:50.987955 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... May 17 00:06:50.989200 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 17 00:06:50.989886 systemd[1]: Reached target sockets.target - Socket Units. May 17 00:06:50.990418 systemd[1]: Reached target basic.target - Basic System. May 17 00:06:50.990948 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 17 00:06:50.991020 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 17 00:06:50.993585 systemd[1]: Starting containerd.service - containerd container runtime... May 17 00:06:50.998102 lvm[1444]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 17 00:06:51.003115 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... May 17 00:06:51.008649 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 17 00:06:51.012638 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 17 00:06:51.015668 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 17 00:06:51.016214 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 17 00:06:51.027644 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 17 00:06:51.031544 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 17 00:06:51.035176 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. May 17 00:06:51.038825 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 17 00:06:51.042783 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 17 00:06:51.046118 systemd[1]: Starting systemd-logind.service - User Login Management... May 17 00:06:51.047419 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 17 00:06:51.047900 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 17 00:06:51.048593 systemd[1]: Starting update-engine.service - Update Engine... May 17 00:06:51.050618 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 17 00:06:51.053227 jq[1450]: false May 17 00:06:51.053498 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. 
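This unit fan-out is ordinary socket and timer activation: docker.socket and sshd.socket are listening before either daemon runs, and logrotate.timer, systemd-tmpfiles-clean.timer and mdadm.timer are armed for later. The live view could be pulled with systemctl; standard verbs, nothing from this log:

    # Listening sockets and the unit each one will activate on first connection
    systemctl list-sockets

    # Armed timers with their next and last elapse times
    systemctl list-timers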
May 17 00:06:51.066785 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 17 00:06:51.068518 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 17 00:06:51.088372 coreos-metadata[1446]: May 17 00:06:51.088 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 May 17 00:06:51.093728 coreos-metadata[1446]: May 17 00:06:51.089 INFO Fetch successful May 17 00:06:51.093728 coreos-metadata[1446]: May 17 00:06:51.089 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 May 17 00:06:51.109624 coreos-metadata[1446]: May 17 00:06:51.104 INFO Fetch successful May 17 00:06:51.095599 dbus-daemon[1447]: [system] SELinux support is enabled May 17 00:06:51.095786 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 17 00:06:51.098385 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 17 00:06:51.098413 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 17 00:06:51.099190 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 17 00:06:51.099205 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 17 00:06:51.126265 extend-filesystems[1451]: Found loop4 May 17 00:06:51.126265 extend-filesystems[1451]: Found loop5 May 17 00:06:51.126265 extend-filesystems[1451]: Found loop6 May 17 00:06:51.126265 extend-filesystems[1451]: Found loop7 May 17 00:06:51.126265 extend-filesystems[1451]: Found sda May 17 00:06:51.126265 extend-filesystems[1451]: Found sda1 May 17 00:06:51.126265 extend-filesystems[1451]: Found sda2 May 17 00:06:51.126265 extend-filesystems[1451]: Found sda3 May 17 00:06:51.126265 extend-filesystems[1451]: Found usr May 17 00:06:51.126265 extend-filesystems[1451]: Found sda4 May 17 00:06:51.126265 extend-filesystems[1451]: Found sda6 May 17 00:06:51.126265 extend-filesystems[1451]: Found sda7 May 17 00:06:51.126265 extend-filesystems[1451]: Found sda9 May 17 00:06:51.126265 extend-filesystems[1451]: Checking size of /dev/sda9 May 17 00:06:51.127301 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 17 00:06:51.188138 tar[1468]: linux-arm64/LICENSE May 17 00:06:51.188138 tar[1468]: linux-arm64/helm May 17 00:06:51.188387 jq[1460]: true May 17 00:06:51.188504 update_engine[1459]: I20250517 00:06:51.128834 1459 main.cc:92] Flatcar Update Engine starting May 17 00:06:51.188504 update_engine[1459]: I20250517 00:06:51.147655 1459 update_check_scheduler.cc:74] Next update check in 7m56s May 17 00:06:51.127489 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 17 00:06:51.130325 systemd[1]: motdgen.service: Deactivated successfully. May 17 00:06:51.197097 extend-filesystems[1451]: Resized partition /dev/sda9 May 17 00:06:51.132044 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 17 00:06:51.197847 jq[1483]: true May 17 00:06:51.213493 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks May 17 00:06:51.145810 systemd[1]: Started update-engine.service - Update Engine. 
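coreos-metadata fetched the instance description from Hetzner's link-local metadata service in the two round trips logged above. The same data could be pulled by hand from inside the VM; a sketch using the URLs exactly as they appear in the log:

    # Full instance metadata (IMDS is only reachable from the instance itself)
    curl -s http://169.254.169.254/hetzner/v1/metadata

    # Attached private networks, the second fetch above
    curl -s http://169.254.169.254/hetzner/v1/metadata/private-networks

Meanwhile extend-filesystems has walked the block devices and decided the root filesystem on /dev/sda9 needs growing; the resize2fs run that completes a few entries later takes it from 1617920 to 9393147 4k blocks online, without unmounting /.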
May 17 00:06:51.214486 extend-filesystems[1497]: resize2fs 1.47.1 (20-May-2024) May 17 00:06:51.155658 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 17 00:06:51.162877 (ntainerd)[1478]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 17 00:06:51.209377 systemd-logind[1457]: New seat seat0. May 17 00:06:51.217750 systemd-logind[1457]: Watching system buttons on /dev/input/event0 (Power Button) May 17 00:06:51.218165 systemd-logind[1457]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard) May 17 00:06:51.218594 systemd[1]: Started systemd-logind.service - User Login Management. May 17 00:06:51.225629 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. May 17 00:06:51.231724 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 17 00:06:51.318534 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 43 scanned by (udev-worker) (1386) May 17 00:06:51.318981 bash[1516]: Updated "/home/core/.ssh/authorized_keys" May 17 00:06:51.325050 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 17 00:06:51.342849 systemd[1]: Starting sshkeys.service... May 17 00:06:51.369486 kernel: EXT4-fs (sda9): resized filesystem to 9393147 May 17 00:06:51.376127 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. May 17 00:06:51.390753 extend-filesystems[1497]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required May 17 00:06:51.390753 extend-filesystems[1497]: old_desc_blocks = 1, new_desc_blocks = 5 May 17 00:06:51.390753 extend-filesystems[1497]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. May 17 00:06:51.394522 extend-filesystems[1451]: Resized filesystem in /dev/sda9 May 17 00:06:51.394522 extend-filesystems[1451]: Found sr0 May 17 00:06:51.414281 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... May 17 00:06:51.416759 systemd[1]: extend-filesystems.service: Deactivated successfully. May 17 00:06:51.416969 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 17 00:06:51.478271 containerd[1478]: time="2025-05-17T00:06:51.478183042Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 May 17 00:06:51.490554 coreos-metadata[1526]: May 17 00:06:51.490 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 May 17 00:06:51.495474 coreos-metadata[1526]: May 17 00:06:51.494 INFO Fetch successful May 17 00:06:51.498576 unknown[1526]: wrote ssh authorized keys file for user: core May 17 00:06:51.514221 containerd[1478]: time="2025-05-17T00:06:51.514167413Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 17 00:06:51.518505 containerd[1478]: time="2025-05-17T00:06:51.518430558Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.90-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 17 00:06:51.518618 containerd[1478]: time="2025-05-17T00:06:51.518601325Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." 
type=io.containerd.event.v1 May 17 00:06:51.518693 containerd[1478]: time="2025-05-17T00:06:51.518678148Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 17 00:06:51.518906 containerd[1478]: time="2025-05-17T00:06:51.518886126Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 May 17 00:06:51.519246 containerd[1478]: time="2025-05-17T00:06:51.519224893Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 May 17 00:06:51.520087 containerd[1478]: time="2025-05-17T00:06:51.520060751Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 May 17 00:06:51.520182 containerd[1478]: time="2025-05-17T00:06:51.520166855Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 17 00:06:51.520421 containerd[1478]: time="2025-05-17T00:06:51.520398665Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 17 00:06:51.521529 containerd[1478]: time="2025-05-17T00:06:51.520824951Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 17 00:06:51.521529 containerd[1478]: time="2025-05-17T00:06:51.520852239Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 May 17 00:06:51.521529 containerd[1478]: time="2025-05-17T00:06:51.520872573Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 17 00:06:51.521529 containerd[1478]: time="2025-05-17T00:06:51.520967941Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 17 00:06:51.521529 containerd[1478]: time="2025-05-17T00:06:51.521161888Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 17 00:06:51.521529 containerd[1478]: time="2025-05-17T00:06:51.521264617Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 17 00:06:51.521529 containerd[1478]: time="2025-05-17T00:06:51.521279217Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 17 00:06:51.521529 containerd[1478]: time="2025-05-17T00:06:51.521370070Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 17 00:06:51.521529 containerd[1478]: time="2025-05-17T00:06:51.521418140Z" level=info msg="metadata content store policy set" policy=shared May 17 00:06:51.527854 containerd[1478]: time="2025-05-17T00:06:51.527791040Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 17 00:06:51.527970 containerd[1478]: time="2025-05-17T00:06:51.527954079Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." 
type=io.containerd.differ.v1 May 17 00:06:51.528162 containerd[1478]: time="2025-05-17T00:06:51.528145180Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 May 17 00:06:51.528316 containerd[1478]: time="2025-05-17T00:06:51.528221433Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 May 17 00:06:51.528316 containerd[1478]: time="2025-05-17T00:06:51.528252626Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 17 00:06:51.528983 update-ssh-keys[1536]: Updated "/home/core/.ssh/authorized_keys" May 17 00:06:51.530380 containerd[1478]: time="2025-05-17T00:06:51.529343434Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 17 00:06:51.530380 containerd[1478]: time="2025-05-17T00:06:51.529952728Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 17 00:06:51.530380 containerd[1478]: time="2025-05-17T00:06:51.530085429Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 May 17 00:06:51.530380 containerd[1478]: time="2025-05-17T00:06:51.530103119Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 May 17 00:06:51.531438 containerd[1478]: time="2025-05-17T00:06:51.530118207Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 May 17 00:06:51.531438 containerd[1478]: time="2025-05-17T00:06:51.530500328Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 17 00:06:51.531438 containerd[1478]: time="2025-05-17T00:06:51.530517490Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 17 00:06:51.531438 containerd[1478]: time="2025-05-17T00:06:51.530530585Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 17 00:06:51.531438 containerd[1478]: time="2025-05-17T00:06:51.530545713Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 17 00:06:51.531438 containerd[1478]: time="2025-05-17T00:06:51.530560191Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 17 00:06:51.531438 containerd[1478]: time="2025-05-17T00:06:51.530574059Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 17 00:06:51.531438 containerd[1478]: time="2025-05-17T00:06:51.530586667Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 17 00:06:51.531438 containerd[1478]: time="2025-05-17T00:06:51.530610783Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 17 00:06:51.531438 containerd[1478]: time="2025-05-17T00:06:51.530638153Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 17 00:06:51.531438 containerd[1478]: time="2025-05-17T00:06:51.530653769Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." 
type=io.containerd.grpc.v1 May 17 00:06:51.531438 containerd[1478]: time="2025-05-17T00:06:51.530666336Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 17 00:06:51.531438 containerd[1478]: time="2025-05-17T00:06:51.530679187Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 17 00:06:51.531438 containerd[1478]: time="2025-05-17T00:06:51.530708509Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 17 00:06:51.530994 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). May 17 00:06:51.531778 containerd[1478]: time="2025-05-17T00:06:51.530724329Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 17 00:06:51.531778 containerd[1478]: time="2025-05-17T00:06:51.530735920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 17 00:06:51.531778 containerd[1478]: time="2025-05-17T00:06:51.530750601Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 17 00:06:51.531778 containerd[1478]: time="2025-05-17T00:06:51.530849222Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 May 17 00:06:51.531778 containerd[1478]: time="2025-05-17T00:06:51.530865448Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 May 17 00:06:51.531778 containerd[1478]: time="2025-05-17T00:06:51.530894526Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 17 00:06:51.531778 containerd[1478]: time="2025-05-17T00:06:51.530907499Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 May 17 00:06:51.531778 containerd[1478]: time="2025-05-17T00:06:51.530920757Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 17 00:06:51.531778 containerd[1478]: time="2025-05-17T00:06:51.530937716Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 May 17 00:06:51.531778 containerd[1478]: time="2025-05-17T00:06:51.530961466Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 May 17 00:06:51.531778 containerd[1478]: time="2025-05-17T00:06:51.530973911Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 17 00:06:51.531778 containerd[1478]: time="2025-05-17T00:06:51.530984973Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 17 00:06:51.533634 containerd[1478]: time="2025-05-17T00:06:51.533506413Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 17 00:06:51.535173 containerd[1478]: time="2025-05-17T00:06:51.534073005Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 May 17 00:06:51.535173 containerd[1478]: time="2025-05-17T00:06:51.534095983Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." 
type=io.containerd.internal.v1 May 17 00:06:51.535173 containerd[1478]: time="2025-05-17T00:06:51.534108834Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 May 17 00:06:51.535173 containerd[1478]: time="2025-05-17T00:06:51.534118432Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 17 00:06:51.535173 containerd[1478]: time="2025-05-17T00:06:51.534134374Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 May 17 00:06:51.535173 containerd[1478]: time="2025-05-17T00:06:51.534144948Z" level=info msg="NRI interface is disabled by configuration." May 17 00:06:51.535173 containerd[1478]: time="2025-05-17T00:06:51.534154911Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 May 17 00:06:51.535201 systemd[1]: Finished sshkeys.service. May 17 00:06:51.536135 containerd[1478]: time="2025-05-17T00:06:51.535426002Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri 
StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 17 00:06:51.536135 containerd[1478]: time="2025-05-17T00:06:51.535540524Z" level=info msg="Connect containerd service" May 17 00:06:51.536135 containerd[1478]: time="2025-05-17T00:06:51.535582535Z" level=info msg="using legacy CRI server" May 17 00:06:51.536135 containerd[1478]: time="2025-05-17T00:06:51.535589774Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 17 00:06:51.536135 containerd[1478]: time="2025-05-17T00:06:51.535703889Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 17 00:06:51.537696 containerd[1478]: time="2025-05-17T00:06:51.537419445Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 17 00:06:51.538386 containerd[1478]: time="2025-05-17T00:06:51.538352744Z" level=info msg="Start subscribing containerd event" May 17 00:06:51.538689 containerd[1478]: time="2025-05-17T00:06:51.538671950Z" level=info msg="Start recovering state" May 17 00:06:51.538775 containerd[1478]: time="2025-05-17T00:06:51.538743526Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 17 00:06:51.538805 containerd[1478]: time="2025-05-17T00:06:51.538797086Z" level=info msg=serving... address=/run/containerd/containerd.sock May 17 00:06:51.538945 containerd[1478]: time="2025-05-17T00:06:51.538929909Z" level=info msg="Start event monitor" May 17 00:06:51.542387 containerd[1478]: time="2025-05-17T00:06:51.542361468Z" level=info msg="Start snapshots syncer" May 17 00:06:51.542395 locksmithd[1488]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 17 00:06:51.543775 containerd[1478]: time="2025-05-17T00:06:51.543028836Z" level=info msg="Start cni network conf syncer for default" May 17 00:06:51.543775 containerd[1478]: time="2025-05-17T00:06:51.543050878Z" level=info msg="Start streaming server" May 17 00:06:51.543775 containerd[1478]: time="2025-05-17T00:06:51.543206801Z" level=info msg="containerd successfully booted in 0.065967s" May 17 00:06:51.543298 systemd[1]: Started containerd.service - containerd container runtime. May 17 00:06:51.652665 systemd-networkd[1361]: eth0: Gained IPv6LL May 17 00:06:51.657895 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 17 00:06:51.659173 systemd[1]: Reached target network-online.target - Network is Online. May 17 00:06:51.670710 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:06:51.676758 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 17 00:06:51.725354 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 17 00:06:52.146744 tar[1468]: linux-arm64/README.md May 17 00:06:52.166231 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 17 00:06:52.356708 systemd-networkd[1361]: eth1: Gained IPv6LL May 17 00:06:52.511642 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
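The giant "Start cri plugin with config" dump above is containerd echoing its effective CRI configuration: overlayfs snapshotter, runc via io.containerd.runc.v2 with SystemdCgroup:true, sandbox image registry.k8s.io/pause:3.8, and CNI expected under /opt/cni/bin and /etc/cni/net.d (hence the "failed to load cni during init" error until a network plugin drops a config there). A hedged sketch of a config.toml fragment that would yield those same values on a generic containerd 1.7 install; this is not the file Flatcar actually ships:

    # Hypothetical /etc/containerd/config.toml fragment (containerd 1.7, config v2)
    cat <<'EOF' >/etc/containerd/config.toml
    version = 2
    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.8"
    [plugins."io.containerd.grpc.v1.cri".containerd]
      snapshotter = "overlayfs"
      default_runtime_name = "runc"
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
      runtime_type = "io.containerd.runc.v2"
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
      SystemdCgroup = true
    EOF
    systemctl restart containerd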
May 17 00:06:52.522389 (kubelet)[1561]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 00:06:53.040425 kubelet[1561]: E0517 00:06:53.040367 1561 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:06:53.042989 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:06:53.043922 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:06:53.475355 sshd_keygen[1491]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 17 00:06:53.497089 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 17 00:06:53.512311 systemd[1]: Starting issuegen.service - Generate /run/issue... May 17 00:06:53.522735 systemd[1]: issuegen.service: Deactivated successfully. May 17 00:06:53.522987 systemd[1]: Finished issuegen.service - Generate /run/issue. May 17 00:06:53.531988 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 17 00:06:53.542427 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 17 00:06:53.548166 systemd[1]: Started getty@tty1.service - Getty on tty1. May 17 00:06:53.552392 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. May 17 00:06:53.553616 systemd[1]: Reached target getty.target - Login Prompts. May 17 00:06:53.554298 systemd[1]: Reached target multi-user.target - Multi-User System. May 17 00:06:53.555770 systemd[1]: Startup finished in 782ms (kernel) + 6.612s (initrd) + 5.291s (userspace) = 12.686s. May 17 00:07:03.294146 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 17 00:07:03.302863 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:07:03.411137 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:07:03.422008 (kubelet)[1597]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 00:07:03.468340 kubelet[1597]: E0517 00:07:03.468259 1597 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:07:03.471669 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:07:03.471863 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:07:13.722629 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 17 00:07:13.732704 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:07:13.856851 (kubelet)[1612]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 00:07:13.858045 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
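This is the first iteration of a kubelet crash loop that recurs every ten seconds or so for the remainder of the log: the unit is enabled at boot, but /var/lib/kubelet/config.yaml does not exist until the node is actually bootstrapped, so run.go exits 1 and systemd's restart logic schedules another attempt. The unset KUBELET_EXTRA_ARGS / KUBELET_KUBEADM_ARGS warning is harmless; it only means the optional environment files are absent. On a kubeadm-style setup the loop ends once init or join writes the config; a hedged diagnostic sketch, with placeholders left as placeholders:

    # Confirm the cause and watch the restart counter climb
    ls -l /var/lib/kubelet/config.yaml
    systemctl status kubelet

    # kubeadm writes /var/lib/kubelet/config.yaml during bootstrap
    # (hypothetical endpoint/token/hash, not values from this log)
    kubeadm join <control-plane-endpoint> --token <token> \
        --discovery-token-ca-cert-hash sha256:<hash>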
May 17 00:07:13.894785 kubelet[1612]: E0517 00:07:13.894717 1612 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:07:13.898392 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:07:13.898614 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:07:24.148950 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. May 17 00:07:24.155117 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:07:24.284933 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:07:24.286720 (kubelet)[1627]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 00:07:24.335883 kubelet[1627]: E0517 00:07:24.335796 1627 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:07:24.338607 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:07:24.338771 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:07:34.505699 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. May 17 00:07:34.519831 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:07:34.652773 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:07:34.653004 (kubelet)[1642]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 00:07:34.703381 kubelet[1642]: E0517 00:07:34.703285 1642 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:07:34.706179 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:07:34.706404 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:07:36.871203 update_engine[1459]: I20250517 00:07:36.870527 1459 update_attempter.cc:509] Updating boot flags... May 17 00:07:36.919491 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 43 scanned by (udev-worker) (1658) May 17 00:07:36.973490 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 43 scanned by (udev-worker) (1659) May 17 00:07:37.037479 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 43 scanned by (udev-worker) (1659) May 17 00:07:44.756291 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. May 17 00:07:44.763820 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:07:44.918706 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 17 00:07:44.921598 (kubelet)[1678]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 00:07:44.967311 kubelet[1678]: E0517 00:07:44.967217 1678 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:07:44.969779 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:07:44.969941 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:07:55.005973 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. May 17 00:07:55.014814 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:07:55.139675 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:07:55.144585 (kubelet)[1693]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 00:07:55.186013 kubelet[1693]: E0517 00:07:55.185955 1693 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:07:55.188751 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:07:55.188937 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:08:05.256223 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. May 17 00:08:05.262753 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:08:05.403055 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:08:05.413987 (kubelet)[1708]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 00:08:05.467507 kubelet[1708]: E0517 00:08:05.467396 1708 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:08:05.471000 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:08:05.471175 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:08:15.505985 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. May 17 00:08:15.513930 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:08:15.652857 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 17 00:08:15.658240 (kubelet)[1723]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 00:08:15.699607 kubelet[1723]: E0517 00:08:15.699409 1723 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:08:15.703411 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:08:15.703708 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:08:25.756360 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. May 17 00:08:25.766817 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:08:25.893279 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:08:25.908136 (kubelet)[1738]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 00:08:25.956029 kubelet[1738]: E0517 00:08:25.955949 1738 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:08:25.960575 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:08:25.961039 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:08:31.037305 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 17 00:08:31.045850 systemd[1]: Started sshd@0-168.119.99.67:22-139.178.68.195:37720.service - OpenSSH per-connection server daemon (139.178.68.195:37720). May 17 00:08:32.024906 sshd[1745]: Accepted publickey for core from 139.178.68.195 port 37720 ssh2: RSA SHA256:3DH0lUPdHwyQJn3I0ENA7R+xFRfXGfTtJFJ5l1PYReI May 17 00:08:32.027882 sshd[1745]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:08:32.039725 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 17 00:08:32.046959 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 17 00:08:32.052953 systemd-logind[1457]: New session 1 of user core. May 17 00:08:32.062760 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 17 00:08:32.072347 systemd[1]: Starting user@500.service - User Manager for UID 500... May 17 00:08:32.076502 (systemd)[1749]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 17 00:08:32.183616 systemd[1749]: Queued start job for default target default.target. May 17 00:08:32.194905 systemd[1749]: Created slice app.slice - User Application Slice. May 17 00:08:32.195106 systemd[1749]: Reached target paths.target - Paths. May 17 00:08:32.195201 systemd[1749]: Reached target timers.target - Timers. May 17 00:08:32.197005 systemd[1749]: Starting dbus.socket - D-Bus User Message Bus Socket... May 17 00:08:32.215933 systemd[1749]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 17 00:08:32.216168 systemd[1749]: Reached target sockets.target - Sockets. 
May 17 00:08:32.216198 systemd[1749]: Reached target basic.target - Basic System. May 17 00:08:32.216412 systemd[1]: Started user@500.service - User Manager for UID 500. May 17 00:08:32.217001 systemd[1749]: Reached target default.target - Main User Target. May 17 00:08:32.217059 systemd[1749]: Startup finished in 134ms. May 17 00:08:32.226780 systemd[1]: Started session-1.scope - Session 1 of User core. May 17 00:08:32.916634 systemd[1]: Started sshd@1-168.119.99.67:22-139.178.68.195:37734.service - OpenSSH per-connection server daemon (139.178.68.195:37734). May 17 00:08:33.923112 sshd[1760]: Accepted publickey for core from 139.178.68.195 port 37734 ssh2: RSA SHA256:3DH0lUPdHwyQJn3I0ENA7R+xFRfXGfTtJFJ5l1PYReI May 17 00:08:33.925560 sshd[1760]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:08:33.930968 systemd-logind[1457]: New session 2 of user core. May 17 00:08:33.940707 systemd[1]: Started session-2.scope - Session 2 of User core. May 17 00:08:34.610883 sshd[1760]: pam_unix(sshd:session): session closed for user core May 17 00:08:34.616593 systemd[1]: sshd@1-168.119.99.67:22-139.178.68.195:37734.service: Deactivated successfully. May 17 00:08:34.620118 systemd[1]: session-2.scope: Deactivated successfully. May 17 00:08:34.621368 systemd-logind[1457]: Session 2 logged out. Waiting for processes to exit. May 17 00:08:34.622947 systemd-logind[1457]: Removed session 2. May 17 00:08:34.792972 systemd[1]: Started sshd@2-168.119.99.67:22-139.178.68.195:42228.service - OpenSSH per-connection server daemon (139.178.68.195:42228). May 17 00:08:35.784906 sshd[1767]: Accepted publickey for core from 139.178.68.195 port 42228 ssh2: RSA SHA256:3DH0lUPdHwyQJn3I0ENA7R+xFRfXGfTtJFJ5l1PYReI May 17 00:08:35.787105 sshd[1767]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:08:35.791980 systemd-logind[1457]: New session 3 of user core. May 17 00:08:35.798752 systemd[1]: Started session-3.scope - Session 3 of User core. May 17 00:08:36.005809 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10. May 17 00:08:36.018862 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:08:36.140496 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:08:36.159064 (kubelet)[1777]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 00:08:36.204513 kubelet[1777]: E0517 00:08:36.204462 1777 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:08:36.207480 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:08:36.207827 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:08:36.472626 sshd[1767]: pam_unix(sshd:session): session closed for user core May 17 00:08:36.477294 systemd[1]: sshd@2-168.119.99.67:22-139.178.68.195:42228.service: Deactivated successfully. May 17 00:08:36.478966 systemd[1]: session-3.scope: Deactivated successfully. May 17 00:08:36.480759 systemd-logind[1457]: Session 3 logged out. Waiting for processes to exit. May 17 00:08:36.482675 systemd-logind[1457]: Removed session 3. 
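Sessions 1-3 above trace the full logind lifecycle: sshd accepts a publickey for user core, pam_unix opens the session, user-runtime-dir@500 and user@500.service bring up a per-user systemd instance (the systemd[1749] lines), and each login runs in its own session-N.scope until the client disconnects. The equivalent live state could be read with loginctl; standard verbs, not from the log:

    # Sessions, owning users and seats
    loginctl list-sessions

    # Details of one session: scope unit, remote host, idle state
    loginctl show-session 1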
May 17 00:08:36.660072 systemd[1]: Started sshd@3-168.119.99.67:22-139.178.68.195:42240.service - OpenSSH per-connection server daemon (139.178.68.195:42240). May 17 00:08:37.659871 sshd[1789]: Accepted publickey for core from 139.178.68.195 port 42240 ssh2: RSA SHA256:3DH0lUPdHwyQJn3I0ENA7R+xFRfXGfTtJFJ5l1PYReI May 17 00:08:37.662116 sshd[1789]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:08:37.667553 systemd-logind[1457]: New session 4 of user core. May 17 00:08:37.675742 systemd[1]: Started session-4.scope - Session 4 of User core. May 17 00:08:38.355399 sshd[1789]: pam_unix(sshd:session): session closed for user core May 17 00:08:38.360046 systemd[1]: sshd@3-168.119.99.67:22-139.178.68.195:42240.service: Deactivated successfully. May 17 00:08:38.361929 systemd[1]: session-4.scope: Deactivated successfully. May 17 00:08:38.364069 systemd-logind[1457]: Session 4 logged out. Waiting for processes to exit. May 17 00:08:38.365384 systemd-logind[1457]: Removed session 4. May 17 00:08:38.527901 systemd[1]: Started sshd@4-168.119.99.67:22-139.178.68.195:42244.service - OpenSSH per-connection server daemon (139.178.68.195:42244). May 17 00:08:39.505995 sshd[1796]: Accepted publickey for core from 139.178.68.195 port 42244 ssh2: RSA SHA256:3DH0lUPdHwyQJn3I0ENA7R+xFRfXGfTtJFJ5l1PYReI May 17 00:08:39.508492 sshd[1796]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:08:39.516328 systemd-logind[1457]: New session 5 of user core. May 17 00:08:39.525807 systemd[1]: Started session-5.scope - Session 5 of User core. May 17 00:08:40.038041 sudo[1799]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 17 00:08:40.038326 sudo[1799]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 17 00:08:40.054101 sudo[1799]: pam_unix(sudo:session): session closed for user root May 17 00:08:40.213526 sshd[1796]: pam_unix(sshd:session): session closed for user core May 17 00:08:40.219892 systemd[1]: sshd@4-168.119.99.67:22-139.178.68.195:42244.service: Deactivated successfully. May 17 00:08:40.222306 systemd[1]: session-5.scope: Deactivated successfully. May 17 00:08:40.223169 systemd-logind[1457]: Session 5 logged out. Waiting for processes to exit. May 17 00:08:40.224312 systemd-logind[1457]: Removed session 5. May 17 00:08:40.396087 systemd[1]: Started sshd@5-168.119.99.67:22-139.178.68.195:42254.service - OpenSSH per-connection server daemon (139.178.68.195:42254). May 17 00:08:41.377393 sshd[1804]: Accepted publickey for core from 139.178.68.195 port 42254 ssh2: RSA SHA256:3DH0lUPdHwyQJn3I0ENA7R+xFRfXGfTtJFJ5l1PYReI May 17 00:08:41.379816 sshd[1804]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:08:41.386688 systemd-logind[1457]: New session 6 of user core. May 17 00:08:41.392851 systemd[1]: Started session-6.scope - Session 6 of User core. 
May 17 00:08:41.899750 sudo[1808]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 17 00:08:41.900019 sudo[1808]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 17 00:08:41.906352 sudo[1808]: pam_unix(sudo:session): session closed for user root May 17 00:08:41.912235 sudo[1807]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules May 17 00:08:41.912538 sudo[1807]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 17 00:08:41.927749 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... May 17 00:08:41.930471 auditctl[1811]: No rules May 17 00:08:41.930807 systemd[1]: audit-rules.service: Deactivated successfully. May 17 00:08:41.931022 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. May 17 00:08:41.938250 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... May 17 00:08:41.963252 augenrules[1829]: No rules May 17 00:08:41.964126 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. May 17 00:08:41.965343 sudo[1807]: pam_unix(sudo:session): session closed for user root May 17 00:08:42.125870 sshd[1804]: pam_unix(sshd:session): session closed for user core May 17 00:08:42.130150 systemd-logind[1457]: Session 6 logged out. Waiting for processes to exit. May 17 00:08:42.131162 systemd[1]: sshd@5-168.119.99.67:22-139.178.68.195:42254.service: Deactivated successfully. May 17 00:08:42.134074 systemd[1]: session-6.scope: Deactivated successfully. May 17 00:08:42.136363 systemd-logind[1457]: Removed session 6. May 17 00:08:42.308101 systemd[1]: Started sshd@6-168.119.99.67:22-139.178.68.195:42262.service - OpenSSH per-connection server daemon (139.178.68.195:42262). May 17 00:08:43.299837 sshd[1837]: Accepted publickey for core from 139.178.68.195 port 42262 ssh2: RSA SHA256:3DH0lUPdHwyQJn3I0ENA7R+xFRfXGfTtJFJ5l1PYReI May 17 00:08:43.301793 sshd[1837]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:08:43.307176 systemd-logind[1457]: New session 7 of user core. May 17 00:08:43.318838 systemd[1]: Started session-7.scope - Session 7 of User core. May 17 00:08:43.827729 sudo[1840]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 17 00:08:43.828011 sudo[1840]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 17 00:08:44.119906 (dockerd)[1856]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 17 00:08:44.120059 systemd[1]: Starting docker.service - Docker Application Container Engine... May 17 00:08:44.362372 dockerd[1856]: time="2025-05-17T00:08:44.361949476Z" level=info msg="Starting up" May 17 00:08:44.434424 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1012018066-merged.mount: Deactivated successfully. May 17 00:08:44.451586 systemd[1]: var-lib-docker-metacopy\x2dcheck700641520-merged.mount: Deactivated successfully. May 17 00:08:44.460077 dockerd[1856]: time="2025-05-17T00:08:44.460028239Z" level=info msg="Loading containers: start." May 17 00:08:44.555494 kernel: Initializing XFRM netlink socket May 17 00:08:44.637306 systemd-networkd[1361]: docker0: Link UP May 17 00:08:44.661255 dockerd[1856]: time="2025-05-17T00:08:44.661195429Z" level=info msg="Loading containers: done." 
May 17 00:08:44.676501 dockerd[1856]: time="2025-05-17T00:08:44.676370386Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 17 00:08:44.677107 dockerd[1856]: time="2025-05-17T00:08:44.676752025Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 May 17 00:08:44.677107 dockerd[1856]: time="2025-05-17T00:08:44.676903025Z" level=info msg="Daemon has completed initialization" May 17 00:08:44.728373 dockerd[1856]: time="2025-05-17T00:08:44.727855481Z" level=info msg="API listen on /run/docker.sock" May 17 00:08:44.728706 systemd[1]: Started docker.service - Docker Application Container Engine. May 17 00:08:45.470826 containerd[1478]: time="2025-05-17T00:08:45.470767030Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.1\"" May 17 00:08:46.106129 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4223016109.mount: Deactivated successfully. May 17 00:08:46.255645 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11. May 17 00:08:46.263818 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:08:46.396844 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:08:46.409857 (kubelet)[2027]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 00:08:46.452367 kubelet[2027]: E0517 00:08:46.452290 2027 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:08:46.454287 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:08:46.454420 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
May 17 00:08:48.640421 containerd[1478]: time="2025-05-17T00:08:48.640352316Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:08:48.641524 containerd[1478]: time="2025-05-17T00:08:48.641483756Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.1: active requests=0, bytes read=27349442" May 17 00:08:48.642386 containerd[1478]: time="2025-05-17T00:08:48.642337916Z" level=info msg="ImageCreate event name:\"sha256:9a2b7cf4f8540534c6ec5b758462c6d7885c6e734652172078bba899c0e3089a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:08:48.647256 containerd[1478]: time="2025-05-17T00:08:48.647182915Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:d8ae2fb01c39aa1c7add84f3d54425cf081c24c11e3946830292a8cfa4293548\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:08:48.649074 containerd[1478]: time="2025-05-17T00:08:48.648820555Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.1\" with image id \"sha256:9a2b7cf4f8540534c6ec5b758462c6d7885c6e734652172078bba899c0e3089a\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.1\", repo digest \"registry.k8s.io/kube-apiserver@sha256:d8ae2fb01c39aa1c7add84f3d54425cf081c24c11e3946830292a8cfa4293548\", size \"27346150\" in 3.178001765s" May 17 00:08:48.649074 containerd[1478]: time="2025-05-17T00:08:48.648863115Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.1\" returns image reference \"sha256:9a2b7cf4f8540534c6ec5b758462c6d7885c6e734652172078bba899c0e3089a\"" May 17 00:08:48.650728 containerd[1478]: time="2025-05-17T00:08:48.650699555Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.1\"" May 17 00:08:51.403079 containerd[1478]: time="2025-05-17T00:08:51.402839842Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:08:51.404657 containerd[1478]: time="2025-05-17T00:08:51.404617045Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.1: active requests=0, bytes read=23531755" May 17 00:08:51.405736 containerd[1478]: time="2025-05-17T00:08:51.405628847Z" level=info msg="ImageCreate event name:\"sha256:674996a72aa5900cbbbcd410437021fa4c62a7f829a56f58eb23ac430f2ae383\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:08:51.409876 containerd[1478]: time="2025-05-17T00:08:51.409798134Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7c9bea694e3a3c01ed6a5ee02d55a6124cc08e0b2eec6caa33f2c396b8cbc3f8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:08:51.411254 containerd[1478]: time="2025-05-17T00:08:51.411111376Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.1\" with image id \"sha256:674996a72aa5900cbbbcd410437021fa4c62a7f829a56f58eb23ac430f2ae383\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7c9bea694e3a3c01ed6a5ee02d55a6124cc08e0b2eec6caa33f2c396b8cbc3f8\", size \"25086427\" in 2.760379501s" May 17 00:08:51.411254 containerd[1478]: time="2025-05-17T00:08:51.411154336Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.1\" returns image reference \"sha256:674996a72aa5900cbbbcd410437021fa4c62a7f829a56f58eb23ac430f2ae383\"" May 17 00:08:51.411804 
containerd[1478]: time="2025-05-17T00:08:51.411746817Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.1\"" May 17 00:08:53.433476 containerd[1478]: time="2025-05-17T00:08:53.433343414Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:08:53.434683 containerd[1478]: time="2025-05-17T00:08:53.434627938Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.1: active requests=0, bytes read=18293751" May 17 00:08:53.435752 containerd[1478]: time="2025-05-17T00:08:53.435699701Z" level=info msg="ImageCreate event name:\"sha256:014094c90caacf743dc5fb4281363492da1df31cd8218aeceab3be3326277d2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:08:53.439322 containerd[1478]: time="2025-05-17T00:08:53.439258791Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:395b7de7cdbdcc3c3a3db270844a3f71d757e2447a1e4db76b4cce46fba7fd55\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:08:53.440604 containerd[1478]: time="2025-05-17T00:08:53.440569714Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.1\" with image id \"sha256:014094c90caacf743dc5fb4281363492da1df31cd8218aeceab3be3326277d2e\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.1\", repo digest \"registry.k8s.io/kube-scheduler@sha256:395b7de7cdbdcc3c3a3db270844a3f71d757e2447a1e4db76b4cce46fba7fd55\", size \"19848441\" in 2.028661617s" May 17 00:08:53.440831 containerd[1478]: time="2025-05-17T00:08:53.440692435Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.1\" returns image reference \"sha256:014094c90caacf743dc5fb4281363492da1df31cd8218aeceab3be3326277d2e\"" May 17 00:08:53.441370 containerd[1478]: time="2025-05-17T00:08:53.441182956Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.1\"" May 17 00:08:54.738980 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount21824404.mount: Deactivated successfully. 
May 17 00:08:55.094468 containerd[1478]: time="2025-05-17T00:08:55.094369969Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:08:55.096470 containerd[1478]: time="2025-05-17T00:08:55.096405096Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.1: active requests=0, bytes read=28196030" May 17 00:08:55.098480 containerd[1478]: time="2025-05-17T00:08:55.098087943Z" level=info msg="ImageCreate event name:\"sha256:3e58848989f556e36aa29d7852ab1712163960651e074d11cae9d31fb27192db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:08:55.102862 containerd[1478]: time="2025-05-17T00:08:55.102806281Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7ddf379897139ae8ade8b33cb9373b70c632a4d5491da6e234f5d830e0a50807\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:08:55.104758 containerd[1478]: time="2025-05-17T00:08:55.104724608Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.1\" with image id \"sha256:3e58848989f556e36aa29d7852ab1712163960651e074d11cae9d31fb27192db\", repo tag \"registry.k8s.io/kube-proxy:v1.33.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:7ddf379897139ae8ade8b33cb9373b70c632a4d5491da6e234f5d830e0a50807\", size \"28195023\" in 1.663513892s" May 17 00:08:55.105154 containerd[1478]: time="2025-05-17T00:08:55.104869409Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.1\" returns image reference \"sha256:3e58848989f556e36aa29d7852ab1712163960651e074d11cae9d31fb27192db\"" May 17 00:08:55.105686 containerd[1478]: time="2025-05-17T00:08:55.105408011Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" May 17 00:08:55.691140 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3039305118.mount: Deactivated successfully. May 17 00:08:56.505370 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12. May 17 00:08:56.511881 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:08:56.649587 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:08:56.665383 (kubelet)[2138]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 00:08:56.712292 kubelet[2138]: E0517 00:08:56.712228 2138 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:08:56.714870 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:08:56.715028 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
May 17 00:08:57.269863 containerd[1478]: time="2025-05-17T00:08:57.269812174Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:08:57.271842 containerd[1478]: time="2025-05-17T00:08:57.271795063Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152209" May 17 00:08:57.273695 containerd[1478]: time="2025-05-17T00:08:57.273640272Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:08:57.277869 containerd[1478]: time="2025-05-17T00:08:57.277826772Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:08:57.280253 containerd[1478]: time="2025-05-17T00:08:57.280205744Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 2.174739613s" May 17 00:08:57.280253 containerd[1478]: time="2025-05-17T00:08:57.280247744Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\"" May 17 00:08:57.280782 containerd[1478]: time="2025-05-17T00:08:57.280695946Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 17 00:08:57.833087 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2396636793.mount: Deactivated successfully. 
May 17 00:08:57.842196 containerd[1478]: time="2025-05-17T00:08:57.841352053Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:08:57.843201 containerd[1478]: time="2025-05-17T00:08:57.843168461Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268723" May 17 00:08:57.845620 containerd[1478]: time="2025-05-17T00:08:57.844411147Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:08:57.847143 containerd[1478]: time="2025-05-17T00:08:57.847108760Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:08:57.848854 containerd[1478]: time="2025-05-17T00:08:57.848825609Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 568.097263ms" May 17 00:08:57.848966 containerd[1478]: time="2025-05-17T00:08:57.848950489Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" May 17 00:08:57.849597 containerd[1478]: time="2025-05-17T00:08:57.849574012Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" May 17 00:09:02.434874 containerd[1478]: time="2025-05-17T00:09:02.434623583Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:09:02.438502 containerd[1478]: time="2025-05-17T00:09:02.437557323Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=69230195" May 17 00:09:02.438502 containerd[1478]: time="2025-05-17T00:09:02.437889126Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:09:02.445564 containerd[1478]: time="2025-05-17T00:09:02.445499459Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:09:02.447112 containerd[1478]: time="2025-05-17T00:09:02.447037150Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 4.597349977s" May 17 00:09:02.447112 containerd[1478]: time="2025-05-17T00:09:02.447095470Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\"" May 17 00:09:06.756384 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 13. May 17 00:09:06.769171 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
May 17 00:09:06.905749 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:09:06.906829 (kubelet)[2193]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 00:09:06.948338 kubelet[2193]: E0517 00:09:06.948292 2193 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:09:06.953132 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:09:06.953265 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:09:07.364023 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:09:07.371878 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:09:07.409340 systemd[1]: Reloading requested from client PID 2207 ('systemctl') (unit session-7.scope)... May 17 00:09:07.409356 systemd[1]: Reloading... May 17 00:09:07.520546 zram_generator::config[2247]: No configuration found. May 17 00:09:07.638534 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:09:07.722699 systemd[1]: Reloading finished in 312 ms. May 17 00:09:07.769177 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 17 00:09:07.769260 systemd[1]: kubelet.service: Failed with result 'signal'. May 17 00:09:07.769686 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:09:07.772783 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:09:07.913688 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:09:07.927039 (kubelet)[2295]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 17 00:09:07.972303 kubelet[2295]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 00:09:07.972751 kubelet[2295]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 17 00:09:07.972798 kubelet[2295]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 17 00:09:07.972953 kubelet[2295]: I0517 00:09:07.972919 2295 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 17 00:09:08.296658 kubelet[2295]: I0517 00:09:08.296608 2295 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" May 17 00:09:08.296658 kubelet[2295]: I0517 00:09:08.296644 2295 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 17 00:09:08.296931 kubelet[2295]: I0517 00:09:08.296886 2295 server.go:956] "Client rotation is on, will bootstrap in background" May 17 00:09:08.321165 kubelet[2295]: E0517 00:09:08.321103 2295 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://168.119.99.67:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 168.119.99.67:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" May 17 00:09:08.321692 kubelet[2295]: I0517 00:09:08.321647 2295 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 17 00:09:08.335324 kubelet[2295]: E0517 00:09:08.335100 2295 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 17 00:09:08.335324 kubelet[2295]: I0517 00:09:08.335148 2295 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 17 00:09:08.337943 kubelet[2295]: I0517 00:09:08.337900 2295 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 17 00:09:08.340192 kubelet[2295]: I0517 00:09:08.340122 2295 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 17 00:09:08.340457 kubelet[2295]: I0517 00:09:08.340189 2295 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-3-n-3b0dbcbd78","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 17 00:09:08.340556 kubelet[2295]: I0517 00:09:08.340541 2295 topology_manager.go:138] "Creating topology manager with none policy" May 17 00:09:08.340585 kubelet[2295]: I0517 00:09:08.340559 2295 container_manager_linux.go:303] "Creating device plugin manager" May 17 00:09:08.340842 kubelet[2295]: I0517 00:09:08.340806 2295 state_mem.go:36] "Initialized new in-memory state store" May 17 00:09:08.344647 kubelet[2295]: I0517 00:09:08.344373 2295 kubelet.go:480] "Attempting to sync node with API server" May 17 00:09:08.344647 kubelet[2295]: I0517 00:09:08.344410 2295 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" May 17 00:09:08.344647 kubelet[2295]: I0517 00:09:08.344459 2295 kubelet.go:386] "Adding apiserver pod source" May 17 00:09:08.344647 kubelet[2295]: I0517 00:09:08.344477 2295 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 17 00:09:08.349204 kubelet[2295]: I0517 00:09:08.349024 2295 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 17 00:09:08.350070 kubelet[2295]: I0517 00:09:08.349878 2295 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" May 17 00:09:08.350070 kubelet[2295]: W0517 00:09:08.350011 2295 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
May 17 00:09:08.352838 kubelet[2295]: I0517 00:09:08.352811 2295 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 17 00:09:08.352926 kubelet[2295]: I0517 00:09:08.352858 2295 server.go:1289] "Started kubelet" May 17 00:09:08.353094 kubelet[2295]: E0517 00:09:08.353060 2295 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://168.119.99.67:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-3-n-3b0dbcbd78&limit=500&resourceVersion=0\": dial tcp 168.119.99.67:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" May 17 00:09:08.355467 kubelet[2295]: E0517 00:09:08.355033 2295 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://168.119.99.67:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 168.119.99.67:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" May 17 00:09:08.355467 kubelet[2295]: I0517 00:09:08.355127 2295 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 17 00:09:08.355813 kubelet[2295]: I0517 00:09:08.355794 2295 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 17 00:09:08.357418 kubelet[2295]: I0517 00:09:08.357395 2295 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 17 00:09:08.360192 kubelet[2295]: E0517 00:09:08.358901 2295 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://168.119.99.67:6443/api/v1/namespaces/default/events\": dial tcp 168.119.99.67:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-3-n-3b0dbcbd78.184027eb9eaf2d3b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-3-n-3b0dbcbd78,UID:ci-4081-3-3-n-3b0dbcbd78,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-3-n-3b0dbcbd78,},FirstTimestamp:2025-05-17 00:09:08.352830779 +0000 UTC m=+0.421596515,LastTimestamp:2025-05-17 00:09:08.352830779 +0000 UTC m=+0.421596515,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-3-n-3b0dbcbd78,}" May 17 00:09:08.361071 kubelet[2295]: I0517 00:09:08.361022 2295 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 May 17 00:09:08.362093 kubelet[2295]: I0517 00:09:08.362063 2295 server.go:317] "Adding debug handlers to kubelet server" May 17 00:09:08.363055 kubelet[2295]: I0517 00:09:08.363020 2295 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 17 00:09:08.364939 kubelet[2295]: I0517 00:09:08.364919 2295 volume_manager.go:297] "Starting Kubelet Volume Manager" May 17 00:09:08.365975 kubelet[2295]: E0517 00:09:08.365948 2295 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-3-n-3b0dbcbd78\" not found" May 17 00:09:08.367650 kubelet[2295]: I0517 00:09:08.367330 2295 factory.go:223] Registration of the systemd container factory successfully May 17 00:09:08.367721 kubelet[2295]: I0517 00:09:08.367703 2295 factory.go:221] Registration of the crio container factory failed: Get 
"http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 17 00:09:08.368974 kubelet[2295]: E0517 00:09:08.368918 2295 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://168.119.99.67:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-3-n-3b0dbcbd78?timeout=10s\": dial tcp 168.119.99.67:6443: connect: connection refused" interval="200ms" May 17 00:09:08.369155 kubelet[2295]: E0517 00:09:08.369134 2295 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 17 00:09:08.369594 kubelet[2295]: I0517 00:09:08.369452 2295 reconciler.go:26] "Reconciler: start to sync state" May 17 00:09:08.369594 kubelet[2295]: I0517 00:09:08.369503 2295 desired_state_of_world_populator.go:150] "Desired state populator starts to run" May 17 00:09:08.369940 kubelet[2295]: E0517 00:09:08.369898 2295 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://168.119.99.67:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 168.119.99.67:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" May 17 00:09:08.370032 kubelet[2295]: I0517 00:09:08.370011 2295 factory.go:223] Registration of the containerd container factory successfully May 17 00:09:08.382266 kubelet[2295]: I0517 00:09:08.382180 2295 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" May 17 00:09:08.383875 kubelet[2295]: I0517 00:09:08.383819 2295 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" May 17 00:09:08.383875 kubelet[2295]: I0517 00:09:08.383849 2295 status_manager.go:230] "Starting to sync pod status with apiserver" May 17 00:09:08.384013 kubelet[2295]: I0517 00:09:08.383907 2295 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
May 17 00:09:08.384013 kubelet[2295]: I0517 00:09:08.383916 2295 kubelet.go:2436] "Starting kubelet main sync loop" May 17 00:09:08.384013 kubelet[2295]: E0517 00:09:08.383971 2295 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 17 00:09:08.393808 kubelet[2295]: E0517 00:09:08.393730 2295 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://168.119.99.67:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 168.119.99.67:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" May 17 00:09:08.396743 kubelet[2295]: I0517 00:09:08.396716 2295 cpu_manager.go:221] "Starting CPU manager" policy="none" May 17 00:09:08.396743 kubelet[2295]: I0517 00:09:08.396736 2295 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 17 00:09:08.396875 kubelet[2295]: I0517 00:09:08.396754 2295 state_mem.go:36] "Initialized new in-memory state store" May 17 00:09:08.399347 kubelet[2295]: I0517 00:09:08.399310 2295 policy_none.go:49] "None policy: Start" May 17 00:09:08.399347 kubelet[2295]: I0517 00:09:08.399346 2295 memory_manager.go:186] "Starting memorymanager" policy="None" May 17 00:09:08.399425 kubelet[2295]: I0517 00:09:08.399360 2295 state_mem.go:35] "Initializing new in-memory state store" May 17 00:09:08.406556 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 17 00:09:08.419025 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 17 00:09:08.423203 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 17 00:09:08.429105 kubelet[2295]: E0517 00:09:08.429030 2295 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" May 17 00:09:08.429340 kubelet[2295]: I0517 00:09:08.429304 2295 eviction_manager.go:189] "Eviction manager: starting control loop" May 17 00:09:08.429380 kubelet[2295]: I0517 00:09:08.429328 2295 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 17 00:09:08.431082 kubelet[2295]: I0517 00:09:08.430156 2295 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 17 00:09:08.432757 kubelet[2295]: E0517 00:09:08.432401 2295 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 17 00:09:08.432757 kubelet[2295]: E0517 00:09:08.432590 2295 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-3-n-3b0dbcbd78\" not found" May 17 00:09:08.503153 systemd[1]: Created slice kubepods-burstable-pod903a616354d1091aabb7077bc8493130.slice - libcontainer container kubepods-burstable-pod903a616354d1091aabb7077bc8493130.slice. May 17 00:09:08.523060 kubelet[2295]: E0517 00:09:08.522756 2295 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-3-n-3b0dbcbd78\" not found" node="ci-4081-3-3-n-3b0dbcbd78" May 17 00:09:08.525542 systemd[1]: Created slice kubepods-burstable-pod939c2715c52c6ddc83a413b7bde2dd3e.slice - libcontainer container kubepods-burstable-pod939c2715c52c6ddc83a413b7bde2dd3e.slice. 
May 17 00:09:08.532000 kubelet[2295]: I0517 00:09:08.531555 2295 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-3-n-3b0dbcbd78" May 17 00:09:08.532203 kubelet[2295]: E0517 00:09:08.532113 2295 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://168.119.99.67:6443/api/v1/nodes\": dial tcp 168.119.99.67:6443: connect: connection refused" node="ci-4081-3-3-n-3b0dbcbd78" May 17 00:09:08.534163 kubelet[2295]: E0517 00:09:08.534031 2295 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-3-n-3b0dbcbd78\" not found" node="ci-4081-3-3-n-3b0dbcbd78" May 17 00:09:08.537800 systemd[1]: Created slice kubepods-burstable-podccc1b1d82d7461ce626bdfc8455dadf4.slice - libcontainer container kubepods-burstable-podccc1b1d82d7461ce626bdfc8455dadf4.slice. May 17 00:09:08.540573 kubelet[2295]: E0517 00:09:08.540294 2295 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-3-n-3b0dbcbd78\" not found" node="ci-4081-3-3-n-3b0dbcbd78" May 17 00:09:08.570930 kubelet[2295]: I0517 00:09:08.570382 2295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ccc1b1d82d7461ce626bdfc8455dadf4-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-3-n-3b0dbcbd78\" (UID: \"ccc1b1d82d7461ce626bdfc8455dadf4\") " pod="kube-system/kube-apiserver-ci-4081-3-3-n-3b0dbcbd78" May 17 00:09:08.570930 kubelet[2295]: I0517 00:09:08.570482 2295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/903a616354d1091aabb7077bc8493130-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-3-n-3b0dbcbd78\" (UID: \"903a616354d1091aabb7077bc8493130\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-3b0dbcbd78" May 17 00:09:08.570930 kubelet[2295]: I0517 00:09:08.570543 2295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/903a616354d1091aabb7077bc8493130-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-3-n-3b0dbcbd78\" (UID: \"903a616354d1091aabb7077bc8493130\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-3b0dbcbd78" May 17 00:09:08.570930 kubelet[2295]: I0517 00:09:08.570570 2295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/939c2715c52c6ddc83a413b7bde2dd3e-kubeconfig\") pod \"kube-scheduler-ci-4081-3-3-n-3b0dbcbd78\" (UID: \"939c2715c52c6ddc83a413b7bde2dd3e\") " pod="kube-system/kube-scheduler-ci-4081-3-3-n-3b0dbcbd78" May 17 00:09:08.570930 kubelet[2295]: I0517 00:09:08.570596 2295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ccc1b1d82d7461ce626bdfc8455dadf4-ca-certs\") pod \"kube-apiserver-ci-4081-3-3-n-3b0dbcbd78\" (UID: \"ccc1b1d82d7461ce626bdfc8455dadf4\") " pod="kube-system/kube-apiserver-ci-4081-3-3-n-3b0dbcbd78" May 17 00:09:08.571220 kubelet[2295]: I0517 00:09:08.570626 2295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ccc1b1d82d7461ce626bdfc8455dadf4-k8s-certs\") pod \"kube-apiserver-ci-4081-3-3-n-3b0dbcbd78\" 
(UID: \"ccc1b1d82d7461ce626bdfc8455dadf4\") " pod="kube-system/kube-apiserver-ci-4081-3-3-n-3b0dbcbd78" May 17 00:09:08.571220 kubelet[2295]: I0517 00:09:08.570651 2295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/903a616354d1091aabb7077bc8493130-ca-certs\") pod \"kube-controller-manager-ci-4081-3-3-n-3b0dbcbd78\" (UID: \"903a616354d1091aabb7077bc8493130\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-3b0dbcbd78" May 17 00:09:08.571220 kubelet[2295]: I0517 00:09:08.570674 2295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/903a616354d1091aabb7077bc8493130-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-3-n-3b0dbcbd78\" (UID: \"903a616354d1091aabb7077bc8493130\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-3b0dbcbd78" May 17 00:09:08.571220 kubelet[2295]: I0517 00:09:08.570701 2295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/903a616354d1091aabb7077bc8493130-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-3-n-3b0dbcbd78\" (UID: \"903a616354d1091aabb7077bc8493130\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-3b0dbcbd78" May 17 00:09:08.571220 kubelet[2295]: E0517 00:09:08.571065 2295 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://168.119.99.67:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-3-n-3b0dbcbd78?timeout=10s\": dial tcp 168.119.99.67:6443: connect: connection refused" interval="400ms" May 17 00:09:08.735483 kubelet[2295]: I0517 00:09:08.735377 2295 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-3-n-3b0dbcbd78" May 17 00:09:08.736045 kubelet[2295]: E0517 00:09:08.736006 2295 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://168.119.99.67:6443/api/v1/nodes\": dial tcp 168.119.99.67:6443: connect: connection refused" node="ci-4081-3-3-n-3b0dbcbd78" May 17 00:09:08.824685 containerd[1478]: time="2025-05-17T00:09:08.824534739Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-3-n-3b0dbcbd78,Uid:903a616354d1091aabb7077bc8493130,Namespace:kube-system,Attempt:0,}" May 17 00:09:08.836570 containerd[1478]: time="2025-05-17T00:09:08.836456650Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-3-n-3b0dbcbd78,Uid:939c2715c52c6ddc83a413b7bde2dd3e,Namespace:kube-system,Attempt:0,}" May 17 00:09:08.841916 containerd[1478]: time="2025-05-17T00:09:08.841649698Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-3-n-3b0dbcbd78,Uid:ccc1b1d82d7461ce626bdfc8455dadf4,Namespace:kube-system,Attempt:0,}" May 17 00:09:08.911373 kubelet[2295]: E0517 00:09:08.911194 2295 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://168.119.99.67:6443/api/v1/namespaces/default/events\": dial tcp 168.119.99.67:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-3-n-3b0dbcbd78.184027eb9eaf2d3b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-3-n-3b0dbcbd78,UID:ci-4081-3-3-n-3b0dbcbd78,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-3-n-3b0dbcbd78,},FirstTimestamp:2025-05-17 00:09:08.352830779 +0000 UTC m=+0.421596515,LastTimestamp:2025-05-17 00:09:08.352830779 +0000 UTC m=+0.421596515,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-3-n-3b0dbcbd78,}" May 17 00:09:08.971926 kubelet[2295]: E0517 00:09:08.971868 2295 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://168.119.99.67:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-3-n-3b0dbcbd78?timeout=10s\": dial tcp 168.119.99.67:6443: connect: connection refused" interval="800ms" May 17 00:09:09.140364 kubelet[2295]: I0517 00:09:09.140227 2295 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-3-n-3b0dbcbd78" May 17 00:09:09.141200 kubelet[2295]: E0517 00:09:09.141096 2295 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://168.119.99.67:6443/api/v1/nodes\": dial tcp 168.119.99.67:6443: connect: connection refused" node="ci-4081-3-3-n-3b0dbcbd78" May 17 00:09:09.251979 kubelet[2295]: E0517 00:09:09.251902 2295 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://168.119.99.67:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-3-n-3b0dbcbd78&limit=500&resourceVersion=0\": dial tcp 168.119.99.67:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" May 17 00:09:09.380380 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3070793522.mount: Deactivated successfully. May 17 00:09:09.388459 containerd[1478]: time="2025-05-17T00:09:09.387337750Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 00:09:09.389599 containerd[1478]: time="2025-05-17T00:09:09.389562171Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193" May 17 00:09:09.392543 containerd[1478]: time="2025-05-17T00:09:09.392412079Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 00:09:09.395549 containerd[1478]: time="2025-05-17T00:09:09.395519868Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 17 00:09:09.395860 containerd[1478]: time="2025-05-17T00:09:09.395804591Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 00:09:09.397415 containerd[1478]: time="2025-05-17T00:09:09.397381006Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 00:09:09.398195 containerd[1478]: time="2025-05-17T00:09:09.398153174Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 17 00:09:09.401103 containerd[1478]: time="2025-05-17T00:09:09.401063281Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 00:09:09.403483 containerd[1478]: time="2025-05-17T00:09:09.403431744Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 566.829093ms" May 17 00:09:09.405246 containerd[1478]: time="2025-05-17T00:09:09.405071360Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 563.337782ms" May 17 00:09:09.407001 containerd[1478]: time="2025-05-17T00:09:09.406970858Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 582.332758ms" May 17 00:09:09.546014 containerd[1478]: time="2025-05-17T00:09:09.545879828Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:09:09.546014 containerd[1478]: time="2025-05-17T00:09:09.545948909Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:09:09.546458 containerd[1478]: time="2025-05-17T00:09:09.545982989Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:09:09.546458 containerd[1478]: time="2025-05-17T00:09:09.546086590Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:09:09.547117 containerd[1478]: time="2025-05-17T00:09:09.546731396Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:09:09.547117 containerd[1478]: time="2025-05-17T00:09:09.546779557Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:09:09.547341 containerd[1478]: time="2025-05-17T00:09:09.547283361Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:09:09.548004 containerd[1478]: time="2025-05-17T00:09:09.547903967Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:09:09.552404 containerd[1478]: time="2025-05-17T00:09:09.552285049Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:09:09.552404 containerd[1478]: time="2025-05-17T00:09:09.552344530Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:09:09.552404 containerd[1478]: time="2025-05-17T00:09:09.552356210Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:09:09.554080 containerd[1478]: time="2025-05-17T00:09:09.552433851Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:09:09.574232 systemd[1]: Started cri-containerd-428769dee9f3abda996c6fcda5db938ce38b0c5d51ca26f0b783ed6a1f6d3259.scope - libcontainer container 428769dee9f3abda996c6fcda5db938ce38b0c5d51ca26f0b783ed6a1f6d3259. May 17 00:09:09.583282 systemd[1]: Started cri-containerd-309d3843b96b7d05bc8b29d111cae1d929cfe4cdca5e2cd00ac8b6bede8f9c51.scope - libcontainer container 309d3843b96b7d05bc8b29d111cae1d929cfe4cdca5e2cd00ac8b6bede8f9c51. May 17 00:09:09.586597 systemd[1]: Started cri-containerd-68a042d4d74f52d5a063d39ebce613dc688e20bcc28761c91682f3fff39168f3.scope - libcontainer container 68a042d4d74f52d5a063d39ebce613dc688e20bcc28761c91682f3fff39168f3. May 17 00:09:09.639782 containerd[1478]: time="2025-05-17T00:09:09.639643286Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-3-n-3b0dbcbd78,Uid:ccc1b1d82d7461ce626bdfc8455dadf4,Namespace:kube-system,Attempt:0,} returns sandbox id \"309d3843b96b7d05bc8b29d111cae1d929cfe4cdca5e2cd00ac8b6bede8f9c51\"" May 17 00:09:09.644412 kubelet[2295]: E0517 00:09:09.643713 2295 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://168.119.99.67:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 168.119.99.67:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" May 17 00:09:09.646016 containerd[1478]: time="2025-05-17T00:09:09.645812585Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-3-n-3b0dbcbd78,Uid:903a616354d1091aabb7077bc8493130,Namespace:kube-system,Attempt:0,} returns sandbox id \"68a042d4d74f52d5a063d39ebce613dc688e20bcc28761c91682f3fff39168f3\"" May 17 00:09:09.648823 containerd[1478]: time="2025-05-17T00:09:09.648693372Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-3-n-3b0dbcbd78,Uid:939c2715c52c6ddc83a413b7bde2dd3e,Namespace:kube-system,Attempt:0,} returns sandbox id \"428769dee9f3abda996c6fcda5db938ce38b0c5d51ca26f0b783ed6a1f6d3259\"" May 17 00:09:09.648925 containerd[1478]: time="2025-05-17T00:09:09.648892534Z" level=info msg="CreateContainer within sandbox \"309d3843b96b7d05bc8b29d111cae1d929cfe4cdca5e2cd00ac8b6bede8f9c51\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 17 00:09:09.653700 containerd[1478]: time="2025-05-17T00:09:09.653660700Z" level=info msg="CreateContainer within sandbox \"68a042d4d74f52d5a063d39ebce613dc688e20bcc28761c91682f3fff39168f3\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 17 00:09:09.655945 containerd[1478]: time="2025-05-17T00:09:09.655902802Z" level=info msg="CreateContainer within sandbox \"428769dee9f3abda996c6fcda5db938ce38b0c5d51ca26f0b783ed6a1f6d3259\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 17 00:09:09.667692 containerd[1478]: time="2025-05-17T00:09:09.667642954Z" level=info msg="CreateContainer within sandbox \"309d3843b96b7d05bc8b29d111cae1d929cfe4cdca5e2cd00ac8b6bede8f9c51\" for 
&ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"883574d3bf7dd88b71a882337fad8f484c07ae9128f95a05114e404d20249587\"" May 17 00:09:09.668707 containerd[1478]: time="2025-05-17T00:09:09.668675364Z" level=info msg="StartContainer for \"883574d3bf7dd88b71a882337fad8f484c07ae9128f95a05114e404d20249587\"" May 17 00:09:09.677223 containerd[1478]: time="2025-05-17T00:09:09.677111645Z" level=info msg="CreateContainer within sandbox \"68a042d4d74f52d5a063d39ebce613dc688e20bcc28761c91682f3fff39168f3\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"2cf8d917647d7ce7df44ee8b808684d0f1175b323890700164aa1c088c2f2536\"" May 17 00:09:09.680545 containerd[1478]: time="2025-05-17T00:09:09.679191184Z" level=info msg="StartContainer for \"2cf8d917647d7ce7df44ee8b808684d0f1175b323890700164aa1c088c2f2536\"" May 17 00:09:09.690517 containerd[1478]: time="2025-05-17T00:09:09.690431492Z" level=info msg="CreateContainer within sandbox \"428769dee9f3abda996c6fcda5db938ce38b0c5d51ca26f0b783ed6a1f6d3259\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"cf0700acc9095c40be42ecaa905f6561164145eeed06c39f030f9f7a08d41d06\"" May 17 00:09:09.691879 containerd[1478]: time="2025-05-17T00:09:09.691823265Z" level=info msg="StartContainer for \"cf0700acc9095c40be42ecaa905f6561164145eeed06c39f030f9f7a08d41d06\"" May 17 00:09:09.695649 systemd[1]: Started cri-containerd-883574d3bf7dd88b71a882337fad8f484c07ae9128f95a05114e404d20249587.scope - libcontainer container 883574d3bf7dd88b71a882337fad8f484c07ae9128f95a05114e404d20249587. May 17 00:09:09.722696 systemd[1]: Started cri-containerd-2cf8d917647d7ce7df44ee8b808684d0f1175b323890700164aa1c088c2f2536.scope - libcontainer container 2cf8d917647d7ce7df44ee8b808684d0f1175b323890700164aa1c088c2f2536. May 17 00:09:09.739805 systemd[1]: Started cri-containerd-cf0700acc9095c40be42ecaa905f6561164145eeed06c39f030f9f7a08d41d06.scope - libcontainer container cf0700acc9095c40be42ecaa905f6561164145eeed06c39f030f9f7a08d41d06. 
May 17 00:09:09.767880 containerd[1478]: time="2025-05-17T00:09:09.767785353Z" level=info msg="StartContainer for \"883574d3bf7dd88b71a882337fad8f484c07ae9128f95a05114e404d20249587\" returns successfully" May 17 00:09:09.773850 kubelet[2295]: E0517 00:09:09.773809 2295 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://168.119.99.67:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-3-n-3b0dbcbd78?timeout=10s\": dial tcp 168.119.99.67:6443: connect: connection refused" interval="1.6s" May 17 00:09:09.786595 containerd[1478]: time="2025-05-17T00:09:09.786479332Z" level=info msg="StartContainer for \"2cf8d917647d7ce7df44ee8b808684d0f1175b323890700164aa1c088c2f2536\" returns successfully" May 17 00:09:09.814931 containerd[1478]: time="2025-05-17T00:09:09.814569721Z" level=info msg="StartContainer for \"cf0700acc9095c40be42ecaa905f6561164145eeed06c39f030f9f7a08d41d06\" returns successfully" May 17 00:09:09.883388 kubelet[2295]: E0517 00:09:09.883338 2295 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://168.119.99.67:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 168.119.99.67:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" May 17 00:09:09.943300 kubelet[2295]: I0517 00:09:09.943201 2295 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-3-n-3b0dbcbd78" May 17 00:09:10.404688 kubelet[2295]: E0517 00:09:10.404651 2295 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-3-n-3b0dbcbd78\" not found" node="ci-4081-3-3-n-3b0dbcbd78" May 17 00:09:10.413890 kubelet[2295]: E0517 00:09:10.410078 2295 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-3-n-3b0dbcbd78\" not found" node="ci-4081-3-3-n-3b0dbcbd78" May 17 00:09:10.413890 kubelet[2295]: E0517 00:09:10.412424 2295 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-3-n-3b0dbcbd78\" not found" node="ci-4081-3-3-n-3b0dbcbd78" May 17 00:09:11.416634 kubelet[2295]: E0517 00:09:11.416115 2295 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-3-n-3b0dbcbd78\" not found" node="ci-4081-3-3-n-3b0dbcbd78" May 17 00:09:11.416634 kubelet[2295]: E0517 00:09:11.416463 2295 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-3-n-3b0dbcbd78\" not found" node="ci-4081-3-3-n-3b0dbcbd78" May 17 00:09:12.357641 kubelet[2295]: I0517 00:09:12.357369 2295 apiserver.go:52] "Watching apiserver" May 17 00:09:12.467198 kubelet[2295]: E0517 00:09:12.467161 2295 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-3-n-3b0dbcbd78\" not found" node="ci-4081-3-3-n-3b0dbcbd78" May 17 00:09:12.470357 kubelet[2295]: I0517 00:09:12.470320 2295 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" May 17 00:09:12.556754 kubelet[2295]: I0517 00:09:12.556369 2295 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-3-n-3b0dbcbd78" May 17 00:09:12.568033 kubelet[2295]: I0517 00:09:12.567522 2295 kubelet.go:3309] "Creating a mirror pod for static pod" 
pod="kube-system/kube-controller-manager-ci-4081-3-3-n-3b0dbcbd78" May 17 00:09:12.641338 kubelet[2295]: E0517 00:09:12.641216 2295 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081-3-3-n-3b0dbcbd78\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081-3-3-n-3b0dbcbd78" May 17 00:09:12.641549 kubelet[2295]: I0517 00:09:12.641484 2295 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-3-n-3b0dbcbd78" May 17 00:09:12.645272 kubelet[2295]: E0517 00:09:12.645126 2295 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-3-n-3b0dbcbd78\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081-3-3-n-3b0dbcbd78" May 17 00:09:12.645272 kubelet[2295]: I0517 00:09:12.645156 2295 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-3-n-3b0dbcbd78" May 17 00:09:12.650509 kubelet[2295]: E0517 00:09:12.648564 2295 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-3-n-3b0dbcbd78\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081-3-3-n-3b0dbcbd78" May 17 00:09:14.882116 systemd[1]: Reloading requested from client PID 2579 ('systemctl') (unit session-7.scope)... May 17 00:09:14.882137 systemd[1]: Reloading... May 17 00:09:14.975542 zram_generator::config[2616]: No configuration found. May 17 00:09:15.094115 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:09:15.194030 systemd[1]: Reloading finished in 311 ms. May 17 00:09:15.232020 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:09:15.249580 systemd[1]: kubelet.service: Deactivated successfully. May 17 00:09:15.250032 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:09:15.262050 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:09:15.393189 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:09:15.405920 (kubelet)[2663]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 17 00:09:15.452666 kubelet[2663]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 00:09:15.452666 kubelet[2663]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 17 00:09:15.452666 kubelet[2663]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 17 00:09:15.453452 kubelet[2663]: I0517 00:09:15.452661 2663 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 17 00:09:15.461485 kubelet[2663]: I0517 00:09:15.461105 2663 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
May 17 00:09:15.461485 kubelet[2663]: I0517 00:09:15.461137 2663 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 17 00:09:15.461485 kubelet[2663]: I0517 00:09:15.461377 2663 server.go:956] "Client rotation is on, will bootstrap in background"
May 17 00:09:15.464707 kubelet[2663]: I0517 00:09:15.463383 2663 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
May 17 00:09:15.466977 kubelet[2663]: I0517 00:09:15.466871 2663 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 17 00:09:15.475789 kubelet[2663]: E0517 00:09:15.475654 2663 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
May 17 00:09:15.475789 kubelet[2663]: I0517 00:09:15.475686 2663 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
May 17 00:09:15.479738 kubelet[2663]: I0517 00:09:15.479707 2663 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 17 00:09:15.479976 kubelet[2663]: I0517 00:09:15.479944 2663 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 17 00:09:15.480153 kubelet[2663]: I0517 00:09:15.479978 2663 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-3-n-3b0dbcbd78","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
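
The nodeConfig dump above packs the kubelet's container-manager settings into one line; the HardEvictionThresholds list inside it is usually the part worth extracting. A small self-contained Python sketch that renders those thresholds readably (the list below is copied from the entry above, with the GracePeriod/MinReclaim fields trimmed):

import json

# HardEvictionThresholds copied from the "Creating Container Manager object" entry above.
thresholds = json.loads("""
[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1}},
 {"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05}},
 {"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15}},
 {"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05}},
 {"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0}}]
""")

for t in thresholds:
    v = t["Value"]
    # Absolute quantities (e.g. 100Mi) are set; percentage-based thresholds carry null there.
    limit = v["Quantity"] if v["Quantity"] else f"{v['Percentage']:.0%}"
    print(t["Signal"], t["Operator"], limit)
# -> nodefs.available LessThan 10% ... memory.available LessThan 100Mi
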
"Creating topology manager with none policy" May 17 00:09:15.480264 kubelet[2663]: I0517 00:09:15.480174 2663 container_manager_linux.go:303] "Creating device plugin manager" May 17 00:09:15.480264 kubelet[2663]: I0517 00:09:15.480219 2663 state_mem.go:36] "Initialized new in-memory state store" May 17 00:09:15.480570 kubelet[2663]: I0517 00:09:15.480554 2663 kubelet.go:480] "Attempting to sync node with API server" May 17 00:09:15.480621 kubelet[2663]: I0517 00:09:15.480575 2663 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" May 17 00:09:15.480621 kubelet[2663]: I0517 00:09:15.480609 2663 kubelet.go:386] "Adding apiserver pod source" May 17 00:09:15.480672 kubelet[2663]: I0517 00:09:15.480626 2663 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 17 00:09:15.485961 kubelet[2663]: I0517 00:09:15.485839 2663 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 17 00:09:15.486634 kubelet[2663]: I0517 00:09:15.486605 2663 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" May 17 00:09:15.492150 kubelet[2663]: I0517 00:09:15.491771 2663 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 17 00:09:15.492150 kubelet[2663]: I0517 00:09:15.491822 2663 server.go:1289] "Started kubelet" May 17 00:09:15.494978 kubelet[2663]: I0517 00:09:15.494372 2663 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 17 00:09:15.508103 kubelet[2663]: I0517 00:09:15.507597 2663 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 May 17 00:09:15.508403 kubelet[2663]: I0517 00:09:15.508375 2663 server.go:317] "Adding debug handlers to kubelet server" May 17 00:09:15.511594 kubelet[2663]: I0517 00:09:15.511495 2663 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 17 00:09:15.511720 kubelet[2663]: I0517 00:09:15.511713 2663 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 17 00:09:15.511991 kubelet[2663]: I0517 00:09:15.511962 2663 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 17 00:09:15.515307 kubelet[2663]: I0517 00:09:15.513932 2663 volume_manager.go:297] "Starting Kubelet Volume Manager" May 17 00:09:15.515307 kubelet[2663]: E0517 00:09:15.514143 2663 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-3-n-3b0dbcbd78\" not found" May 17 00:09:15.516146 kubelet[2663]: I0517 00:09:15.515877 2663 desired_state_of_world_populator.go:150] "Desired state populator starts to run" May 17 00:09:15.516146 kubelet[2663]: I0517 00:09:15.515992 2663 reconciler.go:26] "Reconciler: start to sync state" May 17 00:09:15.518301 kubelet[2663]: I0517 00:09:15.517621 2663 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" May 17 00:09:15.518535 kubelet[2663]: I0517 00:09:15.518481 2663 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" May 17 00:09:15.518535 kubelet[2663]: I0517 00:09:15.518521 2663 status_manager.go:230] "Starting to sync pod status with apiserver" May 17 00:09:15.518599 kubelet[2663]: I0517 00:09:15.518544 2663 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." May 17 00:09:15.518599 kubelet[2663]: I0517 00:09:15.518550 2663 kubelet.go:2436] "Starting kubelet main sync loop" May 17 00:09:15.518599 kubelet[2663]: E0517 00:09:15.518587 2663 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 17 00:09:15.528768 kubelet[2663]: I0517 00:09:15.528734 2663 factory.go:223] Registration of the systemd container factory successfully May 17 00:09:15.528980 kubelet[2663]: I0517 00:09:15.528833 2663 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 17 00:09:15.535430 kubelet[2663]: E0517 00:09:15.534691 2663 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 17 00:09:15.536780 kubelet[2663]: I0517 00:09:15.536742 2663 factory.go:223] Registration of the containerd container factory successfully May 17 00:09:15.580559 kubelet[2663]: I0517 00:09:15.580530 2663 cpu_manager.go:221] "Starting CPU manager" policy="none" May 17 00:09:15.581372 kubelet[2663]: I0517 00:09:15.580747 2663 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 17 00:09:15.581372 kubelet[2663]: I0517 00:09:15.580774 2663 state_mem.go:36] "Initialized new in-memory state store" May 17 00:09:15.581372 kubelet[2663]: I0517 00:09:15.580917 2663 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 17 00:09:15.581372 kubelet[2663]: I0517 00:09:15.580927 2663 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 17 00:09:15.581372 kubelet[2663]: I0517 00:09:15.580944 2663 policy_none.go:49] "None policy: Start" May 17 00:09:15.581372 kubelet[2663]: I0517 00:09:15.580954 2663 memory_manager.go:186] "Starting memorymanager" policy="None" May 17 00:09:15.581372 kubelet[2663]: I0517 00:09:15.580962 2663 state_mem.go:35] "Initializing new in-memory state store" May 17 00:09:15.581372 kubelet[2663]: I0517 00:09:15.581039 2663 state_mem.go:75] "Updated machine memory state" May 17 00:09:15.586803 kubelet[2663]: E0517 00:09:15.585793 2663 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" May 17 00:09:15.586803 kubelet[2663]: I0517 00:09:15.585996 2663 eviction_manager.go:189] "Eviction manager: starting control loop" May 17 00:09:15.586803 kubelet[2663]: I0517 00:09:15.586009 2663 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 17 00:09:15.586803 kubelet[2663]: I0517 00:09:15.586697 2663 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 17 00:09:15.589530 kubelet[2663]: E0517 00:09:15.589490 2663 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" May 17 00:09:15.620067 kubelet[2663]: I0517 00:09:15.620007 2663 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-3-n-3b0dbcbd78" May 17 00:09:15.621370 kubelet[2663]: I0517 00:09:15.621316 2663 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-3-n-3b0dbcbd78" May 17 00:09:15.622155 kubelet[2663]: I0517 00:09:15.622117 2663 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-3-n-3b0dbcbd78" May 17 00:09:15.689896 kubelet[2663]: I0517 00:09:15.689839 2663 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-3-n-3b0dbcbd78" May 17 00:09:15.699613 kubelet[2663]: I0517 00:09:15.699566 2663 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081-3-3-n-3b0dbcbd78" May 17 00:09:15.699905 kubelet[2663]: I0517 00:09:15.699679 2663 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-3-n-3b0dbcbd78" May 17 00:09:15.716539 kubelet[2663]: I0517 00:09:15.716280 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/939c2715c52c6ddc83a413b7bde2dd3e-kubeconfig\") pod \"kube-scheduler-ci-4081-3-3-n-3b0dbcbd78\" (UID: \"939c2715c52c6ddc83a413b7bde2dd3e\") " pod="kube-system/kube-scheduler-ci-4081-3-3-n-3b0dbcbd78" May 17 00:09:15.817166 kubelet[2663]: I0517 00:09:15.817075 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ccc1b1d82d7461ce626bdfc8455dadf4-ca-certs\") pod \"kube-apiserver-ci-4081-3-3-n-3b0dbcbd78\" (UID: \"ccc1b1d82d7461ce626bdfc8455dadf4\") " pod="kube-system/kube-apiserver-ci-4081-3-3-n-3b0dbcbd78" May 17 00:09:15.817166 kubelet[2663]: I0517 00:09:15.817152 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ccc1b1d82d7461ce626bdfc8455dadf4-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-3-n-3b0dbcbd78\" (UID: \"ccc1b1d82d7461ce626bdfc8455dadf4\") " pod="kube-system/kube-apiserver-ci-4081-3-3-n-3b0dbcbd78" May 17 00:09:15.817369 kubelet[2663]: I0517 00:09:15.817201 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/903a616354d1091aabb7077bc8493130-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-3-n-3b0dbcbd78\" (UID: \"903a616354d1091aabb7077bc8493130\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-3b0dbcbd78" May 17 00:09:15.817369 kubelet[2663]: I0517 00:09:15.817235 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/903a616354d1091aabb7077bc8493130-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-3-n-3b0dbcbd78\" (UID: \"903a616354d1091aabb7077bc8493130\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-3b0dbcbd78" May 17 00:09:15.817369 kubelet[2663]: I0517 00:09:15.817274 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/903a616354d1091aabb7077bc8493130-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-3-n-3b0dbcbd78\" (UID: \"903a616354d1091aabb7077bc8493130\") 
" pod="kube-system/kube-controller-manager-ci-4081-3-3-n-3b0dbcbd78" May 17 00:09:15.817369 kubelet[2663]: I0517 00:09:15.817309 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/903a616354d1091aabb7077bc8493130-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-3-n-3b0dbcbd78\" (UID: \"903a616354d1091aabb7077bc8493130\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-3b0dbcbd78" May 17 00:09:15.817584 kubelet[2663]: I0517 00:09:15.817377 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ccc1b1d82d7461ce626bdfc8455dadf4-k8s-certs\") pod \"kube-apiserver-ci-4081-3-3-n-3b0dbcbd78\" (UID: \"ccc1b1d82d7461ce626bdfc8455dadf4\") " pod="kube-system/kube-apiserver-ci-4081-3-3-n-3b0dbcbd78" May 17 00:09:15.817584 kubelet[2663]: I0517 00:09:15.817412 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/903a616354d1091aabb7077bc8493130-ca-certs\") pod \"kube-controller-manager-ci-4081-3-3-n-3b0dbcbd78\" (UID: \"903a616354d1091aabb7077bc8493130\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-3b0dbcbd78" May 17 00:09:16.482564 kubelet[2663]: I0517 00:09:16.482517 2663 apiserver.go:52] "Watching apiserver" May 17 00:09:16.516745 kubelet[2663]: I0517 00:09:16.516625 2663 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" May 17 00:09:16.565165 kubelet[2663]: I0517 00:09:16.561279 2663 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-3-n-3b0dbcbd78" May 17 00:09:16.570937 kubelet[2663]: E0517 00:09:16.570903 2663 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-3-n-3b0dbcbd78\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-3-n-3b0dbcbd78" May 17 00:09:16.595365 kubelet[2663]: I0517 00:09:16.594410 2663 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-3-n-3b0dbcbd78" podStartSLOduration=1.594391747 podStartE2EDuration="1.594391747s" podCreationTimestamp="2025-05-17 00:09:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:09:16.583374739 +0000 UTC m=+1.173217392" watchObservedRunningTime="2025-05-17 00:09:16.594391747 +0000 UTC m=+1.184234360" May 17 00:09:16.618260 kubelet[2663]: I0517 00:09:16.618092 2663 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-3-n-3b0dbcbd78" podStartSLOduration=1.6180696220000002 podStartE2EDuration="1.618069622s" podCreationTimestamp="2025-05-17 00:09:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:09:16.614588742 +0000 UTC m=+1.204431395" watchObservedRunningTime="2025-05-17 00:09:16.618069622 +0000 UTC m=+1.207912275" May 17 00:09:16.619005 kubelet[2663]: I0517 00:09:16.618863 2663 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-3-n-3b0dbcbd78" podStartSLOduration=1.6188514710000002 podStartE2EDuration="1.618851471s" podCreationTimestamp="2025-05-17 00:09:15 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:09:16.595907765 +0000 UTC m=+1.185750418" watchObservedRunningTime="2025-05-17 00:09:16.618851471 +0000 UTC m=+1.208694204" May 17 00:09:21.448499 kubelet[2663]: I0517 00:09:21.448386 2663 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 17 00:09:21.449371 containerd[1478]: time="2025-05-17T00:09:21.449239080Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 17 00:09:21.449845 kubelet[2663]: I0517 00:09:21.449599 2663 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 17 00:09:22.270507 systemd[1]: Created slice kubepods-besteffort-podddaa28be_0ea6_45e5_a5f3_b630fc5cfe1f.slice - libcontainer container kubepods-besteffort-podddaa28be_0ea6_45e5_a5f3_b630fc5cfe1f.slice. May 17 00:09:22.360769 kubelet[2663]: I0517 00:09:22.360695 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ddaa28be-0ea6-45e5-a5f3-b630fc5cfe1f-kube-proxy\") pod \"kube-proxy-j4xtl\" (UID: \"ddaa28be-0ea6-45e5-a5f3-b630fc5cfe1f\") " pod="kube-system/kube-proxy-j4xtl" May 17 00:09:22.360769 kubelet[2663]: I0517 00:09:22.360776 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ddaa28be-0ea6-45e5-a5f3-b630fc5cfe1f-xtables-lock\") pod \"kube-proxy-j4xtl\" (UID: \"ddaa28be-0ea6-45e5-a5f3-b630fc5cfe1f\") " pod="kube-system/kube-proxy-j4xtl" May 17 00:09:22.360951 kubelet[2663]: I0517 00:09:22.360811 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbm4n\" (UniqueName: \"kubernetes.io/projected/ddaa28be-0ea6-45e5-a5f3-b630fc5cfe1f-kube-api-access-xbm4n\") pod \"kube-proxy-j4xtl\" (UID: \"ddaa28be-0ea6-45e5-a5f3-b630fc5cfe1f\") " pod="kube-system/kube-proxy-j4xtl" May 17 00:09:22.360951 kubelet[2663]: I0517 00:09:22.360866 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ddaa28be-0ea6-45e5-a5f3-b630fc5cfe1f-lib-modules\") pod \"kube-proxy-j4xtl\" (UID: \"ddaa28be-0ea6-45e5-a5f3-b630fc5cfe1f\") " pod="kube-system/kube-proxy-j4xtl" May 17 00:09:22.580007 containerd[1478]: time="2025-05-17T00:09:22.579709497Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-j4xtl,Uid:ddaa28be-0ea6-45e5-a5f3-b630fc5cfe1f,Namespace:kube-system,Attempt:0,}" May 17 00:09:22.614342 containerd[1478]: time="2025-05-17T00:09:22.613247935Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:09:22.615796 containerd[1478]: time="2025-05-17T00:09:22.615467804Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:09:22.615796 containerd[1478]: time="2025-05-17T00:09:22.615495484Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:09:22.615796 containerd[1478]: time="2025-05-17T00:09:22.615636806Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
May 17 00:09:22.615796 containerd[1478]: time="2025-05-17T00:09:22.615636806Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:09:22.643776 systemd[1]: Started cri-containerd-9fb56ff0dd5b267e9ba74b715894319da34e355ff241bfbae90bcf65b5744f88.scope - libcontainer container 9fb56ff0dd5b267e9ba74b715894319da34e355ff241bfbae90bcf65b5744f88.
May 17 00:09:22.677407 systemd[1]: Created slice kubepods-besteffort-pod5cea200b_4fa9_49fa_b724_ee1020c42fea.slice - libcontainer container kubepods-besteffort-pod5cea200b_4fa9_49fa_b724_ee1020c42fea.slice.
May 17 00:09:22.706893 containerd[1478]: time="2025-05-17T00:09:22.706850755Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-j4xtl,Uid:ddaa28be-0ea6-45e5-a5f3-b630fc5cfe1f,Namespace:kube-system,Attempt:0,} returns sandbox id \"9fb56ff0dd5b267e9ba74b715894319da34e355ff241bfbae90bcf65b5744f88\""
May 17 00:09:22.712381 containerd[1478]: time="2025-05-17T00:09:22.712321946Z" level=info msg="CreateContainer within sandbox \"9fb56ff0dd5b267e9ba74b715894319da34e355ff241bfbae90bcf65b5744f88\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
May 17 00:09:22.728116 containerd[1478]: time="2025-05-17T00:09:22.727990271Z" level=info msg="CreateContainer within sandbox \"9fb56ff0dd5b267e9ba74b715894319da34e355ff241bfbae90bcf65b5744f88\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"1a5ac85e33ecc86b942a35332f03ee95e2bf15af67a8e25ae311bae4ab56ffe5\""
May 17 00:09:22.728881 containerd[1478]: time="2025-05-17T00:09:22.728855082Z" level=info msg="StartContainer for \"1a5ac85e33ecc86b942a35332f03ee95e2bf15af67a8e25ae311bae4ab56ffe5\""
May 17 00:09:22.757681 systemd[1]: Started cri-containerd-1a5ac85e33ecc86b942a35332f03ee95e2bf15af67a8e25ae311bae4ab56ffe5.scope - libcontainer container 1a5ac85e33ecc86b942a35332f03ee95e2bf15af67a8e25ae311bae4ab56ffe5.
May 17 00:09:22.764154 kubelet[2663]: I0517 00:09:22.763844 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/5cea200b-4fa9-49fa-b724-ee1020c42fea-var-lib-calico\") pod \"tigera-operator-844669ff44-gzmpv\" (UID: \"5cea200b-4fa9-49fa-b724-ee1020c42fea\") " pod="tigera-operator/tigera-operator-844669ff44-gzmpv"
May 17 00:09:22.764154 kubelet[2663]: I0517 00:09:22.763893 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79zgf\" (UniqueName: \"kubernetes.io/projected/5cea200b-4fa9-49fa-b724-ee1020c42fea-kube-api-access-79zgf\") pod \"tigera-operator-844669ff44-gzmpv\" (UID: \"5cea200b-4fa9-49fa-b724-ee1020c42fea\") " pod="tigera-operator/tigera-operator-844669ff44-gzmpv"
May 17 00:09:22.791640 containerd[1478]: time="2025-05-17T00:09:22.791594820Z" level=info msg="StartContainer for \"1a5ac85e33ecc86b942a35332f03ee95e2bf15af67a8e25ae311bae4ab56ffe5\" returns successfully"
May 17 00:09:22.982805 containerd[1478]: time="2025-05-17T00:09:22.981966262Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-844669ff44-gzmpv,Uid:5cea200b-4fa9-49fa-b724-ee1020c42fea,Namespace:tigera-operator,Attempt:0,}"
May 17 00:09:23.005856 containerd[1478]: time="2025-05-17T00:09:23.005751333Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 17 00:09:23.006091 containerd[1478]: time="2025-05-17T00:09:23.005822734Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 17 00:09:23.006091 containerd[1478]: time="2025-05-17T00:09:23.005838734Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:09:23.006091 containerd[1478]: time="2025-05-17T00:09:23.005917615Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:09:23.026768 systemd[1]: Started cri-containerd-190677fd6398d8ebae1e044896fa1c7610cb49819e7402ee03435dab380606bf.scope - libcontainer container 190677fd6398d8ebae1e044896fa1c7610cb49819e7402ee03435dab380606bf.
May 17 00:09:23.065294 containerd[1478]: time="2025-05-17T00:09:23.065023638Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-844669ff44-gzmpv,Uid:5cea200b-4fa9-49fa-b724-ee1020c42fea,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"190677fd6398d8ebae1e044896fa1c7610cb49819e7402ee03435dab380606bf\""
May 17 00:09:23.068582 containerd[1478]: time="2025-05-17T00:09:23.067713994Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.0\""
May 17 00:09:24.522684 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1199523584.mount: Deactivated successfully.
May 17 00:09:25.126910 containerd[1478]: time="2025-05-17T00:09:25.125381431Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:09:25.126910 containerd[1478]: time="2025-05-17T00:09:25.126659088Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.0: active requests=0, bytes read=22143480"
May 17 00:09:25.126910 containerd[1478]: time="2025-05-17T00:09:25.126837571Z" level=info msg="ImageCreate event name:\"sha256:171854d50ba608218142ad5d32c7dd12ce55d536f02872e56e7c04c1f0a96a6b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:09:25.129433 containerd[1478]: time="2025-05-17T00:09:25.129385726Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:e0a34b265aebce1a2db906d8dad99190706e8bf3910cae626b9c2eb6bbb21775\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:09:25.130622 containerd[1478]: time="2025-05-17T00:09:25.130577742Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.0\" with image id \"sha256:171854d50ba608218142ad5d32c7dd12ce55d536f02872e56e7c04c1f0a96a6b\", repo tag \"quay.io/tigera/operator:v1.38.0\", repo digest \"quay.io/tigera/operator@sha256:e0a34b265aebce1a2db906d8dad99190706e8bf3910cae626b9c2eb6bbb21775\", size \"22139475\" in 2.062821548s"
May 17 00:09:25.130622 containerd[1478]: time="2025-05-17T00:09:25.130621623Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.0\" returns image reference \"sha256:171854d50ba608218142ad5d32c7dd12ce55d536f02872e56e7c04c1f0a96a6b\""
May 17 00:09:25.135498 containerd[1478]: time="2025-05-17T00:09:25.135462329Z" level=info msg="CreateContainer within sandbox \"190677fd6398d8ebae1e044896fa1c7610cb49819e7402ee03435dab380606bf\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
May 17 00:09:25.155611 containerd[1478]: time="2025-05-17T00:09:25.155563003Z" level=info msg="CreateContainer within sandbox \"190677fd6398d8ebae1e044896fa1c7610cb49819e7402ee03435dab380606bf\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"47552ac5473cb571f9fc0f762238a4500f690b49a7466a753be6010455272a0d\""
May 17 00:09:25.156773 containerd[1478]: time="2025-05-17T00:09:25.156749979Z" level=info msg="StartContainer for \"47552ac5473cb571f9fc0f762238a4500f690b49a7466a753be6010455272a0d\""
May 17 00:09:25.185668 systemd[1]: Started cri-containerd-47552ac5473cb571f9fc0f762238a4500f690b49a7466a753be6010455272a0d.scope - libcontainer container 47552ac5473cb571f9fc0f762238a4500f690b49a7466a753be6010455272a0d.
May 17 00:09:25.224329 containerd[1478]: time="2025-05-17T00:09:25.224272741Z" level=info msg="StartContainer for \"47552ac5473cb571f9fc0f762238a4500f690b49a7466a753be6010455272a0d\" returns successfully"
May 17 00:09:25.531731 kubelet[2663]: I0517 00:09:25.531568 2663 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-j4xtl" podStartSLOduration=3.5315352559999997 podStartE2EDuration="3.531535256s" podCreationTimestamp="2025-05-17 00:09:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:09:23.592109061 +0000 UTC m=+8.181951674" watchObservedRunningTime="2025-05-17 00:09:25.531535256 +0000 UTC m=+10.121377909"
May 17 00:09:25.620475 kubelet[2663]: I0517 00:09:25.620356 2663 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-844669ff44-gzmpv" podStartSLOduration=1.555206451 podStartE2EDuration="3.620339709s" podCreationTimestamp="2025-05-17 00:09:22 +0000 UTC" firstStartedPulling="2025-05-17 00:09:23.067096466 +0000 UTC m=+7.656939119" lastFinishedPulling="2025-05-17 00:09:25.132229764 +0000 UTC m=+9.722072377" observedRunningTime="2025-05-17 00:09:25.601720054 +0000 UTC m=+10.191562667" watchObservedRunningTime="2025-05-17 00:09:25.620339709 +0000 UTC m=+10.210182362"
May 17 00:09:31.284032 sudo[1840]: pam_unix(sudo:session): session closed for user root
May 17 00:09:31.446969 sshd[1837]: pam_unix(sshd:session): session closed for user core
May 17 00:09:31.454699 systemd[1]: sshd@6-168.119.99.67:22-139.178.68.195:42262.service: Deactivated successfully.
May 17 00:09:31.462844 systemd[1]: session-7.scope: Deactivated successfully.
May 17 00:09:31.463129 systemd[1]: session-7.scope: Consumed 6.891s CPU time, 151.4M memory peak, 0B memory swap peak.
May 17 00:09:31.465459 systemd-logind[1457]: Session 7 logged out. Waiting for processes to exit.
May 17 00:09:31.467243 systemd-logind[1457]: Removed session 7.
May 17 00:09:36.061476 systemd[1]: Created slice kubepods-besteffort-pod15ab2e96_ec8e_45d0_a9fa_fe53fc76a6ab.slice - libcontainer container kubepods-besteffort-pod15ab2e96_ec8e_45d0_a9fa_fe53fc76a6ab.slice.
May 17 00:09:36.155226 kubelet[2663]: I0517 00:09:36.155056 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/15ab2e96-ec8e-45d0-a9fa-fe53fc76a6ab-typha-certs\") pod \"calico-typha-79c79c5d94-cdcvv\" (UID: \"15ab2e96-ec8e-45d0-a9fa-fe53fc76a6ab\") " pod="calico-system/calico-typha-79c79c5d94-cdcvv"
May 17 00:09:36.155226 kubelet[2663]: I0517 00:09:36.155099 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8f5n4\" (UniqueName: \"kubernetes.io/projected/15ab2e96-ec8e-45d0-a9fa-fe53fc76a6ab-kube-api-access-8f5n4\") pod \"calico-typha-79c79c5d94-cdcvv\" (UID: \"15ab2e96-ec8e-45d0-a9fa-fe53fc76a6ab\") " pod="calico-system/calico-typha-79c79c5d94-cdcvv"
May 17 00:09:36.155226 kubelet[2663]: I0517 00:09:36.155117 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/15ab2e96-ec8e-45d0-a9fa-fe53fc76a6ab-tigera-ca-bundle\") pod \"calico-typha-79c79c5d94-cdcvv\" (UID: \"15ab2e96-ec8e-45d0-a9fa-fe53fc76a6ab\") " pod="calico-system/calico-typha-79c79c5d94-cdcvv"
May 17 00:09:36.223462 systemd[1]: Created slice kubepods-besteffort-podd794f1db_00ed_4dd4_8f9d_aa17550c9e80.slice - libcontainer container kubepods-besteffort-podd794f1db_00ed_4dd4_8f9d_aa17550c9e80.slice.
May 17 00:09:36.256479 kubelet[2663]: I0517 00:09:36.255813 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/d794f1db-00ed-4dd4-8f9d-aa17550c9e80-node-certs\") pod \"calico-node-qvzs6\" (UID: \"d794f1db-00ed-4dd4-8f9d-aa17550c9e80\") " pod="calico-system/calico-node-qvzs6"
May 17 00:09:36.256479 kubelet[2663]: I0517 00:09:36.255860 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d794f1db-00ed-4dd4-8f9d-aa17550c9e80-var-lib-calico\") pod \"calico-node-qvzs6\" (UID: \"d794f1db-00ed-4dd4-8f9d-aa17550c9e80\") " pod="calico-system/calico-node-qvzs6"
May 17 00:09:36.256479 kubelet[2663]: I0517 00:09:36.255904 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/d794f1db-00ed-4dd4-8f9d-aa17550c9e80-var-run-calico\") pod \"calico-node-qvzs6\" (UID: \"d794f1db-00ed-4dd4-8f9d-aa17550c9e80\") " pod="calico-system/calico-node-qvzs6"
May 17 00:09:36.256479 kubelet[2663]: I0517 00:09:36.255919 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmkkh\" (UniqueName: \"kubernetes.io/projected/d794f1db-00ed-4dd4-8f9d-aa17550c9e80-kube-api-access-bmkkh\") pod \"calico-node-qvzs6\" (UID: \"d794f1db-00ed-4dd4-8f9d-aa17550c9e80\") " pod="calico-system/calico-node-qvzs6"
May 17 00:09:36.256479 kubelet[2663]: I0517 00:09:36.255936 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/d794f1db-00ed-4dd4-8f9d-aa17550c9e80-cni-net-dir\") pod \"calico-node-qvzs6\" (UID: \"d794f1db-00ed-4dd4-8f9d-aa17550c9e80\") " pod="calico-system/calico-node-qvzs6"
May 17 00:09:36.256717 kubelet[2663]: I0517 00:09:36.255949 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d794f1db-00ed-4dd4-8f9d-aa17550c9e80-tigera-ca-bundle\") pod \"calico-node-qvzs6\" (UID: \"d794f1db-00ed-4dd4-8f9d-aa17550c9e80\") " pod="calico-system/calico-node-qvzs6"
May 17 00:09:36.256717 kubelet[2663]: I0517 00:09:36.255963 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d794f1db-00ed-4dd4-8f9d-aa17550c9e80-xtables-lock\") pod \"calico-node-qvzs6\" (UID: \"d794f1db-00ed-4dd4-8f9d-aa17550c9e80\") " pod="calico-system/calico-node-qvzs6"
May 17 00:09:36.256717 kubelet[2663]: I0517 00:09:36.255978 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/d794f1db-00ed-4dd4-8f9d-aa17550c9e80-cni-log-dir\") pod \"calico-node-qvzs6\" (UID: \"d794f1db-00ed-4dd4-8f9d-aa17550c9e80\") " pod="calico-system/calico-node-qvzs6"
May 17 00:09:36.256717 kubelet[2663]: I0517 00:09:36.255998 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/d794f1db-00ed-4dd4-8f9d-aa17550c9e80-cni-bin-dir\") pod \"calico-node-qvzs6\" (UID: \"d794f1db-00ed-4dd4-8f9d-aa17550c9e80\") " pod="calico-system/calico-node-qvzs6"
May 17 00:09:36.256717 kubelet[2663]: I0517 00:09:36.256012 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/d794f1db-00ed-4dd4-8f9d-aa17550c9e80-flexvol-driver-host\") pod \"calico-node-qvzs6\" (UID: \"d794f1db-00ed-4dd4-8f9d-aa17550c9e80\") " pod="calico-system/calico-node-qvzs6"
May 17 00:09:36.256824 kubelet[2663]: I0517 00:09:36.256035 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d794f1db-00ed-4dd4-8f9d-aa17550c9e80-lib-modules\") pod \"calico-node-qvzs6\" (UID: \"d794f1db-00ed-4dd4-8f9d-aa17550c9e80\") " pod="calico-system/calico-node-qvzs6"
May 17 00:09:36.256824 kubelet[2663]: I0517 00:09:36.256048 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/d794f1db-00ed-4dd4-8f9d-aa17550c9e80-policysync\") pod \"calico-node-qvzs6\" (UID: \"d794f1db-00ed-4dd4-8f9d-aa17550c9e80\") " pod="calico-system/calico-node-qvzs6"
May 17 00:09:36.358341 kubelet[2663]: E0517 00:09:36.358188 2663 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-g9lf2" podUID="bf7f5509-6eae-41c9-a82c-194eb8fdf825"
May 17 00:09:36.364376 kubelet[2663]: E0517 00:09:36.364314 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 17 00:09:36.364376 kubelet[2663]: W0517 00:09:36.364354 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 17 00:09:36.364376 kubelet[2663]: E0517 00:09:36.364380 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 17 00:09:36.366086 containerd[1478]: time="2025-05-17T00:09:36.365729131Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-79c79c5d94-cdcvv,Uid:15ab2e96-ec8e-45d0-a9fa-fe53fc76a6ab,Namespace:calico-system,Attempt:0,}"
May 17 00:09:36.368477 kubelet[2663]: E0517 00:09:36.367491 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 17 00:09:36.368477 kubelet[2663]: W0517 00:09:36.367518 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 17 00:09:36.368477 kubelet[2663]: E0517 00:09:36.367557 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 17 00:09:36.371914 kubelet[2663]: E0517 00:09:36.371877 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 17 00:09:36.371914 kubelet[2663]: W0517 00:09:36.371902 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 17 00:09:36.372037 kubelet[2663]: E0517 00:09:36.371922 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 17 00:09:36.375048 kubelet[2663]: E0517 00:09:36.375010 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 17 00:09:36.375048 kubelet[2663]: W0517 00:09:36.375044 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 17 00:09:36.375580 kubelet[2663]: E0517 00:09:36.375065 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
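
The repeating driver-call.go/plugins.go triplets above and below record the kubelet probing the FlexVolume directory nodeagent~uds: the uds executable is not installed yet ("executable file not found in $PATH"), the call therefore produces no stdout, and unmarshalling the empty reply fails with "unexpected end of JSON input". The noise stops once Calico's flexvol-driver-host host-path volume (mounted above for calico-node-qvzs6) is populated with the real driver. For reference only, a minimal Python sketch of the init handshake a FlexVolume driver is expected to answer (status format per the FlexVolume convention; this is not Calico's driver):

#!/usr/bin/env python3
import json
import sys

def main() -> int:
    # The kubelet probes a driver by executing it as: <driver> init
    if len(sys.argv) > 1 and sys.argv[1] == "init":
        # Printing nothing is exactly what produced "unexpected end of JSON input"
        # above; a conforming driver replies with a JSON status object on stdout.
        print(json.dumps({"status": "Success", "capabilities": {"attach": False}}))
        return 0
    print(json.dumps({"status": "Not supported"}))
    return 1

if __name__ == "__main__":
    sys.exit(main())
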
Error: unexpected end of JSON input" May 17 00:09:36.376426 kubelet[2663]: E0517 00:09:36.376399 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:09:36.376834 kubelet[2663]: W0517 00:09:36.376475 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:09:36.376834 kubelet[2663]: E0517 00:09:36.376489 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:09:36.377127 kubelet[2663]: E0517 00:09:36.377031 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:09:36.377127 kubelet[2663]: W0517 00:09:36.377045 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:09:36.377127 kubelet[2663]: E0517 00:09:36.377073 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:09:36.377793 kubelet[2663]: E0517 00:09:36.377724 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:09:36.377999 kubelet[2663]: W0517 00:09:36.377894 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:09:36.377999 kubelet[2663]: E0517 00:09:36.377913 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:09:36.378254 kubelet[2663]: E0517 00:09:36.378218 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:09:36.378479 kubelet[2663]: W0517 00:09:36.378350 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:09:36.378479 kubelet[2663]: E0517 00:09:36.378368 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:09:36.379118 kubelet[2663]: E0517 00:09:36.379038 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:09:36.379118 kubelet[2663]: W0517 00:09:36.379059 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:09:36.379118 kubelet[2663]: E0517 00:09:36.379072 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:09:36.401457 kubelet[2663]: E0517 00:09:36.401338 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:09:36.401457 kubelet[2663]: W0517 00:09:36.401363 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:09:36.401457 kubelet[2663]: E0517 00:09:36.401393 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:09:36.414827 containerd[1478]: time="2025-05-17T00:09:36.413242706Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:09:36.414827 containerd[1478]: time="2025-05-17T00:09:36.413301667Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:09:36.414827 containerd[1478]: time="2025-05-17T00:09:36.413322508Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:09:36.414827 containerd[1478]: time="2025-05-17T00:09:36.413405389Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:09:36.438476 systemd[1]: Started cri-containerd-fb20c80fdbb0cd62b327a191034ec7ac1f760ca0acd9ffe5cfa16d5f66bf4447.scope - libcontainer container fb20c80fdbb0cd62b327a191034ec7ac1f760ca0acd9ffe5cfa16d5f66bf4447. May 17 00:09:36.442882 kubelet[2663]: E0517 00:09:36.442706 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:09:36.442882 kubelet[2663]: W0517 00:09:36.442751 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:09:36.442882 kubelet[2663]: E0517 00:09:36.442777 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:09:36.443538 kubelet[2663]: E0517 00:09:36.443506 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:09:36.444182 kubelet[2663]: W0517 00:09:36.443697 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:09:36.444182 kubelet[2663]: E0517 00:09:36.443770 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:09:36.444511 kubelet[2663]: E0517 00:09:36.444404 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:09:36.444511 kubelet[2663]: W0517 00:09:36.444443 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:09:36.444511 kubelet[2663]: E0517 00:09:36.444457 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:09:36.444914 kubelet[2663]: E0517 00:09:36.444848 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:09:36.444914 kubelet[2663]: W0517 00:09:36.444861 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:09:36.444914 kubelet[2663]: E0517 00:09:36.444871 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:09:36.445316 kubelet[2663]: E0517 00:09:36.445252 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:09:36.446477 kubelet[2663]: W0517 00:09:36.445371 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:09:36.446477 kubelet[2663]: E0517 00:09:36.445404 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:09:36.446873 kubelet[2663]: E0517 00:09:36.446846 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:09:36.447016 kubelet[2663]: W0517 00:09:36.446956 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:09:36.447016 kubelet[2663]: E0517 00:09:36.446975 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:09:36.447326 kubelet[2663]: E0517 00:09:36.447266 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:09:36.447326 kubelet[2663]: W0517 00:09:36.447278 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:09:36.447326 kubelet[2663]: E0517 00:09:36.447288 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:09:36.447756 kubelet[2663]: E0517 00:09:36.447652 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:09:36.447756 kubelet[2663]: W0517 00:09:36.447663 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:09:36.447756 kubelet[2663]: E0517 00:09:36.447676 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:09:36.448072 kubelet[2663]: E0517 00:09:36.447988 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:09:36.448072 kubelet[2663]: W0517 00:09:36.448000 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:09:36.448072 kubelet[2663]: E0517 00:09:36.448011 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:09:36.448351 kubelet[2663]: E0517 00:09:36.448272 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:09:36.448351 kubelet[2663]: W0517 00:09:36.448282 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:09:36.448351 kubelet[2663]: E0517 00:09:36.448291 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:09:36.448672 kubelet[2663]: E0517 00:09:36.448587 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:09:36.448672 kubelet[2663]: W0517 00:09:36.448598 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:09:36.448672 kubelet[2663]: E0517 00:09:36.448628 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:09:36.449027 kubelet[2663]: E0517 00:09:36.448923 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:09:36.449027 kubelet[2663]: W0517 00:09:36.448934 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:09:36.449027 kubelet[2663]: E0517 00:09:36.448944 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:09:36.449416 kubelet[2663]: E0517 00:09:36.449358 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:09:36.449416 kubelet[2663]: W0517 00:09:36.449369 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:09:36.449416 kubelet[2663]: E0517 00:09:36.449380 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:09:36.450105 kubelet[2663]: E0517 00:09:36.449970 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:09:36.450105 kubelet[2663]: W0517 00:09:36.449983 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:09:36.450105 kubelet[2663]: E0517 00:09:36.449994 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:09:36.451641 kubelet[2663]: E0517 00:09:36.450321 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:09:36.451641 kubelet[2663]: W0517 00:09:36.450332 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:09:36.451641 kubelet[2663]: E0517 00:09:36.450342 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:09:36.452044 kubelet[2663]: E0517 00:09:36.451957 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:09:36.452044 kubelet[2663]: W0517 00:09:36.451972 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:09:36.452044 kubelet[2663]: E0517 00:09:36.451984 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:09:36.452398 kubelet[2663]: E0517 00:09:36.452386 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:09:36.452670 kubelet[2663]: W0517 00:09:36.452541 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:09:36.452670 kubelet[2663]: E0517 00:09:36.452558 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:09:36.452982 kubelet[2663]: E0517 00:09:36.452938 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:09:36.452982 kubelet[2663]: W0517 00:09:36.452951 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:09:36.452982 kubelet[2663]: E0517 00:09:36.452961 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:09:36.453843 kubelet[2663]: E0517 00:09:36.453731 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:09:36.453843 kubelet[2663]: W0517 00:09:36.453745 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:09:36.453843 kubelet[2663]: E0517 00:09:36.453756 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:09:36.454214 kubelet[2663]: E0517 00:09:36.454114 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:09:36.454214 kubelet[2663]: W0517 00:09:36.454137 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:09:36.454214 kubelet[2663]: E0517 00:09:36.454149 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:09:36.457821 kubelet[2663]: E0517 00:09:36.457701 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:09:36.457821 kubelet[2663]: W0517 00:09:36.457720 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:09:36.457821 kubelet[2663]: E0517 00:09:36.457741 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:09:36.457821 kubelet[2663]: I0517 00:09:36.457771 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/bf7f5509-6eae-41c9-a82c-194eb8fdf825-socket-dir\") pod \"csi-node-driver-g9lf2\" (UID: \"bf7f5509-6eae-41c9-a82c-194eb8fdf825\") " pod="calico-system/csi-node-driver-g9lf2" May 17 00:09:36.458257 kubelet[2663]: E0517 00:09:36.458169 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:09:36.458257 kubelet[2663]: W0517 00:09:36.458185 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:09:36.458257 kubelet[2663]: E0517 00:09:36.458196 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:09:36.458257 kubelet[2663]: I0517 00:09:36.458224 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/bf7f5509-6eae-41c9-a82c-194eb8fdf825-varrun\") pod \"csi-node-driver-g9lf2\" (UID: \"bf7f5509-6eae-41c9-a82c-194eb8fdf825\") " pod="calico-system/csi-node-driver-g9lf2" May 17 00:09:36.460575 kubelet[2663]: E0517 00:09:36.460546 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:09:36.460575 kubelet[2663]: W0517 00:09:36.460568 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:09:36.460575 kubelet[2663]: E0517 00:09:36.460582 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:09:36.461579 kubelet[2663]: E0517 00:09:36.461558 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:09:36.461579 kubelet[2663]: W0517 00:09:36.461574 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:09:36.461736 kubelet[2663]: E0517 00:09:36.461587 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:09:36.461846 kubelet[2663]: E0517 00:09:36.461831 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:09:36.461846 kubelet[2663]: W0517 00:09:36.461842 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:09:36.462046 kubelet[2663]: E0517 00:09:36.461853 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:09:36.462046 kubelet[2663]: I0517 00:09:36.461981 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6nsw\" (UniqueName: \"kubernetes.io/projected/bf7f5509-6eae-41c9-a82c-194eb8fdf825-kube-api-access-g6nsw\") pod \"csi-node-driver-g9lf2\" (UID: \"bf7f5509-6eae-41c9-a82c-194eb8fdf825\") " pod="calico-system/csi-node-driver-g9lf2" May 17 00:09:36.462277 kubelet[2663]: E0517 00:09:36.462260 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:09:36.462277 kubelet[2663]: W0517 00:09:36.462273 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:09:36.462372 kubelet[2663]: E0517 00:09:36.462288 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:09:36.462495 kubelet[2663]: E0517 00:09:36.462479 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:09:36.462495 kubelet[2663]: W0517 00:09:36.462491 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:09:36.462731 kubelet[2663]: E0517 00:09:36.462502 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:09:36.462824 kubelet[2663]: E0517 00:09:36.462807 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:09:36.462824 kubelet[2663]: W0517 00:09:36.462820 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:09:36.462941 kubelet[2663]: E0517 00:09:36.462830 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:09:36.462941 kubelet[2663]: I0517 00:09:36.462853 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/bf7f5509-6eae-41c9-a82c-194eb8fdf825-registration-dir\") pod \"csi-node-driver-g9lf2\" (UID: \"bf7f5509-6eae-41c9-a82c-194eb8fdf825\") " pod="calico-system/csi-node-driver-g9lf2" May 17 00:09:36.463053 kubelet[2663]: E0517 00:09:36.463037 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:09:36.463053 kubelet[2663]: W0517 00:09:36.463049 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:09:36.463112 kubelet[2663]: E0517 00:09:36.463058 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:09:36.463112 kubelet[2663]: I0517 00:09:36.463078 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bf7f5509-6eae-41c9-a82c-194eb8fdf825-kubelet-dir\") pod \"csi-node-driver-g9lf2\" (UID: \"bf7f5509-6eae-41c9-a82c-194eb8fdf825\") " pod="calico-system/csi-node-driver-g9lf2" May 17 00:09:36.463554 kubelet[2663]: E0517 00:09:36.463533 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:09:36.463554 kubelet[2663]: W0517 00:09:36.463549 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:09:36.463695 kubelet[2663]: E0517 00:09:36.463560 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:09:36.464137 kubelet[2663]: E0517 00:09:36.464117 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:09:36.464137 kubelet[2663]: W0517 00:09:36.464132 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:09:36.464282 kubelet[2663]: E0517 00:09:36.464145 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:09:36.464853 kubelet[2663]: E0517 00:09:36.464830 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:09:36.464853 kubelet[2663]: W0517 00:09:36.464846 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:09:36.465006 kubelet[2663]: E0517 00:09:36.464860 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:09:36.465355 kubelet[2663]: E0517 00:09:36.465305 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:09:36.465355 kubelet[2663]: W0517 00:09:36.465321 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:09:36.465355 kubelet[2663]: E0517 00:09:36.465334 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:09:36.466780 kubelet[2663]: E0517 00:09:36.466747 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:09:36.466780 kubelet[2663]: W0517 00:09:36.466769 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:09:36.467115 kubelet[2663]: E0517 00:09:36.466786 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:09:36.467189 kubelet[2663]: E0517 00:09:36.467172 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:09:36.467189 kubelet[2663]: W0517 00:09:36.467188 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:09:36.467260 kubelet[2663]: E0517 00:09:36.467199 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:09:36.527825 containerd[1478]: time="2025-05-17T00:09:36.527782438Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-qvzs6,Uid:d794f1db-00ed-4dd4-8f9d-aa17550c9e80,Namespace:calico-system,Attempt:0,}" May 17 00:09:36.563467 containerd[1478]: time="2025-05-17T00:09:36.562722139Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:09:36.563467 containerd[1478]: time="2025-05-17T00:09:36.562779780Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:09:36.563467 containerd[1478]: time="2025-05-17T00:09:36.562790700Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:09:36.563467 containerd[1478]: time="2025-05-17T00:09:36.562881501Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:09:36.564587 kubelet[2663]: E0517 00:09:36.564331 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:09:36.564587 kubelet[2663]: W0517 00:09:36.564352 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:09:36.564587 kubelet[2663]: E0517 00:09:36.564373 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:09:36.565772 kubelet[2663]: E0517 00:09:36.565534 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:09:36.565772 kubelet[2663]: W0517 00:09:36.565553 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:09:36.565772 kubelet[2663]: E0517 00:09:36.565578 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:09:36.566505 kubelet[2663]: E0517 00:09:36.566252 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:09:36.566505 kubelet[2663]: W0517 00:09:36.566266 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:09:36.566505 kubelet[2663]: E0517 00:09:36.566279 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:09:36.567046 kubelet[2663]: E0517 00:09:36.566886 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:09:36.567046 kubelet[2663]: W0517 00:09:36.566900 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:09:36.567046 kubelet[2663]: E0517 00:09:36.566911 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:09:36.569011 kubelet[2663]: E0517 00:09:36.568983 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:09:36.569011 kubelet[2663]: W0517 00:09:36.569006 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:09:36.569120 kubelet[2663]: E0517 00:09:36.569019 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:09:36.569544 kubelet[2663]: E0517 00:09:36.569524 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:09:36.569705 kubelet[2663]: W0517 00:09:36.569540 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:09:36.569705 kubelet[2663]: E0517 00:09:36.569564 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:09:36.570055 kubelet[2663]: E0517 00:09:36.570037 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:09:36.570055 kubelet[2663]: W0517 00:09:36.570052 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:09:36.570529 kubelet[2663]: E0517 00:09:36.570075 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:09:36.570725 kubelet[2663]: E0517 00:09:36.570705 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:09:36.570780 kubelet[2663]: W0517 00:09:36.570722 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:09:36.570780 kubelet[2663]: E0517 00:09:36.570746 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:09:36.570951 kubelet[2663]: E0517 00:09:36.570938 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:09:36.570951 kubelet[2663]: W0517 00:09:36.570948 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:09:36.571040 kubelet[2663]: E0517 00:09:36.570960 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:09:36.572078 kubelet[2663]: E0517 00:09:36.572053 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:09:36.572078 kubelet[2663]: W0517 00:09:36.572070 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:09:36.572302 kubelet[2663]: E0517 00:09:36.572087 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:09:36.572405 kubelet[2663]: E0517 00:09:36.572387 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:09:36.572405 kubelet[2663]: W0517 00:09:36.572401 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:09:36.572659 kubelet[2663]: E0517 00:09:36.572412 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:09:36.574512 kubelet[2663]: E0517 00:09:36.573402 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:09:36.574512 kubelet[2663]: W0517 00:09:36.573426 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:09:36.574512 kubelet[2663]: E0517 00:09:36.573462 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:09:36.574771 kubelet[2663]: E0517 00:09:36.574536 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:09:36.574771 kubelet[2663]: W0517 00:09:36.574551 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:09:36.574771 kubelet[2663]: E0517 00:09:36.574567 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:09:36.575005 kubelet[2663]: E0517 00:09:36.574881 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:09:36.575005 kubelet[2663]: W0517 00:09:36.574902 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:09:36.575005 kubelet[2663]: E0517 00:09:36.574914 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:09:36.575659 kubelet[2663]: E0517 00:09:36.575637 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:09:36.575659 kubelet[2663]: W0517 00:09:36.575652 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:09:36.575659 kubelet[2663]: E0517 00:09:36.575664 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:09:36.576546 kubelet[2663]: E0517 00:09:36.576522 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:09:36.576546 kubelet[2663]: W0517 00:09:36.576539 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:09:36.577032 kubelet[2663]: E0517 00:09:36.576561 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:09:36.577032 kubelet[2663]: E0517 00:09:36.576768 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:09:36.577032 kubelet[2663]: W0517 00:09:36.576776 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:09:36.577032 kubelet[2663]: E0517 00:09:36.576785 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:09:36.577032 kubelet[2663]: E0517 00:09:36.576948 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:09:36.577032 kubelet[2663]: W0517 00:09:36.576955 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:09:36.577032 kubelet[2663]: E0517 00:09:36.576963 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:09:36.579656 kubelet[2663]: E0517 00:09:36.579631 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:09:36.579656 kubelet[2663]: W0517 00:09:36.579646 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:09:36.579656 kubelet[2663]: E0517 00:09:36.579659 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:09:36.581242 kubelet[2663]: E0517 00:09:36.581207 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:09:36.581242 kubelet[2663]: W0517 00:09:36.581228 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:09:36.581242 kubelet[2663]: E0517 00:09:36.581241 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:09:36.581758 kubelet[2663]: E0517 00:09:36.581734 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:09:36.581758 kubelet[2663]: W0517 00:09:36.581751 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:09:36.581758 kubelet[2663]: E0517 00:09:36.581764 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:09:36.582059 kubelet[2663]: E0517 00:09:36.582036 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:09:36.582059 kubelet[2663]: W0517 00:09:36.582053 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:09:36.582126 kubelet[2663]: E0517 00:09:36.582063 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:09:36.585396 kubelet[2663]: E0517 00:09:36.584534 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:09:36.585396 kubelet[2663]: W0517 00:09:36.584554 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:09:36.585396 kubelet[2663]: E0517 00:09:36.584585 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:09:36.585396 kubelet[2663]: E0517 00:09:36.584850 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:09:36.585396 kubelet[2663]: W0517 00:09:36.584858 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:09:36.585396 kubelet[2663]: E0517 00:09:36.584868 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:09:36.585396 kubelet[2663]: E0517 00:09:36.585119 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:09:36.585396 kubelet[2663]: W0517 00:09:36.585129 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:09:36.585396 kubelet[2663]: E0517 00:09:36.585139 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:09:36.599660 systemd[1]: Started cri-containerd-fc06f5e5ecf04bc3cf9988ddda034841d7630ba2df6f32e74ad1b676dfbe1305.scope - libcontainer container fc06f5e5ecf04bc3cf9988ddda034841d7630ba2df6f32e74ad1b676dfbe1305. 
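The repeated kubelet error triplets above all report one underlying condition: FlexVolume probing finds a plugin directory named nodeagent~uds under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/, but the driver executable uds inside it is missing, so the init call produces no output and kubelet's attempt to decode that empty output as JSON fails. The following is a minimal sketch of that failing call path using only Go's standard library; the driverStatus shape is illustrative, not kubelet's actual type.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// driverStatus is an illustrative stand-in for the structure kubelet
// expects a FlexVolume driver to print as JSON on stdout.
type driverStatus struct {
	Status  string `json:"status"`
	Message string `json:"message,omitempty"`
}

func main() {
	// Invoke the driver the way kubelet's driver-call does: run the
	// executable with the command name ("init") as its argument.
	out, err := exec.Command(
		"/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds",
		"init",
	).CombinedOutput()
	if err != nil {
		// With the binary absent, the call fails and out stays empty;
		// kubelet logs this step as "executable file not found in $PATH".
		fmt.Println("driver call failed:", err)
	}

	var st driverStatus
	// Unmarshalling the empty output reproduces the paired error in the
	// log: "unexpected end of JSON input".
	if err := json.Unmarshal(out, &st); err != nil {
		fmt.Println("unmarshal failed:", err)
	}
}

Because plugin probing re-runs on every filesystem event in the plugin directory, the same three lines recur in bursts until the directory gains a working driver or is removed.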
May 17 00:09:36.618768 kubelet[2663]: E0517 00:09:36.618379 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:09:36.618768 kubelet[2663]: W0517 00:09:36.618404 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:09:36.618768 kubelet[2663]: E0517 00:09:36.618426 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:09:36.728466 containerd[1478]: time="2025-05-17T00:09:36.727915934Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-qvzs6,Uid:d794f1db-00ed-4dd4-8f9d-aa17550c9e80,Namespace:calico-system,Attempt:0,} returns sandbox id \"fc06f5e5ecf04bc3cf9988ddda034841d7630ba2df6f32e74ad1b676dfbe1305\"" May 17 00:09:36.732697 containerd[1478]: time="2025-05-17T00:09:36.732562566Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\"" May 17 00:09:36.734417 containerd[1478]: time="2025-05-17T00:09:36.733951707Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-79c79c5d94-cdcvv,Uid:15ab2e96-ec8e-45d0-a9fa-fe53fc76a6ab,Namespace:calico-system,Attempt:0,} returns sandbox id \"fb20c80fdbb0cd62b327a191034ec7ac1f760ca0acd9ffe5cfa16d5f66bf4447\"" May 17 00:09:38.119415 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1110140794.mount: Deactivated successfully. May 17 00:09:38.249068 containerd[1478]: time="2025-05-17T00:09:38.248982705Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:09:38.250133 containerd[1478]: time="2025-05-17T00:09:38.250082323Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0: active requests=0, bytes read=5633683" May 17 00:09:38.251212 containerd[1478]: time="2025-05-17T00:09:38.251146339Z" level=info msg="ImageCreate event name:\"sha256:080eaf4c238c85534b61055c31b109c96ce3d20075391e58988541a442c7c701\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:09:38.255336 containerd[1478]: time="2025-05-17T00:09:38.253958384Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:ce76dd87f11d3fd0054c35ad2e0e9f833748d007f77a9bfe859d0ddcb66fcb2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:09:38.255336 containerd[1478]: time="2025-05-17T00:09:38.254927199Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" with image id \"sha256:080eaf4c238c85534b61055c31b109c96ce3d20075391e58988541a442c7c701\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:ce76dd87f11d3fd0054c35ad2e0e9f833748d007f77a9bfe859d0ddcb66fcb2c\", size \"5633505\" in 1.522321033s" May 17 00:09:38.255336 containerd[1478]: time="2025-05-17T00:09:38.254959599Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" returns image reference \"sha256:080eaf4c238c85534b61055c31b109c96ce3d20075391e58988541a442c7c701\"" May 17 00:09:38.256413 containerd[1478]: time="2025-05-17T00:09:38.256372142Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.0\"" May 17 00:09:38.260758 containerd[1478]: 
time="2025-05-17T00:09:38.260709050Z" level=info msg="CreateContainer within sandbox \"fc06f5e5ecf04bc3cf9988ddda034841d7630ba2df6f32e74ad1b676dfbe1305\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 17 00:09:38.281413 containerd[1478]: time="2025-05-17T00:09:38.281350055Z" level=info msg="CreateContainer within sandbox \"fc06f5e5ecf04bc3cf9988ddda034841d7630ba2df6f32e74ad1b676dfbe1305\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"adac1628060abaa313daa3f5b2b4f21aecd1071e488e87e8dbe0ce119e3034e8\"" May 17 00:09:38.283184 containerd[1478]: time="2025-05-17T00:09:38.282996361Z" level=info msg="StartContainer for \"adac1628060abaa313daa3f5b2b4f21aecd1071e488e87e8dbe0ce119e3034e8\"" May 17 00:09:38.322907 systemd[1]: Started cri-containerd-adac1628060abaa313daa3f5b2b4f21aecd1071e488e87e8dbe0ce119e3034e8.scope - libcontainer container adac1628060abaa313daa3f5b2b4f21aecd1071e488e87e8dbe0ce119e3034e8. May 17 00:09:38.360537 containerd[1478]: time="2025-05-17T00:09:38.360361178Z" level=info msg="StartContainer for \"adac1628060abaa313daa3f5b2b4f21aecd1071e488e87e8dbe0ce119e3034e8\" returns successfully" May 17 00:09:38.376946 systemd[1]: cri-containerd-adac1628060abaa313daa3f5b2b4f21aecd1071e488e87e8dbe0ce119e3034e8.scope: Deactivated successfully. May 17 00:09:38.410427 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-adac1628060abaa313daa3f5b2b4f21aecd1071e488e87e8dbe0ce119e3034e8-rootfs.mount: Deactivated successfully. May 17 00:09:38.479488 containerd[1478]: time="2025-05-17T00:09:38.479097767Z" level=info msg="shim disconnected" id=adac1628060abaa313daa3f5b2b4f21aecd1071e488e87e8dbe0ce119e3034e8 namespace=k8s.io May 17 00:09:38.479488 containerd[1478]: time="2025-05-17T00:09:38.479187768Z" level=warning msg="cleaning up after shim disconnected" id=adac1628060abaa313daa3f5b2b4f21aecd1071e488e87e8dbe0ce119e3034e8 namespace=k8s.io May 17 00:09:38.479488 containerd[1478]: time="2025-05-17T00:09:38.479221408Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 17 00:09:38.519409 kubelet[2663]: E0517 00:09:38.519275 2663 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-g9lf2" podUID="bf7f5509-6eae-41c9-a82c-194eb8fdf825" May 17 00:09:40.061593 containerd[1478]: time="2025-05-17T00:09:40.061524211Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:09:40.062872 containerd[1478]: time="2025-05-17T00:09:40.062830432Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.0: active requests=0, bytes read=31650890" May 17 00:09:40.064042 containerd[1478]: time="2025-05-17T00:09:40.063964210Z" level=info msg="ImageCreate event name:\"sha256:05ca98cdd7b8267a0dc5550048c0a195c8d42f85d92f090a669493485d8a6beb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:09:40.066862 containerd[1478]: time="2025-05-17T00:09:40.066809175Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d282f6c773c4631b9dc8379eb093c54ca34c7728d55d6509cb45da5e1f5baf8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:09:40.067532 containerd[1478]: time="2025-05-17T00:09:40.067376424Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.0\" with image 
id \"sha256:05ca98cdd7b8267a0dc5550048c0a195c8d42f85d92f090a669493485d8a6beb\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d282f6c773c4631b9dc8379eb093c54ca34c7728d55d6509cb45da5e1f5baf8f\", size \"33020123\" in 1.810963962s" May 17 00:09:40.067532 containerd[1478]: time="2025-05-17T00:09:40.067410385Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.0\" returns image reference \"sha256:05ca98cdd7b8267a0dc5550048c0a195c8d42f85d92f090a669493485d8a6beb\"" May 17 00:09:40.069428 containerd[1478]: time="2025-05-17T00:09:40.068950369Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.0\"" May 17 00:09:40.084912 containerd[1478]: time="2025-05-17T00:09:40.084830863Z" level=info msg="CreateContainer within sandbox \"fb20c80fdbb0cd62b327a191034ec7ac1f760ca0acd9ffe5cfa16d5f66bf4447\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 17 00:09:40.107684 containerd[1478]: time="2025-05-17T00:09:40.107613507Z" level=info msg="CreateContainer within sandbox \"fb20c80fdbb0cd62b327a191034ec7ac1f760ca0acd9ffe5cfa16d5f66bf4447\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"46598223a3d0628161753a7f141afa9764d8393555735b52995ecb3b0d458535\"" May 17 00:09:40.108924 containerd[1478]: time="2025-05-17T00:09:40.108822727Z" level=info msg="StartContainer for \"46598223a3d0628161753a7f141afa9764d8393555735b52995ecb3b0d458535\"" May 17 00:09:40.147749 systemd[1]: Started cri-containerd-46598223a3d0628161753a7f141afa9764d8393555735b52995ecb3b0d458535.scope - libcontainer container 46598223a3d0628161753a7f141afa9764d8393555735b52995ecb3b0d458535. May 17 00:09:40.193712 containerd[1478]: time="2025-05-17T00:09:40.192219380Z" level=info msg="StartContainer for \"46598223a3d0628161753a7f141afa9764d8393555735b52995ecb3b0d458535\" returns successfully" May 17 00:09:40.519381 kubelet[2663]: E0517 00:09:40.519051 2663 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-g9lf2" podUID="bf7f5509-6eae-41c9-a82c-194eb8fdf825" May 17 00:09:41.635687 kubelet[2663]: I0517 00:09:41.635619 2663 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 00:09:42.228748 kubelet[2663]: I0517 00:09:42.228671 2663 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-79c79c5d94-cdcvv" podStartSLOduration=2.898724207 podStartE2EDuration="6.22863611s" podCreationTimestamp="2025-05-17 00:09:36 +0000 UTC" firstStartedPulling="2025-05-17 00:09:36.738884384 +0000 UTC m=+21.328727037" lastFinishedPulling="2025-05-17 00:09:40.068796287 +0000 UTC m=+24.658638940" observedRunningTime="2025-05-17 00:09:40.655177341 +0000 UTC m=+25.245019994" watchObservedRunningTime="2025-05-17 00:09:42.22863611 +0000 UTC m=+26.818478763" May 17 00:09:42.519412 kubelet[2663]: E0517 00:09:42.519303 2663 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-g9lf2" podUID="bf7f5509-6eae-41c9-a82c-194eb8fdf825" May 17 00:09:42.894856 containerd[1478]: time="2025-05-17T00:09:42.894389510Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/cni:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:09:42.895809 containerd[1478]: time="2025-05-17T00:09:42.895750732Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.0: active requests=0, bytes read=65748976" May 17 00:09:42.896817 containerd[1478]: time="2025-05-17T00:09:42.896757148Z" level=info msg="ImageCreate event name:\"sha256:0a1b3d5412de2974bc057a3463a132f935c307bc06d5b990ad54031e1f5a351d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:09:42.899490 containerd[1478]: time="2025-05-17T00:09:42.899426352Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:3dd06656abdc03fbd51782d5f6fe4d70e6825a1c0c5bce2a165bbd2ff9e0f7df\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:09:42.900774 containerd[1478]: time="2025-05-17T00:09:42.900615171Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.0\" with image id \"sha256:0a1b3d5412de2974bc057a3463a132f935c307bc06d5b990ad54031e1f5a351d\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:3dd06656abdc03fbd51782d5f6fe4d70e6825a1c0c5bce2a165bbd2ff9e0f7df\", size \"67118217\" in 2.831630241s" May 17 00:09:42.900774 containerd[1478]: time="2025-05-17T00:09:42.900655811Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.0\" returns image reference \"sha256:0a1b3d5412de2974bc057a3463a132f935c307bc06d5b990ad54031e1f5a351d\"" May 17 00:09:42.906431 containerd[1478]: time="2025-05-17T00:09:42.906161541Z" level=info msg="CreateContainer within sandbox \"fc06f5e5ecf04bc3cf9988ddda034841d7630ba2df6f32e74ad1b676dfbe1305\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 17 00:09:42.925111 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4049377460.mount: Deactivated successfully. May 17 00:09:42.925796 containerd[1478]: time="2025-05-17T00:09:42.925267411Z" level=info msg="CreateContainer within sandbox \"fc06f5e5ecf04bc3cf9988ddda034841d7630ba2df6f32e74ad1b676dfbe1305\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"238f18b72d27847205a51884938f06a49c5d643ae80575781195337d06c885b3\"" May 17 00:09:42.928170 containerd[1478]: time="2025-05-17T00:09:42.928131537Z" level=info msg="StartContainer for \"238f18b72d27847205a51884938f06a49c5d643ae80575781195337d06c885b3\"" May 17 00:09:42.965647 systemd[1]: Started cri-containerd-238f18b72d27847205a51884938f06a49c5d643ae80575781195337d06c885b3.scope - libcontainer container 238f18b72d27847205a51884938f06a49c5d643ae80575781195337d06c885b3. May 17 00:09:42.996695 containerd[1478]: time="2025-05-17T00:09:42.996628768Z" level=info msg="StartContainer for \"238f18b72d27847205a51884938f06a49c5d643ae80575781195337d06c885b3\" returns successfully" May 17 00:09:43.463077 containerd[1478]: time="2025-05-17T00:09:43.463011666Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 17 00:09:43.466722 systemd[1]: cri-containerd-238f18b72d27847205a51884938f06a49c5d643ae80575781195337d06c885b3.scope: Deactivated successfully. May 17 00:09:43.488355 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-238f18b72d27847205a51884938f06a49c5d643ae80575781195337d06c885b3-rootfs.mount: Deactivated successfully. 
May 17 00:09:43.498734 kubelet[2663]: I0517 00:09:43.495070 2663 kubelet_node_status.go:501] "Fast updating node status as it just became ready" May 17 00:09:43.613661 systemd[1]: Created slice kubepods-burstable-podb9015c7c_53fd_4356_9465_8e7b4b00eae6.slice - libcontainer container kubepods-burstable-podb9015c7c_53fd_4356_9465_8e7b4b00eae6.slice. May 17 00:09:43.617342 containerd[1478]: time="2025-05-17T00:09:43.617172784Z" level=info msg="shim disconnected" id=238f18b72d27847205a51884938f06a49c5d643ae80575781195337d06c885b3 namespace=k8s.io May 17 00:09:43.617342 containerd[1478]: time="2025-05-17T00:09:43.617235065Z" level=warning msg="cleaning up after shim disconnected" id=238f18b72d27847205a51884938f06a49c5d643ae80575781195337d06c885b3 namespace=k8s.io May 17 00:09:43.617342 containerd[1478]: time="2025-05-17T00:09:43.617244066Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 17 00:09:43.629431 kubelet[2663]: I0517 00:09:43.629203 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b9015c7c-53fd-4356-9465-8e7b4b00eae6-config-volume\") pod \"coredns-674b8bbfcf-5rmrz\" (UID: \"b9015c7c-53fd-4356-9465-8e7b4b00eae6\") " pod="kube-system/coredns-674b8bbfcf-5rmrz" May 17 00:09:43.629431 kubelet[2663]: I0517 00:09:43.629241 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4kmw\" (UniqueName: \"kubernetes.io/projected/b9015c7c-53fd-4356-9465-8e7b4b00eae6-kube-api-access-x4kmw\") pod \"coredns-674b8bbfcf-5rmrz\" (UID: \"b9015c7c-53fd-4356-9465-8e7b4b00eae6\") " pod="kube-system/coredns-674b8bbfcf-5rmrz" May 17 00:09:43.633724 systemd[1]: Created slice kubepods-besteffort-pode26e8c4a_c5e6_41d8_924e_19ce27b5f4cd.slice - libcontainer container kubepods-besteffort-pode26e8c4a_c5e6_41d8_924e_19ce27b5f4cd.slice. May 17 00:09:43.643229 systemd[1]: Created slice kubepods-besteffort-podcc5a0d8a_21f4_4d9d_bd13_f95a58141562.slice - libcontainer container kubepods-besteffort-podcc5a0d8a_21f4_4d9d_bd13_f95a58141562.slice. May 17 00:09:43.657920 systemd[1]: Created slice kubepods-besteffort-pod9be07d70_67ab_4460_b534_d03025d5bc29.slice - libcontainer container kubepods-besteffort-pod9be07d70_67ab_4460_b534_d03025d5bc29.slice. May 17 00:09:43.675190 systemd[1]: Created slice kubepods-burstable-pod6123d484_fe1b_4bcd_a0b1_5639f75795e9.slice - libcontainer container kubepods-burstable-pod6123d484_fe1b_4bcd_a0b1_5639f75795e9.slice. May 17 00:09:43.684815 systemd[1]: Created slice kubepods-besteffort-pod5021f4d9_7c3c_411f_be83_7378c09cd50b.slice - libcontainer container kubepods-besteffort-pod5021f4d9_7c3c_411f_be83_7378c09cd50b.slice. May 17 00:09:43.695325 systemd[1]: Created slice kubepods-besteffort-pod662625c5_921c_406a_9ef1_d2e70e33e339.slice - libcontainer container kubepods-besteffort-pod662625c5_921c_406a_9ef1_d2e70e33e339.slice. 
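The two recurring network errors in this stretch look like the same bootstrap gap seen from two sides: containerd cannot load any CNI config yet ("no network config found in /etc/cni/net.d: cni plugin not initialized"), since install-cni has only just written /etc/cni/net.d/calico-kubeconfig, and the sandbox setups that follow fail while the plugin cannot stat /var/lib/calico/nodename, a file the log says exists only once the calico/node container is running and has mounted /var/lib/calico/. A sketch of those two preconditions as plain filesystem checks; the paths and messages are the ones named in the log, but the checks are illustrative, not containerd's or Calico's actual code:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// containerd side: is there any CNI network config to load yet?
	confs, _ := filepath.Glob("/etc/cni/net.d/*")
	if len(confs) == 0 {
		fmt.Println("cni plugin not initialized: no network config found in /etc/cni/net.d")
	}

	// Calico plugin side: has calico/node written its nodename file?
	if _, err := os.Stat("/var/lib/calico/nodename"); err != nil {
		fmt.Println("check that the calico/node container is running and has mounted /var/lib/calico/:", err)
	}
}

On this reading, the sandbox failures below are transient: once calico-node finishes starting and both files exist, the kubelet's retries of the same RunPodSandbox calls should begin to succeed.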
May 17 00:09:43.732614 kubelet[2663]: I0517 00:09:43.731515 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/cc5a0d8a-21f4-4d9d-bd13-f95a58141562-calico-apiserver-certs\") pod \"calico-apiserver-654599565b-85g84\" (UID: \"cc5a0d8a-21f4-4d9d-bd13-f95a58141562\") " pod="calico-apiserver/calico-apiserver-654599565b-85g84" May 17 00:09:43.736042 kubelet[2663]: I0517 00:09:43.731691 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/662625c5-921c-406a-9ef1-d2e70e33e339-goldmane-key-pair\") pod \"goldmane-78d55f7ddc-gp74j\" (UID: \"662625c5-921c-406a-9ef1-d2e70e33e339\") " pod="calico-system/goldmane-78d55f7ddc-gp74j" May 17 00:09:43.736042 kubelet[2663]: I0517 00:09:43.733021 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhfxv\" (UniqueName: \"kubernetes.io/projected/662625c5-921c-406a-9ef1-d2e70e33e339-kube-api-access-jhfxv\") pod \"goldmane-78d55f7ddc-gp74j\" (UID: \"662625c5-921c-406a-9ef1-d2e70e33e339\") " pod="calico-system/goldmane-78d55f7ddc-gp74j" May 17 00:09:43.736042 kubelet[2663]: I0517 00:09:43.733131 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6123d484-fe1b-4bcd-a0b1-5639f75795e9-config-volume\") pod \"coredns-674b8bbfcf-2pp2r\" (UID: \"6123d484-fe1b-4bcd-a0b1-5639f75795e9\") " pod="kube-system/coredns-674b8bbfcf-2pp2r" May 17 00:09:43.736042 kubelet[2663]: I0517 00:09:43.733165 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kb7bh\" (UniqueName: \"kubernetes.io/projected/6123d484-fe1b-4bcd-a0b1-5639f75795e9-kube-api-access-kb7bh\") pod \"coredns-674b8bbfcf-2pp2r\" (UID: \"6123d484-fe1b-4bcd-a0b1-5639f75795e9\") " pod="kube-system/coredns-674b8bbfcf-2pp2r" May 17 00:09:43.736042 kubelet[2663]: I0517 00:09:43.733195 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jf8rm\" (UniqueName: \"kubernetes.io/projected/cc5a0d8a-21f4-4d9d-bd13-f95a58141562-kube-api-access-jf8rm\") pod \"calico-apiserver-654599565b-85g84\" (UID: \"cc5a0d8a-21f4-4d9d-bd13-f95a58141562\") " pod="calico-apiserver/calico-apiserver-654599565b-85g84" May 17 00:09:43.736276 kubelet[2663]: I0517 00:09:43.733248 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9be07d70-67ab-4460-b534-d03025d5bc29-whisker-ca-bundle\") pod \"whisker-b858dbd84-d62lt\" (UID: \"9be07d70-67ab-4460-b534-d03025d5bc29\") " pod="calico-system/whisker-b858dbd84-d62lt" May 17 00:09:43.736276 kubelet[2663]: I0517 00:09:43.733280 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmbfg\" (UniqueName: \"kubernetes.io/projected/5021f4d9-7c3c-411f-be83-7378c09cd50b-kube-api-access-lmbfg\") pod \"calico-apiserver-654599565b-lpjvt\" (UID: \"5021f4d9-7c3c-411f-be83-7378c09cd50b\") " pod="calico-apiserver/calico-apiserver-654599565b-lpjvt" May 17 00:09:43.736276 kubelet[2663]: I0517 00:09:43.733327 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/e26e8c4a-c5e6-41d8-924e-19ce27b5f4cd-tigera-ca-bundle\") pod \"calico-kube-controllers-645f967f78-7knkm\" (UID: \"e26e8c4a-c5e6-41d8-924e-19ce27b5f4cd\") " pod="calico-system/calico-kube-controllers-645f967f78-7knkm" May 17 00:09:43.736276 kubelet[2663]: I0517 00:09:43.733357 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9be07d70-67ab-4460-b534-d03025d5bc29-whisker-backend-key-pair\") pod \"whisker-b858dbd84-d62lt\" (UID: \"9be07d70-67ab-4460-b534-d03025d5bc29\") " pod="calico-system/whisker-b858dbd84-d62lt" May 17 00:09:43.736276 kubelet[2663]: I0517 00:09:43.733389 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rjhb\" (UniqueName: \"kubernetes.io/projected/9be07d70-67ab-4460-b534-d03025d5bc29-kube-api-access-2rjhb\") pod \"whisker-b858dbd84-d62lt\" (UID: \"9be07d70-67ab-4460-b534-d03025d5bc29\") " pod="calico-system/whisker-b858dbd84-d62lt" May 17 00:09:43.737479 kubelet[2663]: I0517 00:09:43.733422 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/5021f4d9-7c3c-411f-be83-7378c09cd50b-calico-apiserver-certs\") pod \"calico-apiserver-654599565b-lpjvt\" (UID: \"5021f4d9-7c3c-411f-be83-7378c09cd50b\") " pod="calico-apiserver/calico-apiserver-654599565b-lpjvt" May 17 00:09:43.737479 kubelet[2663]: I0517 00:09:43.733480 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/662625c5-921c-406a-9ef1-d2e70e33e339-config\") pod \"goldmane-78d55f7ddc-gp74j\" (UID: \"662625c5-921c-406a-9ef1-d2e70e33e339\") " pod="calico-system/goldmane-78d55f7ddc-gp74j" May 17 00:09:43.737479 kubelet[2663]: I0517 00:09:43.733512 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/662625c5-921c-406a-9ef1-d2e70e33e339-goldmane-ca-bundle\") pod \"goldmane-78d55f7ddc-gp74j\" (UID: \"662625c5-921c-406a-9ef1-d2e70e33e339\") " pod="calico-system/goldmane-78d55f7ddc-gp74j" May 17 00:09:43.737479 kubelet[2663]: I0517 00:09:43.733631 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nj4w\" (UniqueName: \"kubernetes.io/projected/e26e8c4a-c5e6-41d8-924e-19ce27b5f4cd-kube-api-access-6nj4w\") pod \"calico-kube-controllers-645f967f78-7knkm\" (UID: \"e26e8c4a-c5e6-41d8-924e-19ce27b5f4cd\") " pod="calico-system/calico-kube-controllers-645f967f78-7knkm" May 17 00:09:43.923743 containerd[1478]: time="2025-05-17T00:09:43.923695351Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-5rmrz,Uid:b9015c7c-53fd-4356-9465-8e7b4b00eae6,Namespace:kube-system,Attempt:0,}" May 17 00:09:43.938365 containerd[1478]: time="2025-05-17T00:09:43.937840742Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-645f967f78-7knkm,Uid:e26e8c4a-c5e6-41d8-924e-19ce27b5f4cd,Namespace:calico-system,Attempt:0,}" May 17 00:09:43.952838 containerd[1478]: time="2025-05-17T00:09:43.952372500Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-654599565b-85g84,Uid:cc5a0d8a-21f4-4d9d-bd13-f95a58141562,Namespace:calico-apiserver,Attempt:0,}" May 17 00:09:43.967626 containerd[1478]: 
time="2025-05-17T00:09:43.967585988Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-b858dbd84-d62lt,Uid:9be07d70-67ab-4460-b534-d03025d5bc29,Namespace:calico-system,Attempt:0,}" May 17 00:09:43.982009 containerd[1478]: time="2025-05-17T00:09:43.981963903Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-2pp2r,Uid:6123d484-fe1b-4bcd-a0b1-5639f75795e9,Namespace:kube-system,Attempt:0,}" May 17 00:09:43.992336 containerd[1478]: time="2025-05-17T00:09:43.991602901Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-654599565b-lpjvt,Uid:5021f4d9-7c3c-411f-be83-7378c09cd50b,Namespace:calico-apiserver,Attempt:0,}" May 17 00:09:44.002913 containerd[1478]: time="2025-05-17T00:09:44.002540559Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-gp74j,Uid:662625c5-921c-406a-9ef1-d2e70e33e339,Namespace:calico-system,Attempt:0,}" May 17 00:09:44.166035 containerd[1478]: time="2025-05-17T00:09:44.165981927Z" level=error msg="Failed to destroy network for sandbox \"acccd57f2ac1bc78c3018663394757376b3ac7754d0a8c5a1d496d687f3e7633\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:09:44.167415 containerd[1478]: time="2025-05-17T00:09:44.166567697Z" level=error msg="encountered an error cleaning up failed sandbox \"acccd57f2ac1bc78c3018663394757376b3ac7754d0a8c5a1d496d687f3e7633\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:09:44.167415 containerd[1478]: time="2025-05-17T00:09:44.166633978Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-5rmrz,Uid:b9015c7c-53fd-4356-9465-8e7b4b00eae6,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"acccd57f2ac1bc78c3018663394757376b3ac7754d0a8c5a1d496d687f3e7633\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:09:44.168307 kubelet[2663]: E0517 00:09:44.166950 2663 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"acccd57f2ac1bc78c3018663394757376b3ac7754d0a8c5a1d496d687f3e7633\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:09:44.168307 kubelet[2663]: E0517 00:09:44.167016 2663 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"acccd57f2ac1bc78c3018663394757376b3ac7754d0a8c5a1d496d687f3e7633\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-5rmrz" May 17 00:09:44.168307 kubelet[2663]: E0517 00:09:44.167052 2663 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"acccd57f2ac1bc78c3018663394757376b3ac7754d0a8c5a1d496d687f3e7633\": plugin type=\"calico\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-5rmrz" May 17 00:09:44.168444 kubelet[2663]: E0517 00:09:44.167098 2663 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-5rmrz_kube-system(b9015c7c-53fd-4356-9465-8e7b4b00eae6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-5rmrz_kube-system(b9015c7c-53fd-4356-9465-8e7b4b00eae6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"acccd57f2ac1bc78c3018663394757376b3ac7754d0a8c5a1d496d687f3e7633\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-5rmrz" podUID="b9015c7c-53fd-4356-9465-8e7b4b00eae6" May 17 00:09:44.179734 containerd[1478]: time="2025-05-17T00:09:44.179655272Z" level=error msg="Failed to destroy network for sandbox \"80dcc9479798289492b29345865fd69c904dec7b796e1b7f06108c5e7b5a8227\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:09:44.192050 containerd[1478]: time="2025-05-17T00:09:44.191949394Z" level=error msg="Failed to destroy network for sandbox \"6d99f934cb209fcc4f03e983358bb84f11a808a72b609ad33f4809382dbcf3be\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:09:44.193605 containerd[1478]: time="2025-05-17T00:09:44.192550244Z" level=error msg="encountered an error cleaning up failed sandbox \"80dcc9479798289492b29345865fd69c904dec7b796e1b7f06108c5e7b5a8227\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:09:44.193605 containerd[1478]: time="2025-05-17T00:09:44.192646725Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-645f967f78-7knkm,Uid:e26e8c4a-c5e6-41d8-924e-19ce27b5f4cd,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"80dcc9479798289492b29345865fd69c904dec7b796e1b7f06108c5e7b5a8227\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:09:44.193605 containerd[1478]: time="2025-05-17T00:09:44.192983411Z" level=error msg="encountered an error cleaning up failed sandbox \"6d99f934cb209fcc4f03e983358bb84f11a808a72b609ad33f4809382dbcf3be\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:09:44.193605 containerd[1478]: time="2025-05-17T00:09:44.193023772Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-654599565b-85g84,Uid:cc5a0d8a-21f4-4d9d-bd13-f95a58141562,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"6d99f934cb209fcc4f03e983358bb84f11a808a72b609ad33f4809382dbcf3be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:09:44.193866 kubelet[2663]: E0517 00:09:44.193250 2663 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d99f934cb209fcc4f03e983358bb84f11a808a72b609ad33f4809382dbcf3be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:09:44.193866 kubelet[2663]: E0517 00:09:44.193317 2663 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d99f934cb209fcc4f03e983358bb84f11a808a72b609ad33f4809382dbcf3be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-654599565b-85g84" May 17 00:09:44.193866 kubelet[2663]: E0517 00:09:44.193339 2663 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d99f934cb209fcc4f03e983358bb84f11a808a72b609ad33f4809382dbcf3be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-654599565b-85g84" May 17 00:09:44.193955 kubelet[2663]: E0517 00:09:44.193387 2663 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-654599565b-85g84_calico-apiserver(cc5a0d8a-21f4-4d9d-bd13-f95a58141562)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-654599565b-85g84_calico-apiserver(cc5a0d8a-21f4-4d9d-bd13-f95a58141562)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6d99f934cb209fcc4f03e983358bb84f11a808a72b609ad33f4809382dbcf3be\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-654599565b-85g84" podUID="cc5a0d8a-21f4-4d9d-bd13-f95a58141562" May 17 00:09:44.193955 kubelet[2663]: E0517 00:09:44.193483 2663 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"80dcc9479798289492b29345865fd69c904dec7b796e1b7f06108c5e7b5a8227\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:09:44.193955 kubelet[2663]: E0517 00:09:44.193514 2663 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"80dcc9479798289492b29345865fd69c904dec7b796e1b7f06108c5e7b5a8227\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-645f967f78-7knkm" May 17 00:09:44.194038 kubelet[2663]: E0517 00:09:44.193535 2663 
kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"80dcc9479798289492b29345865fd69c904dec7b796e1b7f06108c5e7b5a8227\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-645f967f78-7knkm" May 17 00:09:44.194038 kubelet[2663]: E0517 00:09:44.193564 2663 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-645f967f78-7knkm_calico-system(e26e8c4a-c5e6-41d8-924e-19ce27b5f4cd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-645f967f78-7knkm_calico-system(e26e8c4a-c5e6-41d8-924e-19ce27b5f4cd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"80dcc9479798289492b29345865fd69c904dec7b796e1b7f06108c5e7b5a8227\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-645f967f78-7knkm" podUID="e26e8c4a-c5e6-41d8-924e-19ce27b5f4cd" May 17 00:09:44.216334 containerd[1478]: time="2025-05-17T00:09:44.216277354Z" level=error msg="Failed to destroy network for sandbox \"18c13d8711afd4b40b972bb45bf741a1e264938dc84ce04796c814c09bc4320a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:09:44.217497 containerd[1478]: time="2025-05-17T00:09:44.217266290Z" level=error msg="encountered an error cleaning up failed sandbox \"18c13d8711afd4b40b972bb45bf741a1e264938dc84ce04796c814c09bc4320a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:09:44.217889 containerd[1478]: time="2025-05-17T00:09:44.217742418Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-654599565b-lpjvt,Uid:5021f4d9-7c3c-411f-be83-7378c09cd50b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"18c13d8711afd4b40b972bb45bf741a1e264938dc84ce04796c814c09bc4320a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:09:44.218647 kubelet[2663]: E0517 00:09:44.218524 2663 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"18c13d8711afd4b40b972bb45bf741a1e264938dc84ce04796c814c09bc4320a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:09:44.218647 kubelet[2663]: E0517 00:09:44.218609 2663 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"18c13d8711afd4b40b972bb45bf741a1e264938dc84ce04796c814c09bc4320a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-654599565b-lpjvt" May 17 00:09:44.218952 kubelet[2663]: E0517 00:09:44.218815 2663 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"18c13d8711afd4b40b972bb45bf741a1e264938dc84ce04796c814c09bc4320a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-654599565b-lpjvt" May 17 00:09:44.218952 kubelet[2663]: E0517 00:09:44.218917 2663 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-654599565b-lpjvt_calico-apiserver(5021f4d9-7c3c-411f-be83-7378c09cd50b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-654599565b-lpjvt_calico-apiserver(5021f4d9-7c3c-411f-be83-7378c09cd50b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"18c13d8711afd4b40b972bb45bf741a1e264938dc84ce04796c814c09bc4320a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-654599565b-lpjvt" podUID="5021f4d9-7c3c-411f-be83-7378c09cd50b" May 17 00:09:44.242040 containerd[1478]: time="2025-05-17T00:09:44.241886455Z" level=error msg="Failed to destroy network for sandbox \"b76f8d3124745aa4c30e56df04e5023358d57e8d0bd995d470885425a0a5119f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:09:44.242296 containerd[1478]: time="2025-05-17T00:09:44.242176820Z" level=error msg="Failed to destroy network for sandbox \"64b5464b82b71cf5c7889a3d48c6d134a2961189460abaf414c6c3ef08da76d4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:09:44.245971 containerd[1478]: time="2025-05-17T00:09:44.242973993Z" level=error msg="encountered an error cleaning up failed sandbox \"64b5464b82b71cf5c7889a3d48c6d134a2961189460abaf414c6c3ef08da76d4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:09:44.245971 containerd[1478]: time="2025-05-17T00:09:44.243041554Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-2pp2r,Uid:6123d484-fe1b-4bcd-a0b1-5639f75795e9,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"64b5464b82b71cf5c7889a3d48c6d134a2961189460abaf414c6c3ef08da76d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:09:44.245971 containerd[1478]: time="2025-05-17T00:09:44.245661917Z" level=error msg="encountered an error cleaning up failed sandbox \"b76f8d3124745aa4c30e56df04e5023358d57e8d0bd995d470885425a0a5119f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:09:44.245971 containerd[1478]: time="2025-05-17T00:09:44.245770719Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-b858dbd84-d62lt,Uid:9be07d70-67ab-4460-b534-d03025d5bc29,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b76f8d3124745aa4c30e56df04e5023358d57e8d0bd995d470885425a0a5119f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:09:44.246158 kubelet[2663]: E0517 00:09:44.243264 2663 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64b5464b82b71cf5c7889a3d48c6d134a2961189460abaf414c6c3ef08da76d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:09:44.246158 kubelet[2663]: E0517 00:09:44.243331 2663 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64b5464b82b71cf5c7889a3d48c6d134a2961189460abaf414c6c3ef08da76d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-2pp2r" May 17 00:09:44.246158 kubelet[2663]: E0517 00:09:44.243352 2663 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64b5464b82b71cf5c7889a3d48c6d134a2961189460abaf414c6c3ef08da76d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-2pp2r" May 17 00:09:44.246492 kubelet[2663]: E0517 00:09:44.243398 2663 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-2pp2r_kube-system(6123d484-fe1b-4bcd-a0b1-5639f75795e9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-2pp2r_kube-system(6123d484-fe1b-4bcd-a0b1-5639f75795e9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"64b5464b82b71cf5c7889a3d48c6d134a2961189460abaf414c6c3ef08da76d4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-2pp2r" podUID="6123d484-fe1b-4bcd-a0b1-5639f75795e9" May 17 00:09:44.246586 kubelet[2663]: E0517 00:09:44.246503 2663 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b76f8d3124745aa4c30e56df04e5023358d57e8d0bd995d470885425a0a5119f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:09:44.246618 kubelet[2663]: E0517 00:09:44.246581 2663 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"b76f8d3124745aa4c30e56df04e5023358d57e8d0bd995d470885425a0a5119f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-b858dbd84-d62lt" May 17 00:09:44.246618 kubelet[2663]: E0517 00:09:44.246609 2663 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b76f8d3124745aa4c30e56df04e5023358d57e8d0bd995d470885425a0a5119f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-b858dbd84-d62lt" May 17 00:09:44.247342 kubelet[2663]: E0517 00:09:44.246836 2663 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-b858dbd84-d62lt_calico-system(9be07d70-67ab-4460-b534-d03025d5bc29)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-b858dbd84-d62lt_calico-system(9be07d70-67ab-4460-b534-d03025d5bc29)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b76f8d3124745aa4c30e56df04e5023358d57e8d0bd995d470885425a0a5119f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-b858dbd84-d62lt" podUID="9be07d70-67ab-4460-b534-d03025d5bc29" May 17 00:09:44.260302 containerd[1478]: time="2025-05-17T00:09:44.260229717Z" level=error msg="Failed to destroy network for sandbox \"48967515e7ac1ae937a4c1d9fb98f9118c2ae302024bc4824d15d9e91f8bfa36\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:09:44.261098 containerd[1478]: time="2025-05-17T00:09:44.261026970Z" level=error msg="encountered an error cleaning up failed sandbox \"48967515e7ac1ae937a4c1d9fb98f9118c2ae302024bc4824d15d9e91f8bfa36\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:09:44.261293 containerd[1478]: time="2025-05-17T00:09:44.261269094Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-gp74j,Uid:662625c5-921c-406a-9ef1-d2e70e33e339,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"48967515e7ac1ae937a4c1d9fb98f9118c2ae302024bc4824d15d9e91f8bfa36\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:09:44.261785 kubelet[2663]: E0517 00:09:44.261706 2663 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"48967515e7ac1ae937a4c1d9fb98f9118c2ae302024bc4824d15d9e91f8bfa36\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:09:44.261939 kubelet[2663]: E0517 00:09:44.261805 2663 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code 
= Unknown desc = failed to setup network for sandbox \"48967515e7ac1ae937a4c1d9fb98f9118c2ae302024bc4824d15d9e91f8bfa36\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-78d55f7ddc-gp74j" May 17 00:09:44.261939 kubelet[2663]: E0517 00:09:44.261845 2663 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"48967515e7ac1ae937a4c1d9fb98f9118c2ae302024bc4824d15d9e91f8bfa36\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-78d55f7ddc-gp74j" May 17 00:09:44.262163 kubelet[2663]: E0517 00:09:44.261927 2663 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-78d55f7ddc-gp74j_calico-system(662625c5-921c-406a-9ef1-d2e70e33e339)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-78d55f7ddc-gp74j_calico-system(662625c5-921c-406a-9ef1-d2e70e33e339)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"48967515e7ac1ae937a4c1d9fb98f9118c2ae302024bc4824d15d9e91f8bfa36\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-78d55f7ddc-gp74j" podUID="662625c5-921c-406a-9ef1-d2e70e33e339" May 17 00:09:44.526241 systemd[1]: Created slice kubepods-besteffort-podbf7f5509_6eae_41c9_a82c_194eb8fdf825.slice - libcontainer container kubepods-besteffort-podbf7f5509_6eae_41c9_a82c_194eb8fdf825.slice. 
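Every sandbox failure above has the same root cause: the Calico CNI plugin cannot read /var/lib/calico/nodename, the file the calico/node container writes once it has started, so every CNI add and delete on this node fails until that pod is running. A minimal Go sketch of the check these errors describe (an editorial illustration assuming only the path and remediation text quoted in the log, not Calico's actual source):

package main

import (
	"fmt"
	"os"
	"strings"
)

// Path quoted in the errors above: calico/node writes the node's name
// here on startup, and the CNI plugin reads it on every add/delete.
const nodenameFile = "/var/lib/calico/nodename"

// readNodename mirrors the failing check: while the file is absent,
// every RunPodSandbox / StopPodSandbox call surfaces the same error.
func readNodename() (string, error) {
	data, err := os.ReadFile(nodenameFile)
	if err != nil {
		return "", fmt.Errorf("%w: check that the calico/node container is running and has mounted /var/lib/calico/", err)
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	name, err := readNodename()
	if err != nil {
		fmt.Fprintln(os.Stderr, "CNI add would fail:", err)
		os.Exit(1)
	}
	fmt.Println("node name:", name)
}

The retries below keep failing with this error until the calico/node image finishes pulling and the container starts at 00:09:51.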
May 17 00:09:44.530113 containerd[1478]: time="2025-05-17T00:09:44.530067114Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-g9lf2,Uid:bf7f5509-6eae-41c9-a82c-194eb8fdf825,Namespace:calico-system,Attempt:0,}" May 17 00:09:44.584870 containerd[1478]: time="2025-05-17T00:09:44.584813654Z" level=error msg="Failed to destroy network for sandbox \"405e8a3eaca632c9d44234624a891830a4e09e8181ceded076e4a57a372e98f9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:09:44.585251 containerd[1478]: time="2025-05-17T00:09:44.585217141Z" level=error msg="encountered an error cleaning up failed sandbox \"405e8a3eaca632c9d44234624a891830a4e09e8181ceded076e4a57a372e98f9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:09:44.585316 containerd[1478]: time="2025-05-17T00:09:44.585286022Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-g9lf2,Uid:bf7f5509-6eae-41c9-a82c-194eb8fdf825,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"405e8a3eaca632c9d44234624a891830a4e09e8181ceded076e4a57a372e98f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:09:44.585667 kubelet[2663]: E0517 00:09:44.585556 2663 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"405e8a3eaca632c9d44234624a891830a4e09e8181ceded076e4a57a372e98f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:09:44.585667 kubelet[2663]: E0517 00:09:44.585621 2663 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"405e8a3eaca632c9d44234624a891830a4e09e8181ceded076e4a57a372e98f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-g9lf2" May 17 00:09:44.585667 kubelet[2663]: E0517 00:09:44.585639 2663 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"405e8a3eaca632c9d44234624a891830a4e09e8181ceded076e4a57a372e98f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-g9lf2" May 17 00:09:44.586519 kubelet[2663]: E0517 00:09:44.586079 2663 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-g9lf2_calico-system(bf7f5509-6eae-41c9-a82c-194eb8fdf825)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-g9lf2_calico-system(bf7f5509-6eae-41c9-a82c-194eb8fdf825)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"405e8a3eaca632c9d44234624a891830a4e09e8181ceded076e4a57a372e98f9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-g9lf2" podUID="bf7f5509-6eae-41c9-a82c-194eb8fdf825" May 17 00:09:44.649231 kubelet[2663]: I0517 00:09:44.649138 2663 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6d99f934cb209fcc4f03e983358bb84f11a808a72b609ad33f4809382dbcf3be" May 17 00:09:44.651702 containerd[1478]: time="2025-05-17T00:09:44.651427589Z" level=info msg="StopPodSandbox for \"6d99f934cb209fcc4f03e983358bb84f11a808a72b609ad33f4809382dbcf3be\"" May 17 00:09:44.653462 kubelet[2663]: I0517 00:09:44.652490 2663 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="405e8a3eaca632c9d44234624a891830a4e09e8181ceded076e4a57a372e98f9" May 17 00:09:44.653580 containerd[1478]: time="2025-05-17T00:09:44.652593849Z" level=info msg="Ensure that sandbox 6d99f934cb209fcc4f03e983358bb84f11a808a72b609ad33f4809382dbcf3be in task-service has been cleanup successfully" May 17 00:09:44.658640 containerd[1478]: time="2025-05-17T00:09:44.658389064Z" level=info msg="StopPodSandbox for \"405e8a3eaca632c9d44234624a891830a4e09e8181ceded076e4a57a372e98f9\"" May 17 00:09:44.659882 containerd[1478]: time="2025-05-17T00:09:44.659822247Z" level=info msg="Ensure that sandbox 405e8a3eaca632c9d44234624a891830a4e09e8181ceded076e4a57a372e98f9 in task-service has been cleanup successfully" May 17 00:09:44.660515 kubelet[2663]: I0517 00:09:44.660488 2663 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b76f8d3124745aa4c30e56df04e5023358d57e8d0bd995d470885425a0a5119f" May 17 00:09:44.661958 containerd[1478]: time="2025-05-17T00:09:44.661385553Z" level=info msg="StopPodSandbox for \"b76f8d3124745aa4c30e56df04e5023358d57e8d0bd995d470885425a0a5119f\"" May 17 00:09:44.661958 containerd[1478]: time="2025-05-17T00:09:44.661553356Z" level=info msg="Ensure that sandbox b76f8d3124745aa4c30e56df04e5023358d57e8d0bd995d470885425a0a5119f in task-service has been cleanup successfully" May 17 00:09:44.671391 kubelet[2663]: I0517 00:09:44.671224 2663 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="acccd57f2ac1bc78c3018663394757376b3ac7754d0a8c5a1d496d687f3e7633" May 17 00:09:44.673571 containerd[1478]: time="2025-05-17T00:09:44.673433631Z" level=info msg="StopPodSandbox for \"acccd57f2ac1bc78c3018663394757376b3ac7754d0a8c5a1d496d687f3e7633\"" May 17 00:09:44.674617 containerd[1478]: time="2025-05-17T00:09:44.674574170Z" level=info msg="Ensure that sandbox acccd57f2ac1bc78c3018663394757376b3ac7754d0a8c5a1d496d687f3e7633 in task-service has been cleanup successfully" May 17 00:09:44.682017 containerd[1478]: time="2025-05-17T00:09:44.681978652Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.0\"" May 17 00:09:44.697529 kubelet[2663]: I0517 00:09:44.696688 2663 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="18c13d8711afd4b40b972bb45bf741a1e264938dc84ce04796c814c09bc4320a" May 17 00:09:44.697961 containerd[1478]: time="2025-05-17T00:09:44.697910234Z" level=info msg="StopPodSandbox for \"18c13d8711afd4b40b972bb45bf741a1e264938dc84ce04796c814c09bc4320a\"" May 17 00:09:44.698173 containerd[1478]: time="2025-05-17T00:09:44.698149318Z" level=info msg="Ensure that sandbox 
18c13d8711afd4b40b972bb45bf741a1e264938dc84ce04796c814c09bc4320a in task-service has been cleanup successfully" May 17 00:09:44.712107 kubelet[2663]: I0517 00:09:44.712074 2663 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="64b5464b82b71cf5c7889a3d48c6d134a2961189460abaf414c6c3ef08da76d4" May 17 00:09:44.712830 containerd[1478]: time="2025-05-17T00:09:44.712638396Z" level=info msg="StopPodSandbox for \"64b5464b82b71cf5c7889a3d48c6d134a2961189460abaf414c6c3ef08da76d4\"" May 17 00:09:44.714075 containerd[1478]: time="2025-05-17T00:09:44.713992778Z" level=info msg="Ensure that sandbox 64b5464b82b71cf5c7889a3d48c6d134a2961189460abaf414c6c3ef08da76d4 in task-service has been cleanup successfully" May 17 00:09:44.731266 kubelet[2663]: I0517 00:09:44.731157 2663 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="80dcc9479798289492b29345865fd69c904dec7b796e1b7f06108c5e7b5a8227" May 17 00:09:44.736148 containerd[1478]: time="2025-05-17T00:09:44.736036061Z" level=info msg="StopPodSandbox for \"80dcc9479798289492b29345865fd69c904dec7b796e1b7f06108c5e7b5a8227\"" May 17 00:09:44.736281 containerd[1478]: time="2025-05-17T00:09:44.736206783Z" level=info msg="Ensure that sandbox 80dcc9479798289492b29345865fd69c904dec7b796e1b7f06108c5e7b5a8227 in task-service has been cleanup successfully" May 17 00:09:44.748662 kubelet[2663]: I0517 00:09:44.747879 2663 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="48967515e7ac1ae937a4c1d9fb98f9118c2ae302024bc4824d15d9e91f8bfa36" May 17 00:09:44.749184 containerd[1478]: time="2025-05-17T00:09:44.749157156Z" level=info msg="StopPodSandbox for \"48967515e7ac1ae937a4c1d9fb98f9118c2ae302024bc4824d15d9e91f8bfa36\"" May 17 00:09:44.750998 containerd[1478]: time="2025-05-17T00:09:44.750713862Z" level=info msg="Ensure that sandbox 48967515e7ac1ae937a4c1d9fb98f9118c2ae302024bc4824d15d9e91f8bfa36 in task-service has been cleanup successfully" May 17 00:09:44.771216 containerd[1478]: time="2025-05-17T00:09:44.771056156Z" level=error msg="StopPodSandbox for \"acccd57f2ac1bc78c3018663394757376b3ac7754d0a8c5a1d496d687f3e7633\" failed" error="failed to destroy network for sandbox \"acccd57f2ac1bc78c3018663394757376b3ac7754d0a8c5a1d496d687f3e7633\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:09:44.772409 kubelet[2663]: E0517 00:09:44.772294 2663 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"acccd57f2ac1bc78c3018663394757376b3ac7754d0a8c5a1d496d687f3e7633\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="acccd57f2ac1bc78c3018663394757376b3ac7754d0a8c5a1d496d687f3e7633" May 17 00:09:44.772944 kubelet[2663]: E0517 00:09:44.772491 2663 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"acccd57f2ac1bc78c3018663394757376b3ac7754d0a8c5a1d496d687f3e7633"} May 17 00:09:44.772944 kubelet[2663]: E0517 00:09:44.772551 2663 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b9015c7c-53fd-4356-9465-8e7b4b00eae6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"acccd57f2ac1bc78c3018663394757376b3ac7754d0a8c5a1d496d687f3e7633\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:09:44.772944 kubelet[2663]: E0517 00:09:44.772576 2663 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b9015c7c-53fd-4356-9465-8e7b4b00eae6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"acccd57f2ac1bc78c3018663394757376b3ac7754d0a8c5a1d496d687f3e7633\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-5rmrz" podUID="b9015c7c-53fd-4356-9465-8e7b4b00eae6" May 17 00:09:44.788808 containerd[1478]: time="2025-05-17T00:09:44.788586045Z" level=error msg="StopPodSandbox for \"405e8a3eaca632c9d44234624a891830a4e09e8181ceded076e4a57a372e98f9\" failed" error="failed to destroy network for sandbox \"405e8a3eaca632c9d44234624a891830a4e09e8181ceded076e4a57a372e98f9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:09:44.789149 kubelet[2663]: E0517 00:09:44.789113 2663 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"405e8a3eaca632c9d44234624a891830a4e09e8181ceded076e4a57a372e98f9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="405e8a3eaca632c9d44234624a891830a4e09e8181ceded076e4a57a372e98f9" May 17 00:09:44.789801 kubelet[2663]: E0517 00:09:44.789551 2663 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"405e8a3eaca632c9d44234624a891830a4e09e8181ceded076e4a57a372e98f9"} May 17 00:09:44.789801 kubelet[2663]: E0517 00:09:44.789593 2663 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"bf7f5509-6eae-41c9-a82c-194eb8fdf825\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"405e8a3eaca632c9d44234624a891830a4e09e8181ceded076e4a57a372e98f9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:09:44.789801 kubelet[2663]: E0517 00:09:44.789614 2663 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"bf7f5509-6eae-41c9-a82c-194eb8fdf825\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"405e8a3eaca632c9d44234624a891830a4e09e8181ceded076e4a57a372e98f9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-g9lf2" podUID="bf7f5509-6eae-41c9-a82c-194eb8fdf825" May 17 00:09:44.795914 containerd[1478]: time="2025-05-17T00:09:44.795796323Z" level=error msg="StopPodSandbox for \"6d99f934cb209fcc4f03e983358bb84f11a808a72b609ad33f4809382dbcf3be\" failed" error="failed 
to destroy network for sandbox \"6d99f934cb209fcc4f03e983358bb84f11a808a72b609ad33f4809382dbcf3be\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:09:44.797362 kubelet[2663]: E0517 00:09:44.797166 2663 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6d99f934cb209fcc4f03e983358bb84f11a808a72b609ad33f4809382dbcf3be\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6d99f934cb209fcc4f03e983358bb84f11a808a72b609ad33f4809382dbcf3be" May 17 00:09:44.797362 kubelet[2663]: E0517 00:09:44.797238 2663 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6d99f934cb209fcc4f03e983358bb84f11a808a72b609ad33f4809382dbcf3be"} May 17 00:09:44.797362 kubelet[2663]: E0517 00:09:44.797272 2663 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"cc5a0d8a-21f4-4d9d-bd13-f95a58141562\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6d99f934cb209fcc4f03e983358bb84f11a808a72b609ad33f4809382dbcf3be\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:09:44.797362 kubelet[2663]: E0517 00:09:44.797329 2663 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"cc5a0d8a-21f4-4d9d-bd13-f95a58141562\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6d99f934cb209fcc4f03e983358bb84f11a808a72b609ad33f4809382dbcf3be\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-654599565b-85g84" podUID="cc5a0d8a-21f4-4d9d-bd13-f95a58141562" May 17 00:09:44.801482 containerd[1478]: time="2025-05-17T00:09:44.801415456Z" level=error msg="StopPodSandbox for \"b76f8d3124745aa4c30e56df04e5023358d57e8d0bd995d470885425a0a5119f\" failed" error="failed to destroy network for sandbox \"b76f8d3124745aa4c30e56df04e5023358d57e8d0bd995d470885425a0a5119f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:09:44.801899 kubelet[2663]: E0517 00:09:44.801658 2663 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b76f8d3124745aa4c30e56df04e5023358d57e8d0bd995d470885425a0a5119f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b76f8d3124745aa4c30e56df04e5023358d57e8d0bd995d470885425a0a5119f" May 17 00:09:44.801899 kubelet[2663]: E0517 00:09:44.801722 2663 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b76f8d3124745aa4c30e56df04e5023358d57e8d0bd995d470885425a0a5119f"} May 17 00:09:44.801899 kubelet[2663]: E0517 00:09:44.801753 
2663 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9be07d70-67ab-4460-b534-d03025d5bc29\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b76f8d3124745aa4c30e56df04e5023358d57e8d0bd995d470885425a0a5119f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:09:44.801899 kubelet[2663]: E0517 00:09:44.801774 2663 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9be07d70-67ab-4460-b534-d03025d5bc29\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b76f8d3124745aa4c30e56df04e5023358d57e8d0bd995d470885425a0a5119f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-b858dbd84-d62lt" podUID="9be07d70-67ab-4460-b534-d03025d5bc29" May 17 00:09:44.814192 containerd[1478]: time="2025-05-17T00:09:44.814140425Z" level=error msg="StopPodSandbox for \"18c13d8711afd4b40b972bb45bf741a1e264938dc84ce04796c814c09bc4320a\" failed" error="failed to destroy network for sandbox \"18c13d8711afd4b40b972bb45bf741a1e264938dc84ce04796c814c09bc4320a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:09:44.815413 kubelet[2663]: E0517 00:09:44.814369 2663 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"18c13d8711afd4b40b972bb45bf741a1e264938dc84ce04796c814c09bc4320a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="18c13d8711afd4b40b972bb45bf741a1e264938dc84ce04796c814c09bc4320a" May 17 00:09:44.815413 kubelet[2663]: E0517 00:09:44.814561 2663 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"18c13d8711afd4b40b972bb45bf741a1e264938dc84ce04796c814c09bc4320a"} May 17 00:09:44.815413 kubelet[2663]: E0517 00:09:44.814593 2663 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5021f4d9-7c3c-411f-be83-7378c09cd50b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"18c13d8711afd4b40b972bb45bf741a1e264938dc84ce04796c814c09bc4320a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:09:44.815413 kubelet[2663]: E0517 00:09:44.814613 2663 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5021f4d9-7c3c-411f-be83-7378c09cd50b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"18c13d8711afd4b40b972bb45bf741a1e264938dc84ce04796c814c09bc4320a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-654599565b-lpjvt" podUID="5021f4d9-7c3c-411f-be83-7378c09cd50b" May 17 00:09:44.828871 containerd[1478]: time="2025-05-17T00:09:44.828818066Z" level=error msg="StopPodSandbox for \"48967515e7ac1ae937a4c1d9fb98f9118c2ae302024bc4824d15d9e91f8bfa36\" failed" error="failed to destroy network for sandbox \"48967515e7ac1ae937a4c1d9fb98f9118c2ae302024bc4824d15d9e91f8bfa36\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:09:44.829335 kubelet[2663]: E0517 00:09:44.829189 2663 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"48967515e7ac1ae937a4c1d9fb98f9118c2ae302024bc4824d15d9e91f8bfa36\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="48967515e7ac1ae937a4c1d9fb98f9118c2ae302024bc4824d15d9e91f8bfa36" May 17 00:09:44.829335 kubelet[2663]: E0517 00:09:44.829241 2663 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"48967515e7ac1ae937a4c1d9fb98f9118c2ae302024bc4824d15d9e91f8bfa36"} May 17 00:09:44.829335 kubelet[2663]: E0517 00:09:44.829279 2663 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"662625c5-921c-406a-9ef1-d2e70e33e339\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"48967515e7ac1ae937a4c1d9fb98f9118c2ae302024bc4824d15d9e91f8bfa36\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:09:44.829335 kubelet[2663]: E0517 00:09:44.829300 2663 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"662625c5-921c-406a-9ef1-d2e70e33e339\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"48967515e7ac1ae937a4c1d9fb98f9118c2ae302024bc4824d15d9e91f8bfa36\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-78d55f7ddc-gp74j" podUID="662625c5-921c-406a-9ef1-d2e70e33e339" May 17 00:09:44.829985 containerd[1478]: time="2025-05-17T00:09:44.829881724Z" level=error msg="StopPodSandbox for \"64b5464b82b71cf5c7889a3d48c6d134a2961189460abaf414c6c3ef08da76d4\" failed" error="failed to destroy network for sandbox \"64b5464b82b71cf5c7889a3d48c6d134a2961189460abaf414c6c3ef08da76d4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:09:44.830246 kubelet[2663]: E0517 00:09:44.830134 2663 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"64b5464b82b71cf5c7889a3d48c6d134a2961189460abaf414c6c3ef08da76d4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="64b5464b82b71cf5c7889a3d48c6d134a2961189460abaf414c6c3ef08da76d4" May 17 00:09:44.830246 kubelet[2663]: E0517 00:09:44.830175 2663 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"64b5464b82b71cf5c7889a3d48c6d134a2961189460abaf414c6c3ef08da76d4"} May 17 00:09:44.830246 kubelet[2663]: E0517 00:09:44.830201 2663 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6123d484-fe1b-4bcd-a0b1-5639f75795e9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"64b5464b82b71cf5c7889a3d48c6d134a2961189460abaf414c6c3ef08da76d4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:09:44.830246 kubelet[2663]: E0517 00:09:44.830219 2663 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6123d484-fe1b-4bcd-a0b1-5639f75795e9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"64b5464b82b71cf5c7889a3d48c6d134a2961189460abaf414c6c3ef08da76d4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-2pp2r" podUID="6123d484-fe1b-4bcd-a0b1-5639f75795e9" May 17 00:09:44.830997 containerd[1478]: time="2025-05-17T00:09:44.830862380Z" level=error msg="StopPodSandbox for \"80dcc9479798289492b29345865fd69c904dec7b796e1b7f06108c5e7b5a8227\" failed" error="failed to destroy network for sandbox \"80dcc9479798289492b29345865fd69c904dec7b796e1b7f06108c5e7b5a8227\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:09:44.831100 kubelet[2663]: E0517 00:09:44.831047 2663 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"80dcc9479798289492b29345865fd69c904dec7b796e1b7f06108c5e7b5a8227\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="80dcc9479798289492b29345865fd69c904dec7b796e1b7f06108c5e7b5a8227" May 17 00:09:44.831166 kubelet[2663]: E0517 00:09:44.831107 2663 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"80dcc9479798289492b29345865fd69c904dec7b796e1b7f06108c5e7b5a8227"} May 17 00:09:44.831166 kubelet[2663]: E0517 00:09:44.831131 2663 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e26e8c4a-c5e6-41d8-924e-19ce27b5f4cd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"80dcc9479798289492b29345865fd69c904dec7b796e1b7f06108c5e7b5a8227\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:09:44.831239 kubelet[2663]: E0517 00:09:44.831210 2663 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e26e8c4a-c5e6-41d8-924e-19ce27b5f4cd\" with KillPodSandboxError: \"rpc error: code 
= Unknown desc = failed to destroy network for sandbox \\\"80dcc9479798289492b29345865fd69c904dec7b796e1b7f06108c5e7b5a8227\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-645f967f78-7knkm" podUID="e26e8c4a-c5e6-41d8-924e-19ce27b5f4cd" May 17 00:09:44.922038 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-80dcc9479798289492b29345865fd69c904dec7b796e1b7f06108c5e7b5a8227-shm.mount: Deactivated successfully. May 17 00:09:44.922133 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-acccd57f2ac1bc78c3018663394757376b3ac7754d0a8c5a1d496d687f3e7633-shm.mount: Deactivated successfully. May 17 00:09:51.222925 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount106287649.mount: Deactivated successfully. May 17 00:09:51.251612 containerd[1478]: time="2025-05-17T00:09:51.251561507Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:09:51.253515 containerd[1478]: time="2025-05-17T00:09:51.253475139Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.0: active requests=0, bytes read=150465379" May 17 00:09:51.254469 containerd[1478]: time="2025-05-17T00:09:51.254383035Z" level=info msg="ImageCreate event name:\"sha256:f7148fde8e28b27da58f84cac134cdc53b5df321cda13c660192f06839670732\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:09:51.269940 containerd[1478]: time="2025-05-17T00:09:51.269686737Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:7cb61ea47ca0a8e6d0526a42da4f1e399b37ccd13339d0776d272465cb7ee012\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:09:51.270667 containerd[1478]: time="2025-05-17T00:09:51.270260987Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.0\" with image id \"sha256:f7148fde8e28b27da58f84cac134cdc53b5df321cda13c660192f06839670732\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:7cb61ea47ca0a8e6d0526a42da4f1e399b37ccd13339d0776d272465cb7ee012\", size \"150465241\" in 6.588086931s" May 17 00:09:51.270667 containerd[1478]: time="2025-05-17T00:09:51.270296507Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.0\" returns image reference \"sha256:f7148fde8e28b27da58f84cac134cdc53b5df321cda13c660192f06839670732\"" May 17 00:09:51.299901 containerd[1478]: time="2025-05-17T00:09:51.299830453Z" level=info msg="CreateContainer within sandbox \"fc06f5e5ecf04bc3cf9988ddda034841d7630ba2df6f32e74ad1b676dfbe1305\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 17 00:09:51.321843 containerd[1478]: time="2025-05-17T00:09:51.321653466Z" level=info msg="CreateContainer within sandbox \"fc06f5e5ecf04bc3cf9988ddda034841d7630ba2df6f32e74ad1b676dfbe1305\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"9fba8f4a00bb9ea64c1ca510402ea2c1b7269e2b5fe5773b4330d5df902fc26b\"" May 17 00:09:51.322700 containerd[1478]: time="2025-05-17T00:09:51.322573842Z" level=info msg="StartContainer for \"9fba8f4a00bb9ea64c1ca510402ea2c1b7269e2b5fe5773b4330d5df902fc26b\"" May 17 00:09:51.354653 systemd[1]: Started cri-containerd-9fba8f4a00bb9ea64c1ca510402ea2c1b7269e2b5fe5773b4330d5df902fc26b.scope - libcontainer container 
9fba8f4a00bb9ea64c1ca510402ea2c1b7269e2b5fe5773b4330d5df902fc26b. May 17 00:09:51.406697 containerd[1478]: time="2025-05-17T00:09:51.406597440Z" level=info msg="StartContainer for \"9fba8f4a00bb9ea64c1ca510402ea2c1b7269e2b5fe5773b4330d5df902fc26b\" returns successfully" May 17 00:09:51.557878 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 17 00:09:51.558881 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. May 17 00:09:51.701050 containerd[1478]: time="2025-05-17T00:09:51.701004479Z" level=info msg="StopPodSandbox for \"b76f8d3124745aa4c30e56df04e5023358d57e8d0bd995d470885425a0a5119f\"" May 17 00:09:51.814985 kubelet[2663]: I0517 00:09:51.814843 2663 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-qvzs6" podStartSLOduration=1.273072139 podStartE2EDuration="15.814824587s" podCreationTimestamp="2025-05-17 00:09:36 +0000 UTC" firstStartedPulling="2025-05-17 00:09:36.731468469 +0000 UTC m=+21.321311122" lastFinishedPulling="2025-05-17 00:09:51.273220917 +0000 UTC m=+35.863063570" observedRunningTime="2025-05-17 00:09:51.798127701 +0000 UTC m=+36.387970354" watchObservedRunningTime="2025-05-17 00:09:51.814824587 +0000 UTC m=+36.404667240" May 17 00:09:51.890944 containerd[1478]: 2025-05-17 00:09:51.817 [INFO][3823] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b76f8d3124745aa4c30e56df04e5023358d57e8d0bd995d470885425a0a5119f" May 17 00:09:51.890944 containerd[1478]: 2025-05-17 00:09:51.817 [INFO][3823] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b76f8d3124745aa4c30e56df04e5023358d57e8d0bd995d470885425a0a5119f" iface="eth0" netns="/var/run/netns/cni-bb45e8b1-5359-75b8-fd51-b2df68784253" May 17 00:09:51.890944 containerd[1478]: 2025-05-17 00:09:51.817 [INFO][3823] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b76f8d3124745aa4c30e56df04e5023358d57e8d0bd995d470885425a0a5119f" iface="eth0" netns="/var/run/netns/cni-bb45e8b1-5359-75b8-fd51-b2df68784253" May 17 00:09:51.890944 containerd[1478]: 2025-05-17 00:09:51.817 [INFO][3823] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b76f8d3124745aa4c30e56df04e5023358d57e8d0bd995d470885425a0a5119f" iface="eth0" netns="/var/run/netns/cni-bb45e8b1-5359-75b8-fd51-b2df68784253" May 17 00:09:51.890944 containerd[1478]: 2025-05-17 00:09:51.817 [INFO][3823] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b76f8d3124745aa4c30e56df04e5023358d57e8d0bd995d470885425a0a5119f" May 17 00:09:51.890944 containerd[1478]: 2025-05-17 00:09:51.817 [INFO][3823] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b76f8d3124745aa4c30e56df04e5023358d57e8d0bd995d470885425a0a5119f" May 17 00:09:51.890944 containerd[1478]: 2025-05-17 00:09:51.864 [INFO][3837] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b76f8d3124745aa4c30e56df04e5023358d57e8d0bd995d470885425a0a5119f" HandleID="k8s-pod-network.b76f8d3124745aa4c30e56df04e5023358d57e8d0bd995d470885425a0a5119f" Workload="ci--4081--3--3--n--3b0dbcbd78-k8s-whisker--b858dbd84--d62lt-eth0" May 17 00:09:51.890944 containerd[1478]: 2025-05-17 00:09:51.864 [INFO][3837] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:09:51.890944 containerd[1478]: 2025-05-17 00:09:51.864 [INFO][3837] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
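The pod_startup_latency_tracker entry above is self-consistent: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling). A quick Go check over the logged timestamps (an editorial sketch of the arithmetic, not kubelet code) reproduces both values:

package main

import (
	"fmt"
	"time"
)

// mustParse handles the log's "2025-05-17 00:09:36 +0000 UTC" form;
// the fractional seconds in the layout are optional on input.
func mustParse(s string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2025-05-17 00:09:36 +0000 UTC")
	firstPull := mustParse("2025-05-17 00:09:36.731468469 +0000 UTC")
	lastPull := mustParse("2025-05-17 00:09:51.273220917 +0000 UTC")
	observed := mustParse("2025-05-17 00:09:51.814824587 +0000 UTC")

	e2e := observed.Sub(created)         // 15.814824587s, as logged
	slo := e2e - lastPull.Sub(firstPull) // 1.273072139s, as logged
	fmt.Println("podStartE2EDuration:", e2e)
	fmt.Println("podStartSLOduration:", slo)
}

The same subtraction explains the earlier PullImage timing: from the request at roughly 00:09:44.68 to the Pulled event at 00:09:51.27 is the reported 6.588086931s.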
May 17 00:09:51.890944 containerd[1478]: 2025-05-17 00:09:51.881 [WARNING][3837] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b76f8d3124745aa4c30e56df04e5023358d57e8d0bd995d470885425a0a5119f" HandleID="k8s-pod-network.b76f8d3124745aa4c30e56df04e5023358d57e8d0bd995d470885425a0a5119f" Workload="ci--4081--3--3--n--3b0dbcbd78-k8s-whisker--b858dbd84--d62lt-eth0" May 17 00:09:51.890944 containerd[1478]: 2025-05-17 00:09:51.881 [INFO][3837] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b76f8d3124745aa4c30e56df04e5023358d57e8d0bd995d470885425a0a5119f" HandleID="k8s-pod-network.b76f8d3124745aa4c30e56df04e5023358d57e8d0bd995d470885425a0a5119f" Workload="ci--4081--3--3--n--3b0dbcbd78-k8s-whisker--b858dbd84--d62lt-eth0" May 17 00:09:51.890944 containerd[1478]: 2025-05-17 00:09:51.885 [INFO][3837] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:09:51.890944 containerd[1478]: 2025-05-17 00:09:51.888 [INFO][3823] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b76f8d3124745aa4c30e56df04e5023358d57e8d0bd995d470885425a0a5119f" May 17 00:09:51.892414 containerd[1478]: time="2025-05-17T00:09:51.891086412Z" level=info msg="TearDown network for sandbox \"b76f8d3124745aa4c30e56df04e5023358d57e8d0bd995d470885425a0a5119f\" successfully" May 17 00:09:51.892414 containerd[1478]: time="2025-05-17T00:09:51.891113653Z" level=info msg="StopPodSandbox for \"b76f8d3124745aa4c30e56df04e5023358d57e8d0bd995d470885425a0a5119f\" returns successfully" May 17 00:09:52.004093 kubelet[2663]: I0517 00:09:52.003095 2663 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2rjhb\" (UniqueName: \"kubernetes.io/projected/9be07d70-67ab-4460-b534-d03025d5bc29-kube-api-access-2rjhb\") pod \"9be07d70-67ab-4460-b534-d03025d5bc29\" (UID: \"9be07d70-67ab-4460-b534-d03025d5bc29\") " May 17 00:09:52.004093 kubelet[2663]: I0517 00:09:52.003175 2663 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9be07d70-67ab-4460-b534-d03025d5bc29-whisker-backend-key-pair\") pod \"9be07d70-67ab-4460-b534-d03025d5bc29\" (UID: \"9be07d70-67ab-4460-b534-d03025d5bc29\") " May 17 00:09:52.004093 kubelet[2663]: I0517 00:09:52.003202 2663 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9be07d70-67ab-4460-b534-d03025d5bc29-whisker-ca-bundle\") pod \"9be07d70-67ab-4460-b534-d03025d5bc29\" (UID: \"9be07d70-67ab-4460-b534-d03025d5bc29\") " May 17 00:09:52.005323 kubelet[2663]: I0517 00:09:52.005262 2663 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9be07d70-67ab-4460-b534-d03025d5bc29-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "9be07d70-67ab-4460-b534-d03025d5bc29" (UID: "9be07d70-67ab-4460-b534-d03025d5bc29"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 17 00:09:52.008137 kubelet[2663]: I0517 00:09:52.008103 2663 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9be07d70-67ab-4460-b534-d03025d5bc29-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "9be07d70-67ab-4460-b534-d03025d5bc29" (UID: "9be07d70-67ab-4460-b534-d03025d5bc29"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" May 17 00:09:52.010073 kubelet[2663]: I0517 00:09:52.010029 2663 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9be07d70-67ab-4460-b534-d03025d5bc29-kube-api-access-2rjhb" (OuterVolumeSpecName: "kube-api-access-2rjhb") pod "9be07d70-67ab-4460-b534-d03025d5bc29" (UID: "9be07d70-67ab-4460-b534-d03025d5bc29"). InnerVolumeSpecName "kube-api-access-2rjhb". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 17 00:09:52.104633 kubelet[2663]: I0517 00:09:52.103592 2663 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9be07d70-67ab-4460-b534-d03025d5bc29-whisker-backend-key-pair\") on node \"ci-4081-3-3-n-3b0dbcbd78\" DevicePath \"\"" May 17 00:09:52.104633 kubelet[2663]: I0517 00:09:52.103636 2663 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9be07d70-67ab-4460-b534-d03025d5bc29-whisker-ca-bundle\") on node \"ci-4081-3-3-n-3b0dbcbd78\" DevicePath \"\"" May 17 00:09:52.104633 kubelet[2663]: I0517 00:09:52.103658 2663 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2rjhb\" (UniqueName: \"kubernetes.io/projected/9be07d70-67ab-4460-b534-d03025d5bc29-kube-api-access-2rjhb\") on node \"ci-4081-3-3-n-3b0dbcbd78\" DevicePath \"\"" May 17 00:09:52.224301 systemd[1]: run-netns-cni\x2dbb45e8b1\x2d5359\x2d75b8\x2dfd51\x2db2df68784253.mount: Deactivated successfully. May 17 00:09:52.224559 systemd[1]: var-lib-kubelet-pods-9be07d70\x2d67ab\x2d4460\x2db534\x2dd03025d5bc29-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2rjhb.mount: Deactivated successfully. May 17 00:09:52.224680 systemd[1]: var-lib-kubelet-pods-9be07d70\x2d67ab\x2d4460\x2db534\x2dd03025d5bc29-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. May 17 00:09:52.770793 kubelet[2663]: I0517 00:09:52.770375 2663 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 00:09:52.776985 systemd[1]: Removed slice kubepods-besteffort-pod9be07d70_67ab_4460_b534_d03025d5bc29.slice - libcontainer container kubepods-besteffort-pod9be07d70_67ab_4460_b534_d03025d5bc29.slice. May 17 00:09:52.855420 systemd[1]: Created slice kubepods-besteffort-pod2bf795f3_2e82_40ad_8128_ea6e3a8aa689.slice - libcontainer container kubepods-besteffort-pod2bf795f3_2e82_40ad_8128_ea6e3a8aa689.slice. 
May 17 00:09:52.908893 kubelet[2663]: I0517 00:09:52.908708 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/2bf795f3-2e82-40ad-8128-ea6e3a8aa689-whisker-backend-key-pair\") pod \"whisker-57fb66bd94-5kh2z\" (UID: \"2bf795f3-2e82-40ad-8128-ea6e3a8aa689\") " pod="calico-system/whisker-57fb66bd94-5kh2z" May 17 00:09:52.908893 kubelet[2663]: I0517 00:09:52.908790 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2bf795f3-2e82-40ad-8128-ea6e3a8aa689-whisker-ca-bundle\") pod \"whisker-57fb66bd94-5kh2z\" (UID: \"2bf795f3-2e82-40ad-8128-ea6e3a8aa689\") " pod="calico-system/whisker-57fb66bd94-5kh2z" May 17 00:09:52.909354 kubelet[2663]: I0517 00:09:52.908910 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5z7t\" (UniqueName: \"kubernetes.io/projected/2bf795f3-2e82-40ad-8128-ea6e3a8aa689-kube-api-access-n5z7t\") pod \"whisker-57fb66bd94-5kh2z\" (UID: \"2bf795f3-2e82-40ad-8128-ea6e3a8aa689\") " pod="calico-system/whisker-57fb66bd94-5kh2z" May 17 00:09:53.161860 containerd[1478]: time="2025-05-17T00:09:53.161709710Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-57fb66bd94-5kh2z,Uid:2bf795f3-2e82-40ad-8128-ea6e3a8aa689,Namespace:calico-system,Attempt:0,}" May 17 00:09:53.411781 systemd-networkd[1361]: cali534de585569: Link UP May 17 00:09:53.417518 systemd-networkd[1361]: cali534de585569: Gained carrier May 17 00:09:53.451643 containerd[1478]: 2025-05-17 00:09:53.223 [INFO][3945] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 17 00:09:53.451643 containerd[1478]: 2025-05-17 00:09:53.259 [INFO][3945] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--n--3b0dbcbd78-k8s-whisker--57fb66bd94--5kh2z-eth0 whisker-57fb66bd94- calico-system 2bf795f3-2e82-40ad-8128-ea6e3a8aa689 889 0 2025-05-17 00:09:52 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:57fb66bd94 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081-3-3-n-3b0dbcbd78 whisker-57fb66bd94-5kh2z eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali534de585569 [] [] }} ContainerID="afdc02776c5c96583111c3da0298e4c9aeb01414d11ec79149f1a9ac00bc6cf9" Namespace="calico-system" Pod="whisker-57fb66bd94-5kh2z" WorkloadEndpoint="ci--4081--3--3--n--3b0dbcbd78-k8s-whisker--57fb66bd94--5kh2z-" May 17 00:09:53.451643 containerd[1478]: 2025-05-17 00:09:53.260 [INFO][3945] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="afdc02776c5c96583111c3da0298e4c9aeb01414d11ec79149f1a9ac00bc6cf9" Namespace="calico-system" Pod="whisker-57fb66bd94-5kh2z" WorkloadEndpoint="ci--4081--3--3--n--3b0dbcbd78-k8s-whisker--57fb66bd94--5kh2z-eth0" May 17 00:09:53.451643 containerd[1478]: 2025-05-17 00:09:53.321 [INFO][3961] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="afdc02776c5c96583111c3da0298e4c9aeb01414d11ec79149f1a9ac00bc6cf9" HandleID="k8s-pod-network.afdc02776c5c96583111c3da0298e4c9aeb01414d11ec79149f1a9ac00bc6cf9" Workload="ci--4081--3--3--n--3b0dbcbd78-k8s-whisker--57fb66bd94--5kh2z-eth0" May 17 00:09:53.451643 containerd[1478]: 2025-05-17 00:09:53.323 [INFO][3961] 
ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="afdc02776c5c96583111c3da0298e4c9aeb01414d11ec79149f1a9ac00bc6cf9" HandleID="k8s-pod-network.afdc02776c5c96583111c3da0298e4c9aeb01414d11ec79149f1a9ac00bc6cf9" Workload="ci--4081--3--3--n--3b0dbcbd78-k8s-whisker--57fb66bd94--5kh2z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002cf020), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-3-n-3b0dbcbd78", "pod":"whisker-57fb66bd94-5kh2z", "timestamp":"2025-05-17 00:09:53.321718315 +0000 UTC"}, Hostname:"ci-4081-3-3-n-3b0dbcbd78", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:09:53.451643 containerd[1478]: 2025-05-17 00:09:53.323 [INFO][3961] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:09:53.451643 containerd[1478]: 2025-05-17 00:09:53.324 [INFO][3961] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:09:53.451643 containerd[1478]: 2025-05-17 00:09:53.324 [INFO][3961] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-n-3b0dbcbd78' May 17 00:09:53.451643 containerd[1478]: 2025-05-17 00:09:53.341 [INFO][3961] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.afdc02776c5c96583111c3da0298e4c9aeb01414d11ec79149f1a9ac00bc6cf9" host="ci-4081-3-3-n-3b0dbcbd78" May 17 00:09:53.451643 containerd[1478]: 2025-05-17 00:09:53.352 [INFO][3961] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-3-n-3b0dbcbd78" May 17 00:09:53.451643 containerd[1478]: 2025-05-17 00:09:53.362 [INFO][3961] ipam/ipam.go 511: Trying affinity for 192.168.2.0/26 host="ci-4081-3-3-n-3b0dbcbd78" May 17 00:09:53.451643 containerd[1478]: 2025-05-17 00:09:53.366 [INFO][3961] ipam/ipam.go 158: Attempting to load block cidr=192.168.2.0/26 host="ci-4081-3-3-n-3b0dbcbd78" May 17 00:09:53.451643 containerd[1478]: 2025-05-17 00:09:53.369 [INFO][3961] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.2.0/26 host="ci-4081-3-3-n-3b0dbcbd78" May 17 00:09:53.451643 containerd[1478]: 2025-05-17 00:09:53.370 [INFO][3961] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.2.0/26 handle="k8s-pod-network.afdc02776c5c96583111c3da0298e4c9aeb01414d11ec79149f1a9ac00bc6cf9" host="ci-4081-3-3-n-3b0dbcbd78" May 17 00:09:53.451643 containerd[1478]: 2025-05-17 00:09:53.374 [INFO][3961] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.afdc02776c5c96583111c3da0298e4c9aeb01414d11ec79149f1a9ac00bc6cf9 May 17 00:09:53.451643 containerd[1478]: 2025-05-17 00:09:53.384 [INFO][3961] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.2.0/26 handle="k8s-pod-network.afdc02776c5c96583111c3da0298e4c9aeb01414d11ec79149f1a9ac00bc6cf9" host="ci-4081-3-3-n-3b0dbcbd78" May 17 00:09:53.451643 containerd[1478]: 2025-05-17 00:09:53.391 [INFO][3961] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.2.1/26] block=192.168.2.0/26 handle="k8s-pod-network.afdc02776c5c96583111c3da0298e4c9aeb01414d11ec79149f1a9ac00bc6cf9" host="ci-4081-3-3-n-3b0dbcbd78" May 17 00:09:53.451643 containerd[1478]: 2025-05-17 00:09:53.391 [INFO][3961] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.2.1/26] handle="k8s-pod-network.afdc02776c5c96583111c3da0298e4c9aeb01414d11ec79149f1a9ac00bc6cf9" host="ci-4081-3-3-n-3b0dbcbd78" May 17 00:09:53.451643 containerd[1478]: 2025-05-17 
00:09:53.392 [INFO][3961] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:09:53.451643 containerd[1478]: 2025-05-17 00:09:53.392 [INFO][3961] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.2.1/26] IPv6=[] ContainerID="afdc02776c5c96583111c3da0298e4c9aeb01414d11ec79149f1a9ac00bc6cf9" HandleID="k8s-pod-network.afdc02776c5c96583111c3da0298e4c9aeb01414d11ec79149f1a9ac00bc6cf9" Workload="ci--4081--3--3--n--3b0dbcbd78-k8s-whisker--57fb66bd94--5kh2z-eth0" May 17 00:09:53.452520 containerd[1478]: 2025-05-17 00:09:53.396 [INFO][3945] cni-plugin/k8s.go 418: Populated endpoint ContainerID="afdc02776c5c96583111c3da0298e4c9aeb01414d11ec79149f1a9ac00bc6cf9" Namespace="calico-system" Pod="whisker-57fb66bd94-5kh2z" WorkloadEndpoint="ci--4081--3--3--n--3b0dbcbd78-k8s-whisker--57fb66bd94--5kh2z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--3b0dbcbd78-k8s-whisker--57fb66bd94--5kh2z-eth0", GenerateName:"whisker-57fb66bd94-", Namespace:"calico-system", SelfLink:"", UID:"2bf795f3-2e82-40ad-8128-ea6e3a8aa689", ResourceVersion:"889", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 9, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"57fb66bd94", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-3b0dbcbd78", ContainerID:"", Pod:"whisker-57fb66bd94-5kh2z", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.2.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali534de585569", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:09:53.452520 containerd[1478]: 2025-05-17 00:09:53.396 [INFO][3945] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.2.1/32] ContainerID="afdc02776c5c96583111c3da0298e4c9aeb01414d11ec79149f1a9ac00bc6cf9" Namespace="calico-system" Pod="whisker-57fb66bd94-5kh2z" WorkloadEndpoint="ci--4081--3--3--n--3b0dbcbd78-k8s-whisker--57fb66bd94--5kh2z-eth0" May 17 00:09:53.452520 containerd[1478]: 2025-05-17 00:09:53.396 [INFO][3945] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali534de585569 ContainerID="afdc02776c5c96583111c3da0298e4c9aeb01414d11ec79149f1a9ac00bc6cf9" Namespace="calico-system" Pod="whisker-57fb66bd94-5kh2z" WorkloadEndpoint="ci--4081--3--3--n--3b0dbcbd78-k8s-whisker--57fb66bd94--5kh2z-eth0" May 17 00:09:53.452520 containerd[1478]: 2025-05-17 00:09:53.419 [INFO][3945] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="afdc02776c5c96583111c3da0298e4c9aeb01414d11ec79149f1a9ac00bc6cf9" Namespace="calico-system" Pod="whisker-57fb66bd94-5kh2z" WorkloadEndpoint="ci--4081--3--3--n--3b0dbcbd78-k8s-whisker--57fb66bd94--5kh2z-eth0" May 17 00:09:53.452520 containerd[1478]: 2025-05-17 00:09:53.424 [INFO][3945] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="afdc02776c5c96583111c3da0298e4c9aeb01414d11ec79149f1a9ac00bc6cf9" Namespace="calico-system" Pod="whisker-57fb66bd94-5kh2z" WorkloadEndpoint="ci--4081--3--3--n--3b0dbcbd78-k8s-whisker--57fb66bd94--5kh2z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--3b0dbcbd78-k8s-whisker--57fb66bd94--5kh2z-eth0", GenerateName:"whisker-57fb66bd94-", Namespace:"calico-system", SelfLink:"", UID:"2bf795f3-2e82-40ad-8128-ea6e3a8aa689", ResourceVersion:"889", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 9, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"57fb66bd94", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-3b0dbcbd78", ContainerID:"afdc02776c5c96583111c3da0298e4c9aeb01414d11ec79149f1a9ac00bc6cf9", Pod:"whisker-57fb66bd94-5kh2z", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.2.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali534de585569", MAC:"5a:cf:d9:34:fc:26", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:09:53.452520 containerd[1478]: 2025-05-17 00:09:53.444 [INFO][3945] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="afdc02776c5c96583111c3da0298e4c9aeb01414d11ec79149f1a9ac00bc6cf9" Namespace="calico-system" Pod="whisker-57fb66bd94-5kh2z" WorkloadEndpoint="ci--4081--3--3--n--3b0dbcbd78-k8s-whisker--57fb66bd94--5kh2z-eth0" May 17 00:09:53.496763 containerd[1478]: time="2025-05-17T00:09:53.492629188Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:09:53.496763 containerd[1478]: time="2025-05-17T00:09:53.493820209Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:09:53.496763 containerd[1478]: time="2025-05-17T00:09:53.493835409Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:09:53.496763 containerd[1478]: time="2025-05-17T00:09:53.493937371Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:09:53.531663 systemd[1]: Started cri-containerd-afdc02776c5c96583111c3da0298e4c9aeb01414d11ec79149f1a9ac00bc6cf9.scope - libcontainer container afdc02776c5c96583111c3da0298e4c9aeb01414d11ec79149f1a9ac00bc6cf9. 
May 17 00:09:53.555029 kubelet[2663]: I0517 00:09:53.554969 2663 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9be07d70-67ab-4460-b534-d03025d5bc29" path="/var/lib/kubelet/pods/9be07d70-67ab-4460-b534-d03025d5bc29/volumes" May 17 00:09:53.579464 kernel: bpftool[4034]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set May 17 00:09:53.605571 containerd[1478]: time="2025-05-17T00:09:53.605337656Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-57fb66bd94-5kh2z,Uid:2bf795f3-2e82-40ad-8128-ea6e3a8aa689,Namespace:calico-system,Attempt:0,} returns sandbox id \"afdc02776c5c96583111c3da0298e4c9aeb01414d11ec79149f1a9ac00bc6cf9\"" May 17 00:09:53.611067 containerd[1478]: time="2025-05-17T00:09:53.610872672Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 17 00:09:53.798286 systemd-networkd[1361]: vxlan.calico: Link UP May 17 00:09:53.798292 systemd-networkd[1361]: vxlan.calico: Gained carrier May 17 00:09:53.849275 containerd[1478]: time="2025-05-17T00:09:53.844083742Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:09:53.849275 containerd[1478]: time="2025-05-17T00:09:53.847998810Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:09:53.849275 containerd[1478]: time="2025-05-17T00:09:53.848125532Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 17 00:09:53.853238 kubelet[2663]: E0517 00:09:53.852821 2663 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:09:53.853238 kubelet[2663]: E0517 00:09:53.852923 2663 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:09:53.883739 kubelet[2663]: E0517 00:09:53.883631 2663 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:75cc68a695ad45e1ba6d530375dd3c59,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-n5z7t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-57fb66bd94-5kh2z_calico-system(2bf795f3-2e82-40ad-8128-ea6e3a8aa689): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:09:53.886883 containerd[1478]: time="2025-05-17T00:09:53.886182189Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 17 00:09:54.123329 containerd[1478]: time="2025-05-17T00:09:54.123126574Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:09:54.126042 containerd[1478]: time="2025-05-17T00:09:54.125663338Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:09:54.126042 containerd[1478]: time="2025-05-17T00:09:54.125969103Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 17 00:09:54.126474 kubelet[2663]: E0517 00:09:54.126346 2663 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected 
status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:09:54.126474 kubelet[2663]: E0517 00:09:54.126409 2663 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:09:54.128131 kubelet[2663]: E0517 00:09:54.126580 2663 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n5z7t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-57fb66bd94-5kh2z_calico-system(2bf795f3-2e82-40ad-8128-ea6e3a8aa689): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:09:54.128721 kubelet[2663]: E0517 00:09:54.128614 2663 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to 
fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-57fb66bd94-5kh2z" podUID="2bf795f3-2e82-40ad-8128-ea6e3a8aa689" May 17 00:09:54.780241 kubelet[2663]: E0517 00:09:54.780119 2663 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-57fb66bd94-5kh2z" podUID="2bf795f3-2e82-40ad-8128-ea6e3a8aa689" May 17 00:09:54.821573 systemd-networkd[1361]: cali534de585569: Gained IPv6LL May 17 00:09:55.780837 systemd-networkd[1361]: vxlan.calico: Gained IPv6LL May 17 00:09:57.525152 containerd[1478]: time="2025-05-17T00:09:57.525116817Z" level=info msg="StopPodSandbox for \"6d99f934cb209fcc4f03e983358bb84f11a808a72b609ad33f4809382dbcf3be\"" May 17 00:09:57.526393 containerd[1478]: time="2025-05-17T00:09:57.525216979Z" level=info msg="StopPodSandbox for \"405e8a3eaca632c9d44234624a891830a4e09e8181ceded076e4a57a372e98f9\"" May 17 00:09:57.691592 containerd[1478]: 2025-05-17 00:09:57.603 [INFO][4143] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6d99f934cb209fcc4f03e983358bb84f11a808a72b609ad33f4809382dbcf3be" May 17 00:09:57.691592 containerd[1478]: 2025-05-17 00:09:57.604 [INFO][4143] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6d99f934cb209fcc4f03e983358bb84f11a808a72b609ad33f4809382dbcf3be" iface="eth0" netns="/var/run/netns/cni-20d9779e-dab6-60c7-e3ee-398dfe4122ee" May 17 00:09:57.691592 containerd[1478]: 2025-05-17 00:09:57.605 [INFO][4143] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6d99f934cb209fcc4f03e983358bb84f11a808a72b609ad33f4809382dbcf3be" iface="eth0" netns="/var/run/netns/cni-20d9779e-dab6-60c7-e3ee-398dfe4122ee" May 17 00:09:57.691592 containerd[1478]: 2025-05-17 00:09:57.606 [INFO][4143] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="6d99f934cb209fcc4f03e983358bb84f11a808a72b609ad33f4809382dbcf3be" iface="eth0" netns="/var/run/netns/cni-20d9779e-dab6-60c7-e3ee-398dfe4122ee" May 17 00:09:57.691592 containerd[1478]: 2025-05-17 00:09:57.606 [INFO][4143] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6d99f934cb209fcc4f03e983358bb84f11a808a72b609ad33f4809382dbcf3be" May 17 00:09:57.691592 containerd[1478]: 2025-05-17 00:09:57.606 [INFO][4143] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6d99f934cb209fcc4f03e983358bb84f11a808a72b609ad33f4809382dbcf3be" May 17 00:09:57.691592 containerd[1478]: 2025-05-17 00:09:57.654 [INFO][4156] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6d99f934cb209fcc4f03e983358bb84f11a808a72b609ad33f4809382dbcf3be" HandleID="k8s-pod-network.6d99f934cb209fcc4f03e983358bb84f11a808a72b609ad33f4809382dbcf3be" Workload="ci--4081--3--3--n--3b0dbcbd78-k8s-calico--apiserver--654599565b--85g84-eth0" May 17 00:09:57.691592 containerd[1478]: 2025-05-17 00:09:57.654 [INFO][4156] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:09:57.691592 containerd[1478]: 2025-05-17 00:09:57.654 [INFO][4156] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:09:57.691592 containerd[1478]: 2025-05-17 00:09:57.670 [WARNING][4156] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6d99f934cb209fcc4f03e983358bb84f11a808a72b609ad33f4809382dbcf3be" HandleID="k8s-pod-network.6d99f934cb209fcc4f03e983358bb84f11a808a72b609ad33f4809382dbcf3be" Workload="ci--4081--3--3--n--3b0dbcbd78-k8s-calico--apiserver--654599565b--85g84-eth0" May 17 00:09:57.691592 containerd[1478]: 2025-05-17 00:09:57.670 [INFO][4156] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6d99f934cb209fcc4f03e983358bb84f11a808a72b609ad33f4809382dbcf3be" HandleID="k8s-pod-network.6d99f934cb209fcc4f03e983358bb84f11a808a72b609ad33f4809382dbcf3be" Workload="ci--4081--3--3--n--3b0dbcbd78-k8s-calico--apiserver--654599565b--85g84-eth0" May 17 00:09:57.691592 containerd[1478]: 2025-05-17 00:09:57.680 [INFO][4156] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:09:57.691592 containerd[1478]: 2025-05-17 00:09:57.686 [INFO][4143] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6d99f934cb209fcc4f03e983358bb84f11a808a72b609ad33f4809382dbcf3be" May 17 00:09:57.696303 systemd[1]: run-netns-cni\x2d20d9779e\x2ddab6\x2d60c7\x2de3ee\x2d398dfe4122ee.mount: Deactivated successfully. May 17 00:09:57.708845 containerd[1478]: time="2025-05-17T00:09:57.708398960Z" level=info msg="TearDown network for sandbox \"6d99f934cb209fcc4f03e983358bb84f11a808a72b609ad33f4809382dbcf3be\" successfully" May 17 00:09:57.709010 containerd[1478]: time="2025-05-17T00:09:57.708975730Z" level=info msg="StopPodSandbox for \"6d99f934cb209fcc4f03e983358bb84f11a808a72b609ad33f4809382dbcf3be\" returns successfully" May 17 00:09:57.710634 containerd[1478]: time="2025-05-17T00:09:57.710583718Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-654599565b-85g84,Uid:cc5a0d8a-21f4-4d9d-bd13-f95a58141562,Namespace:calico-apiserver,Attempt:1,}" May 17 00:09:57.742245 containerd[1478]: 2025-05-17 00:09:57.632 [INFO][4144] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="405e8a3eaca632c9d44234624a891830a4e09e8181ceded076e4a57a372e98f9" May 17 00:09:57.742245 containerd[1478]: 2025-05-17 00:09:57.632 [INFO][4144] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="405e8a3eaca632c9d44234624a891830a4e09e8181ceded076e4a57a372e98f9" iface="eth0" netns="/var/run/netns/cni-72d54126-ca56-7fd0-5552-2f24349df238" May 17 00:09:57.742245 containerd[1478]: 2025-05-17 00:09:57.633 [INFO][4144] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="405e8a3eaca632c9d44234624a891830a4e09e8181ceded076e4a57a372e98f9" iface="eth0" netns="/var/run/netns/cni-72d54126-ca56-7fd0-5552-2f24349df238" May 17 00:09:57.742245 containerd[1478]: 2025-05-17 00:09:57.634 [INFO][4144] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="405e8a3eaca632c9d44234624a891830a4e09e8181ceded076e4a57a372e98f9" iface="eth0" netns="/var/run/netns/cni-72d54126-ca56-7fd0-5552-2f24349df238" May 17 00:09:57.742245 containerd[1478]: 2025-05-17 00:09:57.634 [INFO][4144] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="405e8a3eaca632c9d44234624a891830a4e09e8181ceded076e4a57a372e98f9" May 17 00:09:57.742245 containerd[1478]: 2025-05-17 00:09:57.634 [INFO][4144] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="405e8a3eaca632c9d44234624a891830a4e09e8181ceded076e4a57a372e98f9" May 17 00:09:57.742245 containerd[1478]: 2025-05-17 00:09:57.692 [INFO][4162] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="405e8a3eaca632c9d44234624a891830a4e09e8181ceded076e4a57a372e98f9" HandleID="k8s-pod-network.405e8a3eaca632c9d44234624a891830a4e09e8181ceded076e4a57a372e98f9" Workload="ci--4081--3--3--n--3b0dbcbd78-k8s-csi--node--driver--g9lf2-eth0" May 17 00:09:57.742245 containerd[1478]: 2025-05-17 00:09:57.695 [INFO][4162] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:09:57.742245 containerd[1478]: 2025-05-17 00:09:57.695 [INFO][4162] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:09:57.742245 containerd[1478]: 2025-05-17 00:09:57.729 [WARNING][4162] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="405e8a3eaca632c9d44234624a891830a4e09e8181ceded076e4a57a372e98f9" HandleID="k8s-pod-network.405e8a3eaca632c9d44234624a891830a4e09e8181ceded076e4a57a372e98f9" Workload="ci--4081--3--3--n--3b0dbcbd78-k8s-csi--node--driver--g9lf2-eth0" May 17 00:09:57.742245 containerd[1478]: 2025-05-17 00:09:57.729 [INFO][4162] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="405e8a3eaca632c9d44234624a891830a4e09e8181ceded076e4a57a372e98f9" HandleID="k8s-pod-network.405e8a3eaca632c9d44234624a891830a4e09e8181ceded076e4a57a372e98f9" Workload="ci--4081--3--3--n--3b0dbcbd78-k8s-csi--node--driver--g9lf2-eth0" May 17 00:09:57.742245 containerd[1478]: 2025-05-17 00:09:57.733 [INFO][4162] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:09:57.742245 containerd[1478]: 2025-05-17 00:09:57.740 [INFO][4144] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="405e8a3eaca632c9d44234624a891830a4e09e8181ceded076e4a57a372e98f9" May 17 00:09:57.746871 containerd[1478]: time="2025-05-17T00:09:57.746821076Z" level=info msg="TearDown network for sandbox \"405e8a3eaca632c9d44234624a891830a4e09e8181ceded076e4a57a372e98f9\" successfully" May 17 00:09:57.746871 containerd[1478]: time="2025-05-17T00:09:57.746863756Z" level=info msg="StopPodSandbox for \"405e8a3eaca632c9d44234624a891830a4e09e8181ceded076e4a57a372e98f9\" returns successfully" May 17 00:09:57.747956 systemd[1]: run-netns-cni\x2d72d54126\x2dca56\x2d7fd0\x2d5552\x2d2f24349df238.mount: Deactivated successfully. 
May 17 00:09:57.750132 containerd[1478]: time="2025-05-17T00:09:57.749778928Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-g9lf2,Uid:bf7f5509-6eae-41c9-a82c-194eb8fdf825,Namespace:calico-system,Attempt:1,}" May 17 00:09:57.940353 systemd-networkd[1361]: cali039ce6022f9: Link UP May 17 00:09:57.941926 systemd-networkd[1361]: cali039ce6022f9: Gained carrier May 17 00:09:57.968563 containerd[1478]: 2025-05-17 00:09:57.832 [INFO][4171] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--n--3b0dbcbd78-k8s-calico--apiserver--654599565b--85g84-eth0 calico-apiserver-654599565b- calico-apiserver cc5a0d8a-21f4-4d9d-bd13-f95a58141562 918 0 2025-05-17 00:09:33 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:654599565b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-3-n-3b0dbcbd78 calico-apiserver-654599565b-85g84 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali039ce6022f9 [] [] }} ContainerID="4db8268c0873154b35bef0066044e1d6af482044b459dfa803bef728752535ba" Namespace="calico-apiserver" Pod="calico-apiserver-654599565b-85g84" WorkloadEndpoint="ci--4081--3--3--n--3b0dbcbd78-k8s-calico--apiserver--654599565b--85g84-" May 17 00:09:57.968563 containerd[1478]: 2025-05-17 00:09:57.832 [INFO][4171] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4db8268c0873154b35bef0066044e1d6af482044b459dfa803bef728752535ba" Namespace="calico-apiserver" Pod="calico-apiserver-654599565b-85g84" WorkloadEndpoint="ci--4081--3--3--n--3b0dbcbd78-k8s-calico--apiserver--654599565b--85g84-eth0" May 17 00:09:57.968563 containerd[1478]: 2025-05-17 00:09:57.870 [INFO][4194] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4db8268c0873154b35bef0066044e1d6af482044b459dfa803bef728752535ba" HandleID="k8s-pod-network.4db8268c0873154b35bef0066044e1d6af482044b459dfa803bef728752535ba" Workload="ci--4081--3--3--n--3b0dbcbd78-k8s-calico--apiserver--654599565b--85g84-eth0" May 17 00:09:57.968563 containerd[1478]: 2025-05-17 00:09:57.870 [INFO][4194] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4db8268c0873154b35bef0066044e1d6af482044b459dfa803bef728752535ba" HandleID="k8s-pod-network.4db8268c0873154b35bef0066044e1d6af482044b459dfa803bef728752535ba" Workload="ci--4081--3--3--n--3b0dbcbd78-k8s-calico--apiserver--654599565b--85g84-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c5920), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-3-n-3b0dbcbd78", "pod":"calico-apiserver-654599565b-85g84", "timestamp":"2025-05-17 00:09:57.870720214 +0000 UTC"}, Hostname:"ci-4081-3-3-n-3b0dbcbd78", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:09:57.968563 containerd[1478]: 2025-05-17 00:09:57.871 [INFO][4194] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:09:57.968563 containerd[1478]: 2025-05-17 00:09:57.871 [INFO][4194] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:09:57.968563 containerd[1478]: 2025-05-17 00:09:57.871 [INFO][4194] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-n-3b0dbcbd78' May 17 00:09:57.968563 containerd[1478]: 2025-05-17 00:09:57.883 [INFO][4194] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4db8268c0873154b35bef0066044e1d6af482044b459dfa803bef728752535ba" host="ci-4081-3-3-n-3b0dbcbd78" May 17 00:09:57.968563 containerd[1478]: 2025-05-17 00:09:57.891 [INFO][4194] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-3-n-3b0dbcbd78" May 17 00:09:57.968563 containerd[1478]: 2025-05-17 00:09:57.902 [INFO][4194] ipam/ipam.go 511: Trying affinity for 192.168.2.0/26 host="ci-4081-3-3-n-3b0dbcbd78" May 17 00:09:57.968563 containerd[1478]: 2025-05-17 00:09:57.905 [INFO][4194] ipam/ipam.go 158: Attempting to load block cidr=192.168.2.0/26 host="ci-4081-3-3-n-3b0dbcbd78" May 17 00:09:57.968563 containerd[1478]: 2025-05-17 00:09:57.909 [INFO][4194] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.2.0/26 host="ci-4081-3-3-n-3b0dbcbd78" May 17 00:09:57.968563 containerd[1478]: 2025-05-17 00:09:57.909 [INFO][4194] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.2.0/26 handle="k8s-pod-network.4db8268c0873154b35bef0066044e1d6af482044b459dfa803bef728752535ba" host="ci-4081-3-3-n-3b0dbcbd78" May 17 00:09:57.968563 containerd[1478]: 2025-05-17 00:09:57.912 [INFO][4194] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4db8268c0873154b35bef0066044e1d6af482044b459dfa803bef728752535ba May 17 00:09:57.968563 containerd[1478]: 2025-05-17 00:09:57.920 [INFO][4194] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.2.0/26 handle="k8s-pod-network.4db8268c0873154b35bef0066044e1d6af482044b459dfa803bef728752535ba" host="ci-4081-3-3-n-3b0dbcbd78" May 17 00:09:57.968563 containerd[1478]: 2025-05-17 00:09:57.928 [INFO][4194] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.2.2/26] block=192.168.2.0/26 handle="k8s-pod-network.4db8268c0873154b35bef0066044e1d6af482044b459dfa803bef728752535ba" host="ci-4081-3-3-n-3b0dbcbd78" May 17 00:09:57.968563 containerd[1478]: 2025-05-17 00:09:57.928 [INFO][4194] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.2.2/26] handle="k8s-pod-network.4db8268c0873154b35bef0066044e1d6af482044b459dfa803bef728752535ba" host="ci-4081-3-3-n-3b0dbcbd78" May 17 00:09:57.968563 containerd[1478]: 2025-05-17 00:09:57.928 [INFO][4194] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 17 00:09:57.968563 containerd[1478]: 2025-05-17 00:09:57.929 [INFO][4194] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.2.2/26] IPv6=[] ContainerID="4db8268c0873154b35bef0066044e1d6af482044b459dfa803bef728752535ba" HandleID="k8s-pod-network.4db8268c0873154b35bef0066044e1d6af482044b459dfa803bef728752535ba" Workload="ci--4081--3--3--n--3b0dbcbd78-k8s-calico--apiserver--654599565b--85g84-eth0" May 17 00:09:57.970098 containerd[1478]: 2025-05-17 00:09:57.932 [INFO][4171] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4db8268c0873154b35bef0066044e1d6af482044b459dfa803bef728752535ba" Namespace="calico-apiserver" Pod="calico-apiserver-654599565b-85g84" WorkloadEndpoint="ci--4081--3--3--n--3b0dbcbd78-k8s-calico--apiserver--654599565b--85g84-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--3b0dbcbd78-k8s-calico--apiserver--654599565b--85g84-eth0", GenerateName:"calico-apiserver-654599565b-", Namespace:"calico-apiserver", SelfLink:"", UID:"cc5a0d8a-21f4-4d9d-bd13-f95a58141562", ResourceVersion:"918", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 9, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"654599565b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-3b0dbcbd78", ContainerID:"", Pod:"calico-apiserver-654599565b-85g84", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.2.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali039ce6022f9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:09:57.970098 containerd[1478]: 2025-05-17 00:09:57.933 [INFO][4171] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.2.2/32] ContainerID="4db8268c0873154b35bef0066044e1d6af482044b459dfa803bef728752535ba" Namespace="calico-apiserver" Pod="calico-apiserver-654599565b-85g84" WorkloadEndpoint="ci--4081--3--3--n--3b0dbcbd78-k8s-calico--apiserver--654599565b--85g84-eth0" May 17 00:09:57.970098 containerd[1478]: 2025-05-17 00:09:57.933 [INFO][4171] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali039ce6022f9 ContainerID="4db8268c0873154b35bef0066044e1d6af482044b459dfa803bef728752535ba" Namespace="calico-apiserver" Pod="calico-apiserver-654599565b-85g84" WorkloadEndpoint="ci--4081--3--3--n--3b0dbcbd78-k8s-calico--apiserver--654599565b--85g84-eth0" May 17 00:09:57.970098 containerd[1478]: 2025-05-17 00:09:57.944 [INFO][4171] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4db8268c0873154b35bef0066044e1d6af482044b459dfa803bef728752535ba" Namespace="calico-apiserver" Pod="calico-apiserver-654599565b-85g84" WorkloadEndpoint="ci--4081--3--3--n--3b0dbcbd78-k8s-calico--apiserver--654599565b--85g84-eth0" May 17 00:09:57.970098 containerd[1478]: 2025-05-17 00:09:57.945 
[INFO][4171] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4db8268c0873154b35bef0066044e1d6af482044b459dfa803bef728752535ba" Namespace="calico-apiserver" Pod="calico-apiserver-654599565b-85g84" WorkloadEndpoint="ci--4081--3--3--n--3b0dbcbd78-k8s-calico--apiserver--654599565b--85g84-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--3b0dbcbd78-k8s-calico--apiserver--654599565b--85g84-eth0", GenerateName:"calico-apiserver-654599565b-", Namespace:"calico-apiserver", SelfLink:"", UID:"cc5a0d8a-21f4-4d9d-bd13-f95a58141562", ResourceVersion:"918", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 9, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"654599565b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-3b0dbcbd78", ContainerID:"4db8268c0873154b35bef0066044e1d6af482044b459dfa803bef728752535ba", Pod:"calico-apiserver-654599565b-85g84", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.2.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali039ce6022f9", MAC:"d6:2d:b8:78:0f:7d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:09:57.970098 containerd[1478]: 2025-05-17 00:09:57.961 [INFO][4171] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4db8268c0873154b35bef0066044e1d6af482044b459dfa803bef728752535ba" Namespace="calico-apiserver" Pod="calico-apiserver-654599565b-85g84" WorkloadEndpoint="ci--4081--3--3--n--3b0dbcbd78-k8s-calico--apiserver--654599565b--85g84-eth0" May 17 00:09:57.995915 containerd[1478]: time="2025-05-17T00:09:57.995781653Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:09:57.996082 containerd[1478]: time="2025-05-17T00:09:57.995939776Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:09:57.996082 containerd[1478]: time="2025-05-17T00:09:57.995967496Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:09:57.996165 containerd[1478]: time="2025-05-17T00:09:57.996119499Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:09:58.024914 systemd[1]: Started cri-containerd-4db8268c0873154b35bef0066044e1d6af482044b459dfa803bef728752535ba.scope - libcontainer container 4db8268c0873154b35bef0066044e1d6af482044b459dfa803bef728752535ba. 
May 17 00:09:58.054922 systemd-networkd[1361]: cali0056da4d310: Link UP May 17 00:09:58.055207 systemd-networkd[1361]: cali0056da4d310: Gained carrier May 17 00:09:58.078074 containerd[1478]: 2025-05-17 00:09:57.835 [INFO][4180] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--n--3b0dbcbd78-k8s-csi--node--driver--g9lf2-eth0 csi-node-driver- calico-system bf7f5509-6eae-41c9-a82c-194eb8fdf825 919 0 2025-05-17 00:09:36 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:78f6f74485 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-3-3-n-3b0dbcbd78 csi-node-driver-g9lf2 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali0056da4d310 [] [] }} ContainerID="397e814ed288bdc3b2f38604f1a686ae9d938678ae3384aa254dfbc9b065f84d" Namespace="calico-system" Pod="csi-node-driver-g9lf2" WorkloadEndpoint="ci--4081--3--3--n--3b0dbcbd78-k8s-csi--node--driver--g9lf2-" May 17 00:09:58.078074 containerd[1478]: 2025-05-17 00:09:57.835 [INFO][4180] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="397e814ed288bdc3b2f38604f1a686ae9d938678ae3384aa254dfbc9b065f84d" Namespace="calico-system" Pod="csi-node-driver-g9lf2" WorkloadEndpoint="ci--4081--3--3--n--3b0dbcbd78-k8s-csi--node--driver--g9lf2-eth0" May 17 00:09:58.078074 containerd[1478]: 2025-05-17 00:09:57.877 [INFO][4199] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="397e814ed288bdc3b2f38604f1a686ae9d938678ae3384aa254dfbc9b065f84d" HandleID="k8s-pod-network.397e814ed288bdc3b2f38604f1a686ae9d938678ae3384aa254dfbc9b065f84d" Workload="ci--4081--3--3--n--3b0dbcbd78-k8s-csi--node--driver--g9lf2-eth0" May 17 00:09:58.078074 containerd[1478]: 2025-05-17 00:09:57.877 [INFO][4199] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="397e814ed288bdc3b2f38604f1a686ae9d938678ae3384aa254dfbc9b065f84d" HandleID="k8s-pod-network.397e814ed288bdc3b2f38604f1a686ae9d938678ae3384aa254dfbc9b065f84d" Workload="ci--4081--3--3--n--3b0dbcbd78-k8s-csi--node--driver--g9lf2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d76b0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-3-n-3b0dbcbd78", "pod":"csi-node-driver-g9lf2", "timestamp":"2025-05-17 00:09:57.877196728 +0000 UTC"}, Hostname:"ci-4081-3-3-n-3b0dbcbd78", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:09:58.078074 containerd[1478]: 2025-05-17 00:09:57.877 [INFO][4199] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:09:58.078074 containerd[1478]: 2025-05-17 00:09:57.928 [INFO][4199] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:09:58.078074 containerd[1478]: 2025-05-17 00:09:57.929 [INFO][4199] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-n-3b0dbcbd78' May 17 00:09:58.078074 containerd[1478]: 2025-05-17 00:09:57.986 [INFO][4199] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.397e814ed288bdc3b2f38604f1a686ae9d938678ae3384aa254dfbc9b065f84d" host="ci-4081-3-3-n-3b0dbcbd78" May 17 00:09:58.078074 containerd[1478]: 2025-05-17 00:09:57.994 [INFO][4199] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-3-n-3b0dbcbd78" May 17 00:09:58.078074 containerd[1478]: 2025-05-17 00:09:58.003 [INFO][4199] ipam/ipam.go 511: Trying affinity for 192.168.2.0/26 host="ci-4081-3-3-n-3b0dbcbd78" May 17 00:09:58.078074 containerd[1478]: 2025-05-17 00:09:58.008 [INFO][4199] ipam/ipam.go 158: Attempting to load block cidr=192.168.2.0/26 host="ci-4081-3-3-n-3b0dbcbd78" May 17 00:09:58.078074 containerd[1478]: 2025-05-17 00:09:58.014 [INFO][4199] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.2.0/26 host="ci-4081-3-3-n-3b0dbcbd78" May 17 00:09:58.078074 containerd[1478]: 2025-05-17 00:09:58.015 [INFO][4199] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.2.0/26 handle="k8s-pod-network.397e814ed288bdc3b2f38604f1a686ae9d938678ae3384aa254dfbc9b065f84d" host="ci-4081-3-3-n-3b0dbcbd78" May 17 00:09:58.078074 containerd[1478]: 2025-05-17 00:09:58.018 [INFO][4199] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.397e814ed288bdc3b2f38604f1a686ae9d938678ae3384aa254dfbc9b065f84d May 17 00:09:58.078074 containerd[1478]: 2025-05-17 00:09:58.029 [INFO][4199] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.2.0/26 handle="k8s-pod-network.397e814ed288bdc3b2f38604f1a686ae9d938678ae3384aa254dfbc9b065f84d" host="ci-4081-3-3-n-3b0dbcbd78" May 17 00:09:58.078074 containerd[1478]: 2025-05-17 00:09:58.040 [INFO][4199] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.2.3/26] block=192.168.2.0/26 handle="k8s-pod-network.397e814ed288bdc3b2f38604f1a686ae9d938678ae3384aa254dfbc9b065f84d" host="ci-4081-3-3-n-3b0dbcbd78" May 17 00:09:58.078074 containerd[1478]: 2025-05-17 00:09:58.040 [INFO][4199] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.2.3/26] handle="k8s-pod-network.397e814ed288bdc3b2f38604f1a686ae9d938678ae3384aa254dfbc9b065f84d" host="ci-4081-3-3-n-3b0dbcbd78" May 17 00:09:58.078074 containerd[1478]: 2025-05-17 00:09:58.040 [INFO][4199] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 17 00:09:58.078074 containerd[1478]: 2025-05-17 00:09:58.040 [INFO][4199] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.2.3/26] IPv6=[] ContainerID="397e814ed288bdc3b2f38604f1a686ae9d938678ae3384aa254dfbc9b065f84d" HandleID="k8s-pod-network.397e814ed288bdc3b2f38604f1a686ae9d938678ae3384aa254dfbc9b065f84d" Workload="ci--4081--3--3--n--3b0dbcbd78-k8s-csi--node--driver--g9lf2-eth0" May 17 00:09:58.079179 containerd[1478]: 2025-05-17 00:09:58.045 [INFO][4180] cni-plugin/k8s.go 418: Populated endpoint ContainerID="397e814ed288bdc3b2f38604f1a686ae9d938678ae3384aa254dfbc9b065f84d" Namespace="calico-system" Pod="csi-node-driver-g9lf2" WorkloadEndpoint="ci--4081--3--3--n--3b0dbcbd78-k8s-csi--node--driver--g9lf2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--3b0dbcbd78-k8s-csi--node--driver--g9lf2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"bf7f5509-6eae-41c9-a82c-194eb8fdf825", ResourceVersion:"919", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 9, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78f6f74485", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-3b0dbcbd78", ContainerID:"", Pod:"csi-node-driver-g9lf2", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.2.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0056da4d310", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:09:58.079179 containerd[1478]: 2025-05-17 00:09:58.045 [INFO][4180] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.2.3/32] ContainerID="397e814ed288bdc3b2f38604f1a686ae9d938678ae3384aa254dfbc9b065f84d" Namespace="calico-system" Pod="csi-node-driver-g9lf2" WorkloadEndpoint="ci--4081--3--3--n--3b0dbcbd78-k8s-csi--node--driver--g9lf2-eth0" May 17 00:09:58.079179 containerd[1478]: 2025-05-17 00:09:58.045 [INFO][4180] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0056da4d310 ContainerID="397e814ed288bdc3b2f38604f1a686ae9d938678ae3384aa254dfbc9b065f84d" Namespace="calico-system" Pod="csi-node-driver-g9lf2" WorkloadEndpoint="ci--4081--3--3--n--3b0dbcbd78-k8s-csi--node--driver--g9lf2-eth0" May 17 00:09:58.079179 containerd[1478]: 2025-05-17 00:09:58.056 [INFO][4180] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="397e814ed288bdc3b2f38604f1a686ae9d938678ae3384aa254dfbc9b065f84d" Namespace="calico-system" Pod="csi-node-driver-g9lf2" WorkloadEndpoint="ci--4081--3--3--n--3b0dbcbd78-k8s-csi--node--driver--g9lf2-eth0" May 17 00:09:58.079179 containerd[1478]: 2025-05-17 00:09:58.058 [INFO][4180] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="397e814ed288bdc3b2f38604f1a686ae9d938678ae3384aa254dfbc9b065f84d" Namespace="calico-system" Pod="csi-node-driver-g9lf2" WorkloadEndpoint="ci--4081--3--3--n--3b0dbcbd78-k8s-csi--node--driver--g9lf2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--3b0dbcbd78-k8s-csi--node--driver--g9lf2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"bf7f5509-6eae-41c9-a82c-194eb8fdf825", ResourceVersion:"919", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 9, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78f6f74485", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-3b0dbcbd78", ContainerID:"397e814ed288bdc3b2f38604f1a686ae9d938678ae3384aa254dfbc9b065f84d", Pod:"csi-node-driver-g9lf2", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.2.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0056da4d310", MAC:"1e:69:48:f7:fe:f6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:09:58.079179 containerd[1478]: 2025-05-17 00:09:58.074 [INFO][4180] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="397e814ed288bdc3b2f38604f1a686ae9d938678ae3384aa254dfbc9b065f84d" Namespace="calico-system" Pod="csi-node-driver-g9lf2" WorkloadEndpoint="ci--4081--3--3--n--3b0dbcbd78-k8s-csi--node--driver--g9lf2-eth0" May 17 00:09:58.098864 containerd[1478]: time="2025-05-17T00:09:58.098759031Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-654599565b-85g84,Uid:cc5a0d8a-21f4-4d9d-bd13-f95a58141562,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"4db8268c0873154b35bef0066044e1d6af482044b459dfa803bef728752535ba\"" May 17 00:09:58.102787 containerd[1478]: time="2025-05-17T00:09:58.102575978Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\"" May 17 00:09:58.119932 containerd[1478]: time="2025-05-17T00:09:58.119613199Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:09:58.119932 containerd[1478]: time="2025-05-17T00:09:58.119737161Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:09:58.120345 containerd[1478]: time="2025-05-17T00:09:58.120239330Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:09:58.121337 containerd[1478]: time="2025-05-17T00:09:58.121019504Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:09:58.143663 systemd[1]: Started cri-containerd-397e814ed288bdc3b2f38604f1a686ae9d938678ae3384aa254dfbc9b065f84d.scope - libcontainer container 397e814ed288bdc3b2f38604f1a686ae9d938678ae3384aa254dfbc9b065f84d. May 17 00:09:58.181479 containerd[1478]: time="2025-05-17T00:09:58.181393769Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-g9lf2,Uid:bf7f5509-6eae-41c9-a82c-194eb8fdf825,Namespace:calico-system,Attempt:1,} returns sandbox id \"397e814ed288bdc3b2f38604f1a686ae9d938678ae3384aa254dfbc9b065f84d\"" May 17 00:09:58.521933 containerd[1478]: time="2025-05-17T00:09:58.521319690Z" level=info msg="StopPodSandbox for \"48967515e7ac1ae937a4c1d9fb98f9118c2ae302024bc4824d15d9e91f8bfa36\"" May 17 00:09:58.521933 containerd[1478]: time="2025-05-17T00:09:58.521386331Z" level=info msg="StopPodSandbox for \"80dcc9479798289492b29345865fd69c904dec7b796e1b7f06108c5e7b5a8227\"" May 17 00:09:58.664624 containerd[1478]: 2025-05-17 00:09:58.598 [INFO][4331] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="80dcc9479798289492b29345865fd69c904dec7b796e1b7f06108c5e7b5a8227" May 17 00:09:58.664624 containerd[1478]: 2025-05-17 00:09:58.598 [INFO][4331] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="80dcc9479798289492b29345865fd69c904dec7b796e1b7f06108c5e7b5a8227" iface="eth0" netns="/var/run/netns/cni-bd068e16-af3e-5ead-41e8-bf8cd85c8362" May 17 00:09:58.664624 containerd[1478]: 2025-05-17 00:09:58.599 [INFO][4331] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="80dcc9479798289492b29345865fd69c904dec7b796e1b7f06108c5e7b5a8227" iface="eth0" netns="/var/run/netns/cni-bd068e16-af3e-5ead-41e8-bf8cd85c8362" May 17 00:09:58.664624 containerd[1478]: 2025-05-17 00:09:58.599 [INFO][4331] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="80dcc9479798289492b29345865fd69c904dec7b796e1b7f06108c5e7b5a8227" iface="eth0" netns="/var/run/netns/cni-bd068e16-af3e-5ead-41e8-bf8cd85c8362" May 17 00:09:58.664624 containerd[1478]: 2025-05-17 00:09:58.599 [INFO][4331] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="80dcc9479798289492b29345865fd69c904dec7b796e1b7f06108c5e7b5a8227" May 17 00:09:58.664624 containerd[1478]: 2025-05-17 00:09:58.599 [INFO][4331] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="80dcc9479798289492b29345865fd69c904dec7b796e1b7f06108c5e7b5a8227" May 17 00:09:58.664624 containerd[1478]: 2025-05-17 00:09:58.643 [INFO][4340] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="80dcc9479798289492b29345865fd69c904dec7b796e1b7f06108c5e7b5a8227" HandleID="k8s-pod-network.80dcc9479798289492b29345865fd69c904dec7b796e1b7f06108c5e7b5a8227" Workload="ci--4081--3--3--n--3b0dbcbd78-k8s-calico--kube--controllers--645f967f78--7knkm-eth0" May 17 00:09:58.664624 containerd[1478]: 2025-05-17 00:09:58.643 [INFO][4340] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:09:58.664624 containerd[1478]: 2025-05-17 00:09:58.643 [INFO][4340] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:09:58.664624 containerd[1478]: 2025-05-17 00:09:58.655 [WARNING][4340] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="80dcc9479798289492b29345865fd69c904dec7b796e1b7f06108c5e7b5a8227" HandleID="k8s-pod-network.80dcc9479798289492b29345865fd69c904dec7b796e1b7f06108c5e7b5a8227" Workload="ci--4081--3--3--n--3b0dbcbd78-k8s-calico--kube--controllers--645f967f78--7knkm-eth0" May 17 00:09:58.664624 containerd[1478]: 2025-05-17 00:09:58.655 [INFO][4340] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="80dcc9479798289492b29345865fd69c904dec7b796e1b7f06108c5e7b5a8227" HandleID="k8s-pod-network.80dcc9479798289492b29345865fd69c904dec7b796e1b7f06108c5e7b5a8227" Workload="ci--4081--3--3--n--3b0dbcbd78-k8s-calico--kube--controllers--645f967f78--7knkm-eth0" May 17 00:09:58.664624 containerd[1478]: 2025-05-17 00:09:58.657 [INFO][4340] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:09:58.664624 containerd[1478]: 2025-05-17 00:09:58.661 [INFO][4331] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="80dcc9479798289492b29345865fd69c904dec7b796e1b7f06108c5e7b5a8227" May 17 00:09:58.665580 containerd[1478]: time="2025-05-17T00:09:58.664843704Z" level=info msg="TearDown network for sandbox \"80dcc9479798289492b29345865fd69c904dec7b796e1b7f06108c5e7b5a8227\" successfully" May 17 00:09:58.665580 containerd[1478]: time="2025-05-17T00:09:58.664873984Z" level=info msg="StopPodSandbox for \"80dcc9479798289492b29345865fd69c904dec7b796e1b7f06108c5e7b5a8227\" returns successfully" May 17 00:09:58.666476 containerd[1478]: time="2025-05-17T00:09:58.665940523Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-645f967f78-7knkm,Uid:e26e8c4a-c5e6-41d8-924e-19ce27b5f4cd,Namespace:calico-system,Attempt:1,}" May 17 00:09:58.689507 containerd[1478]: 2025-05-17 00:09:58.595 [INFO][4324] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="48967515e7ac1ae937a4c1d9fb98f9118c2ae302024bc4824d15d9e91f8bfa36" May 17 00:09:58.689507 containerd[1478]: 2025-05-17 00:09:58.596 [INFO][4324] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="48967515e7ac1ae937a4c1d9fb98f9118c2ae302024bc4824d15d9e91f8bfa36" iface="eth0" netns="/var/run/netns/cni-f07beaf6-2b9f-5bfd-4b82-47ed8c66b694" May 17 00:09:58.689507 containerd[1478]: 2025-05-17 00:09:58.596 [INFO][4324] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="48967515e7ac1ae937a4c1d9fb98f9118c2ae302024bc4824d15d9e91f8bfa36" iface="eth0" netns="/var/run/netns/cni-f07beaf6-2b9f-5bfd-4b82-47ed8c66b694" May 17 00:09:58.689507 containerd[1478]: 2025-05-17 00:09:58.600 [INFO][4324] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="48967515e7ac1ae937a4c1d9fb98f9118c2ae302024bc4824d15d9e91f8bfa36" iface="eth0" netns="/var/run/netns/cni-f07beaf6-2b9f-5bfd-4b82-47ed8c66b694" May 17 00:09:58.689507 containerd[1478]: 2025-05-17 00:09:58.601 [INFO][4324] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="48967515e7ac1ae937a4c1d9fb98f9118c2ae302024bc4824d15d9e91f8bfa36" May 17 00:09:58.689507 containerd[1478]: 2025-05-17 00:09:58.601 [INFO][4324] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="48967515e7ac1ae937a4c1d9fb98f9118c2ae302024bc4824d15d9e91f8bfa36" May 17 00:09:58.689507 containerd[1478]: 2025-05-17 00:09:58.644 [INFO][4342] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="48967515e7ac1ae937a4c1d9fb98f9118c2ae302024bc4824d15d9e91f8bfa36" HandleID="k8s-pod-network.48967515e7ac1ae937a4c1d9fb98f9118c2ae302024bc4824d15d9e91f8bfa36" Workload="ci--4081--3--3--n--3b0dbcbd78-k8s-goldmane--78d55f7ddc--gp74j-eth0" May 17 00:09:58.689507 containerd[1478]: 2025-05-17 00:09:58.644 [INFO][4342] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:09:58.689507 containerd[1478]: 2025-05-17 00:09:58.657 [INFO][4342] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:09:58.689507 containerd[1478]: 2025-05-17 00:09:58.676 [WARNING][4342] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="48967515e7ac1ae937a4c1d9fb98f9118c2ae302024bc4824d15d9e91f8bfa36" HandleID="k8s-pod-network.48967515e7ac1ae937a4c1d9fb98f9118c2ae302024bc4824d15d9e91f8bfa36" Workload="ci--4081--3--3--n--3b0dbcbd78-k8s-goldmane--78d55f7ddc--gp74j-eth0" May 17 00:09:58.689507 containerd[1478]: 2025-05-17 00:09:58.676 [INFO][4342] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="48967515e7ac1ae937a4c1d9fb98f9118c2ae302024bc4824d15d9e91f8bfa36" HandleID="k8s-pod-network.48967515e7ac1ae937a4c1d9fb98f9118c2ae302024bc4824d15d9e91f8bfa36" Workload="ci--4081--3--3--n--3b0dbcbd78-k8s-goldmane--78d55f7ddc--gp74j-eth0" May 17 00:09:58.689507 containerd[1478]: 2025-05-17 00:09:58.678 [INFO][4342] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:09:58.689507 containerd[1478]: 2025-05-17 00:09:58.682 [INFO][4324] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="48967515e7ac1ae937a4c1d9fb98f9118c2ae302024bc4824d15d9e91f8bfa36" May 17 00:09:58.689507 containerd[1478]: time="2025-05-17T00:09:58.689354656Z" level=info msg="TearDown network for sandbox \"48967515e7ac1ae937a4c1d9fb98f9118c2ae302024bc4824d15d9e91f8bfa36\" successfully" May 17 00:09:58.689507 containerd[1478]: time="2025-05-17T00:09:58.689405497Z" level=info msg="StopPodSandbox for \"48967515e7ac1ae937a4c1d9fb98f9118c2ae302024bc4824d15d9e91f8bfa36\" returns successfully" May 17 00:09:58.690410 containerd[1478]: time="2025-05-17T00:09:58.690375634Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-gp74j,Uid:662625c5-921c-406a-9ef1-d2e70e33e339,Namespace:calico-system,Attempt:1,}" May 17 00:09:58.701364 systemd[1]: run-netns-cni\x2df07beaf6\x2d2b9f\x2d5bfd\x2d4b82\x2d47ed8c66b694.mount: Deactivated successfully. May 17 00:09:58.701481 systemd[1]: run-netns-cni\x2dbd068e16\x2daf3e\x2d5ead\x2d41e8\x2dbf8cd85c8362.mount: Deactivated successfully. 
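[Editor's note] The teardown entries above show the release side of the same transaction: the CNI DEL finds the workload's veth already gone, then releases the address first by handle ID and, when nothing is allocated under that handle (the WARNING at ipam_plugin.go 429, which is benign), falls back to releasing by workload ID before dropping the lock. A sketch of that two-phase release, continuing the hypothetical allocationBlock/hostWideLock sketch above (same caveats: illustrative names, not the real API):

// releaseByHandleThenWorkload mirrors the logged 412 -> 429 -> 440 path:
// try the handle ID first; if nothing is allocated under it, fall back
// to the workload ID so a stale allocation can still be reclaimed.
func releaseByHandleThenWorkload(b *allocationBlock, handleID, workloadID string) int {
	hostWideLock.Lock()
	defer hostWideLock.Unlock()

	released := releaseOwner(b, handleID)
	if released == 0 {
		// "Asked to release address but it doesn't exist. Ignoring" —
		// not an error: the address may never have been assigned, or
		// was recorded under the workload ID by an older CNI version.
		released = releaseOwner(b, workloadID)
	}
	return released
}

func releaseOwner(b *allocationBlock, owner string) int {
	n := 0
	for ip, id := range b.allocated {
		if id == owner {
			delete(b.allocated, ip)
			n++
		}
	}
	return n
}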
May 17 00:09:58.868499 systemd-networkd[1361]: cali18b610d7672: Link UP May 17 00:09:58.869735 systemd-networkd[1361]: cali18b610d7672: Gained carrier May 17 00:09:58.893249 containerd[1478]: 2025-05-17 00:09:58.751 [INFO][4355] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--n--3b0dbcbd78-k8s-calico--kube--controllers--645f967f78--7knkm-eth0 calico-kube-controllers-645f967f78- calico-system e26e8c4a-c5e6-41d8-924e-19ce27b5f4cd 930 0 2025-05-17 00:09:36 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:645f967f78 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-3-3-n-3b0dbcbd78 calico-kube-controllers-645f967f78-7knkm eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali18b610d7672 [] [] }} ContainerID="e4691f2defc0e33fed4422e706ca0bfa4248b820561b8ae0221895b62213e76f" Namespace="calico-system" Pod="calico-kube-controllers-645f967f78-7knkm" WorkloadEndpoint="ci--4081--3--3--n--3b0dbcbd78-k8s-calico--kube--controllers--645f967f78--7knkm-" May 17 00:09:58.893249 containerd[1478]: 2025-05-17 00:09:58.751 [INFO][4355] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e4691f2defc0e33fed4422e706ca0bfa4248b820561b8ae0221895b62213e76f" Namespace="calico-system" Pod="calico-kube-controllers-645f967f78-7knkm" WorkloadEndpoint="ci--4081--3--3--n--3b0dbcbd78-k8s-calico--kube--controllers--645f967f78--7knkm-eth0" May 17 00:09:58.893249 containerd[1478]: 2025-05-17 00:09:58.804 [INFO][4378] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e4691f2defc0e33fed4422e706ca0bfa4248b820561b8ae0221895b62213e76f" HandleID="k8s-pod-network.e4691f2defc0e33fed4422e706ca0bfa4248b820561b8ae0221895b62213e76f" Workload="ci--4081--3--3--n--3b0dbcbd78-k8s-calico--kube--controllers--645f967f78--7knkm-eth0" May 17 00:09:58.893249 containerd[1478]: 2025-05-17 00:09:58.804 [INFO][4378] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e4691f2defc0e33fed4422e706ca0bfa4248b820561b8ae0221895b62213e76f" HandleID="k8s-pod-network.e4691f2defc0e33fed4422e706ca0bfa4248b820561b8ae0221895b62213e76f" Workload="ci--4081--3--3--n--3b0dbcbd78-k8s-calico--kube--controllers--645f967f78--7knkm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002cd050), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-3-n-3b0dbcbd78", "pod":"calico-kube-controllers-645f967f78-7knkm", "timestamp":"2025-05-17 00:09:58.804231244 +0000 UTC"}, Hostname:"ci-4081-3-3-n-3b0dbcbd78", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:09:58.893249 containerd[1478]: 2025-05-17 00:09:58.804 [INFO][4378] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:09:58.893249 containerd[1478]: 2025-05-17 00:09:58.804 [INFO][4378] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:09:58.893249 containerd[1478]: 2025-05-17 00:09:58.804 [INFO][4378] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-n-3b0dbcbd78' May 17 00:09:58.893249 containerd[1478]: 2025-05-17 00:09:58.819 [INFO][4378] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e4691f2defc0e33fed4422e706ca0bfa4248b820561b8ae0221895b62213e76f" host="ci-4081-3-3-n-3b0dbcbd78" May 17 00:09:58.893249 containerd[1478]: 2025-05-17 00:09:58.827 [INFO][4378] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-3-n-3b0dbcbd78" May 17 00:09:58.893249 containerd[1478]: 2025-05-17 00:09:58.836 [INFO][4378] ipam/ipam.go 511: Trying affinity for 192.168.2.0/26 host="ci-4081-3-3-n-3b0dbcbd78" May 17 00:09:58.893249 containerd[1478]: 2025-05-17 00:09:58.839 [INFO][4378] ipam/ipam.go 158: Attempting to load block cidr=192.168.2.0/26 host="ci-4081-3-3-n-3b0dbcbd78" May 17 00:09:58.893249 containerd[1478]: 2025-05-17 00:09:58.841 [INFO][4378] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.2.0/26 host="ci-4081-3-3-n-3b0dbcbd78" May 17 00:09:58.893249 containerd[1478]: 2025-05-17 00:09:58.841 [INFO][4378] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.2.0/26 handle="k8s-pod-network.e4691f2defc0e33fed4422e706ca0bfa4248b820561b8ae0221895b62213e76f" host="ci-4081-3-3-n-3b0dbcbd78" May 17 00:09:58.893249 containerd[1478]: 2025-05-17 00:09:58.844 [INFO][4378] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e4691f2defc0e33fed4422e706ca0bfa4248b820561b8ae0221895b62213e76f May 17 00:09:58.893249 containerd[1478]: 2025-05-17 00:09:58.849 [INFO][4378] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.2.0/26 handle="k8s-pod-network.e4691f2defc0e33fed4422e706ca0bfa4248b820561b8ae0221895b62213e76f" host="ci-4081-3-3-n-3b0dbcbd78" May 17 00:09:58.893249 containerd[1478]: 2025-05-17 00:09:58.860 [INFO][4378] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.2.4/26] block=192.168.2.0/26 handle="k8s-pod-network.e4691f2defc0e33fed4422e706ca0bfa4248b820561b8ae0221895b62213e76f" host="ci-4081-3-3-n-3b0dbcbd78" May 17 00:09:58.893249 containerd[1478]: 2025-05-17 00:09:58.860 [INFO][4378] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.2.4/26] handle="k8s-pod-network.e4691f2defc0e33fed4422e706ca0bfa4248b820561b8ae0221895b62213e76f" host="ci-4081-3-3-n-3b0dbcbd78" May 17 00:09:58.893249 containerd[1478]: 2025-05-17 00:09:58.860 [INFO][4378] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 17 00:09:58.893249 containerd[1478]: 2025-05-17 00:09:58.860 [INFO][4378] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.2.4/26] IPv6=[] ContainerID="e4691f2defc0e33fed4422e706ca0bfa4248b820561b8ae0221895b62213e76f" HandleID="k8s-pod-network.e4691f2defc0e33fed4422e706ca0bfa4248b820561b8ae0221895b62213e76f" Workload="ci--4081--3--3--n--3b0dbcbd78-k8s-calico--kube--controllers--645f967f78--7knkm-eth0" May 17 00:09:58.894328 containerd[1478]: 2025-05-17 00:09:58.866 [INFO][4355] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e4691f2defc0e33fed4422e706ca0bfa4248b820561b8ae0221895b62213e76f" Namespace="calico-system" Pod="calico-kube-controllers-645f967f78-7knkm" WorkloadEndpoint="ci--4081--3--3--n--3b0dbcbd78-k8s-calico--kube--controllers--645f967f78--7knkm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--3b0dbcbd78-k8s-calico--kube--controllers--645f967f78--7knkm-eth0", GenerateName:"calico-kube-controllers-645f967f78-", Namespace:"calico-system", SelfLink:"", UID:"e26e8c4a-c5e6-41d8-924e-19ce27b5f4cd", ResourceVersion:"930", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 9, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"645f967f78", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-3b0dbcbd78", ContainerID:"", Pod:"calico-kube-controllers-645f967f78-7knkm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.2.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali18b610d7672", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:09:58.894328 containerd[1478]: 2025-05-17 00:09:58.866 [INFO][4355] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.2.4/32] ContainerID="e4691f2defc0e33fed4422e706ca0bfa4248b820561b8ae0221895b62213e76f" Namespace="calico-system" Pod="calico-kube-controllers-645f967f78-7knkm" WorkloadEndpoint="ci--4081--3--3--n--3b0dbcbd78-k8s-calico--kube--controllers--645f967f78--7knkm-eth0" May 17 00:09:58.894328 containerd[1478]: 2025-05-17 00:09:58.866 [INFO][4355] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali18b610d7672 ContainerID="e4691f2defc0e33fed4422e706ca0bfa4248b820561b8ae0221895b62213e76f" Namespace="calico-system" Pod="calico-kube-controllers-645f967f78-7knkm" WorkloadEndpoint="ci--4081--3--3--n--3b0dbcbd78-k8s-calico--kube--controllers--645f967f78--7knkm-eth0" May 17 00:09:58.894328 containerd[1478]: 2025-05-17 00:09:58.870 [INFO][4355] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e4691f2defc0e33fed4422e706ca0bfa4248b820561b8ae0221895b62213e76f" Namespace="calico-system" Pod="calico-kube-controllers-645f967f78-7knkm" 
WorkloadEndpoint="ci--4081--3--3--n--3b0dbcbd78-k8s-calico--kube--controllers--645f967f78--7knkm-eth0" May 17 00:09:58.894328 containerd[1478]: 2025-05-17 00:09:58.870 [INFO][4355] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e4691f2defc0e33fed4422e706ca0bfa4248b820561b8ae0221895b62213e76f" Namespace="calico-system" Pod="calico-kube-controllers-645f967f78-7knkm" WorkloadEndpoint="ci--4081--3--3--n--3b0dbcbd78-k8s-calico--kube--controllers--645f967f78--7knkm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--3b0dbcbd78-k8s-calico--kube--controllers--645f967f78--7knkm-eth0", GenerateName:"calico-kube-controllers-645f967f78-", Namespace:"calico-system", SelfLink:"", UID:"e26e8c4a-c5e6-41d8-924e-19ce27b5f4cd", ResourceVersion:"930", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 9, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"645f967f78", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-3b0dbcbd78", ContainerID:"e4691f2defc0e33fed4422e706ca0bfa4248b820561b8ae0221895b62213e76f", Pod:"calico-kube-controllers-645f967f78-7knkm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.2.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali18b610d7672", MAC:"06:7c:2e:9f:0f:c9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:09:58.894328 containerd[1478]: 2025-05-17 00:09:58.890 [INFO][4355] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e4691f2defc0e33fed4422e706ca0bfa4248b820561b8ae0221895b62213e76f" Namespace="calico-system" Pod="calico-kube-controllers-645f967f78-7knkm" WorkloadEndpoint="ci--4081--3--3--n--3b0dbcbd78-k8s-calico--kube--controllers--645f967f78--7knkm-eth0" May 17 00:09:58.914957 containerd[1478]: time="2025-05-17T00:09:58.914702674Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:09:58.915638 containerd[1478]: time="2025-05-17T00:09:58.915567489Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:09:58.915936 containerd[1478]: time="2025-05-17T00:09:58.915698852Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:09:58.916510 containerd[1478]: time="2025-05-17T00:09:58.916363944Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:09:58.958102 systemd[1]: Started cri-containerd-e4691f2defc0e33fed4422e706ca0bfa4248b820561b8ae0221895b62213e76f.scope - libcontainer container e4691f2defc0e33fed4422e706ca0bfa4248b820561b8ae0221895b62213e76f. May 17 00:09:58.981597 systemd-networkd[1361]: cali1e62d21233b: Link UP May 17 00:09:58.983478 systemd-networkd[1361]: cali1e62d21233b: Gained carrier May 17 00:09:59.012816 containerd[1478]: 2025-05-17 00:09:58.770 [INFO][4364] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--n--3b0dbcbd78-k8s-goldmane--78d55f7ddc--gp74j-eth0 goldmane-78d55f7ddc- calico-system 662625c5-921c-406a-9ef1-d2e70e33e339 931 0 2025-05-17 00:09:36 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:78d55f7ddc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081-3-3-n-3b0dbcbd78 goldmane-78d55f7ddc-gp74j eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali1e62d21233b [] [] }} ContainerID="4480f08a55ea7f16a9727a670d1d7c413bfacd407b0d80da07415f3715c5e018" Namespace="calico-system" Pod="goldmane-78d55f7ddc-gp74j" WorkloadEndpoint="ci--4081--3--3--n--3b0dbcbd78-k8s-goldmane--78d55f7ddc--gp74j-" May 17 00:09:59.012816 containerd[1478]: 2025-05-17 00:09:58.771 [INFO][4364] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4480f08a55ea7f16a9727a670d1d7c413bfacd407b0d80da07415f3715c5e018" Namespace="calico-system" Pod="goldmane-78d55f7ddc-gp74j" WorkloadEndpoint="ci--4081--3--3--n--3b0dbcbd78-k8s-goldmane--78d55f7ddc--gp74j-eth0" May 17 00:09:59.012816 containerd[1478]: 2025-05-17 00:09:58.819 [INFO][4384] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4480f08a55ea7f16a9727a670d1d7c413bfacd407b0d80da07415f3715c5e018" HandleID="k8s-pod-network.4480f08a55ea7f16a9727a670d1d7c413bfacd407b0d80da07415f3715c5e018" Workload="ci--4081--3--3--n--3b0dbcbd78-k8s-goldmane--78d55f7ddc--gp74j-eth0" May 17 00:09:59.012816 containerd[1478]: 2025-05-17 00:09:58.819 [INFO][4384] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4480f08a55ea7f16a9727a670d1d7c413bfacd407b0d80da07415f3715c5e018" HandleID="k8s-pod-network.4480f08a55ea7f16a9727a670d1d7c413bfacd407b0d80da07415f3715c5e018" Workload="ci--4081--3--3--n--3b0dbcbd78-k8s-goldmane--78d55f7ddc--gp74j-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400022f260), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-3-n-3b0dbcbd78", "pod":"goldmane-78d55f7ddc-gp74j", "timestamp":"2025-05-17 00:09:58.819031345 +0000 UTC"}, Hostname:"ci-4081-3-3-n-3b0dbcbd78", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:09:59.012816 containerd[1478]: 2025-05-17 00:09:58.820 [INFO][4384] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:09:59.012816 containerd[1478]: 2025-05-17 00:09:58.860 [INFO][4384] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:09:59.012816 containerd[1478]: 2025-05-17 00:09:58.860 [INFO][4384] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-n-3b0dbcbd78' May 17 00:09:59.012816 containerd[1478]: 2025-05-17 00:09:58.921 [INFO][4384] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4480f08a55ea7f16a9727a670d1d7c413bfacd407b0d80da07415f3715c5e018" host="ci-4081-3-3-n-3b0dbcbd78" May 17 00:09:59.012816 containerd[1478]: 2025-05-17 00:09:58.930 [INFO][4384] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-3-n-3b0dbcbd78" May 17 00:09:59.012816 containerd[1478]: 2025-05-17 00:09:58.941 [INFO][4384] ipam/ipam.go 511: Trying affinity for 192.168.2.0/26 host="ci-4081-3-3-n-3b0dbcbd78" May 17 00:09:59.012816 containerd[1478]: 2025-05-17 00:09:58.947 [INFO][4384] ipam/ipam.go 158: Attempting to load block cidr=192.168.2.0/26 host="ci-4081-3-3-n-3b0dbcbd78" May 17 00:09:59.012816 containerd[1478]: 2025-05-17 00:09:58.951 [INFO][4384] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.2.0/26 host="ci-4081-3-3-n-3b0dbcbd78" May 17 00:09:59.012816 containerd[1478]: 2025-05-17 00:09:58.952 [INFO][4384] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.2.0/26 handle="k8s-pod-network.4480f08a55ea7f16a9727a670d1d7c413bfacd407b0d80da07415f3715c5e018" host="ci-4081-3-3-n-3b0dbcbd78" May 17 00:09:59.012816 containerd[1478]: 2025-05-17 00:09:58.955 [INFO][4384] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4480f08a55ea7f16a9727a670d1d7c413bfacd407b0d80da07415f3715c5e018 May 17 00:09:59.012816 containerd[1478]: 2025-05-17 00:09:58.962 [INFO][4384] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.2.0/26 handle="k8s-pod-network.4480f08a55ea7f16a9727a670d1d7c413bfacd407b0d80da07415f3715c5e018" host="ci-4081-3-3-n-3b0dbcbd78" May 17 00:09:59.012816 containerd[1478]: 2025-05-17 00:09:58.972 [INFO][4384] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.2.5/26] block=192.168.2.0/26 handle="k8s-pod-network.4480f08a55ea7f16a9727a670d1d7c413bfacd407b0d80da07415f3715c5e018" host="ci-4081-3-3-n-3b0dbcbd78" May 17 00:09:59.012816 containerd[1478]: 2025-05-17 00:09:58.972 [INFO][4384] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.2.5/26] handle="k8s-pod-network.4480f08a55ea7f16a9727a670d1d7c413bfacd407b0d80da07415f3715c5e018" host="ci-4081-3-3-n-3b0dbcbd78" May 17 00:09:59.012816 containerd[1478]: 2025-05-17 00:09:58.972 [INFO][4384] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 17 00:09:59.012816 containerd[1478]: 2025-05-17 00:09:58.972 [INFO][4384] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.2.5/26] IPv6=[] ContainerID="4480f08a55ea7f16a9727a670d1d7c413bfacd407b0d80da07415f3715c5e018" HandleID="k8s-pod-network.4480f08a55ea7f16a9727a670d1d7c413bfacd407b0d80da07415f3715c5e018" Workload="ci--4081--3--3--n--3b0dbcbd78-k8s-goldmane--78d55f7ddc--gp74j-eth0" May 17 00:09:59.013542 containerd[1478]: 2025-05-17 00:09:58.976 [INFO][4364] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4480f08a55ea7f16a9727a670d1d7c413bfacd407b0d80da07415f3715c5e018" Namespace="calico-system" Pod="goldmane-78d55f7ddc-gp74j" WorkloadEndpoint="ci--4081--3--3--n--3b0dbcbd78-k8s-goldmane--78d55f7ddc--gp74j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--3b0dbcbd78-k8s-goldmane--78d55f7ddc--gp74j-eth0", GenerateName:"goldmane-78d55f7ddc-", Namespace:"calico-system", SelfLink:"", UID:"662625c5-921c-406a-9ef1-d2e70e33e339", ResourceVersion:"931", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 9, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"78d55f7ddc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-3b0dbcbd78", ContainerID:"", Pod:"goldmane-78d55f7ddc-gp74j", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.2.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali1e62d21233b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:09:59.013542 containerd[1478]: 2025-05-17 00:09:58.976 [INFO][4364] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.2.5/32] ContainerID="4480f08a55ea7f16a9727a670d1d7c413bfacd407b0d80da07415f3715c5e018" Namespace="calico-system" Pod="goldmane-78d55f7ddc-gp74j" WorkloadEndpoint="ci--4081--3--3--n--3b0dbcbd78-k8s-goldmane--78d55f7ddc--gp74j-eth0" May 17 00:09:59.013542 containerd[1478]: 2025-05-17 00:09:58.976 [INFO][4364] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1e62d21233b ContainerID="4480f08a55ea7f16a9727a670d1d7c413bfacd407b0d80da07415f3715c5e018" Namespace="calico-system" Pod="goldmane-78d55f7ddc-gp74j" WorkloadEndpoint="ci--4081--3--3--n--3b0dbcbd78-k8s-goldmane--78d55f7ddc--gp74j-eth0" May 17 00:09:59.013542 containerd[1478]: 2025-05-17 00:09:58.980 [INFO][4364] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4480f08a55ea7f16a9727a670d1d7c413bfacd407b0d80da07415f3715c5e018" Namespace="calico-system" Pod="goldmane-78d55f7ddc-gp74j" WorkloadEndpoint="ci--4081--3--3--n--3b0dbcbd78-k8s-goldmane--78d55f7ddc--gp74j-eth0" May 17 00:09:59.013542 containerd[1478]: 2025-05-17 00:09:58.982 [INFO][4364] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4480f08a55ea7f16a9727a670d1d7c413bfacd407b0d80da07415f3715c5e018" 
Namespace="calico-system" Pod="goldmane-78d55f7ddc-gp74j" WorkloadEndpoint="ci--4081--3--3--n--3b0dbcbd78-k8s-goldmane--78d55f7ddc--gp74j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--3b0dbcbd78-k8s-goldmane--78d55f7ddc--gp74j-eth0", GenerateName:"goldmane-78d55f7ddc-", Namespace:"calico-system", SelfLink:"", UID:"662625c5-921c-406a-9ef1-d2e70e33e339", ResourceVersion:"931", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 9, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"78d55f7ddc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-3b0dbcbd78", ContainerID:"4480f08a55ea7f16a9727a670d1d7c413bfacd407b0d80da07415f3715c5e018", Pod:"goldmane-78d55f7ddc-gp74j", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.2.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali1e62d21233b", MAC:"c2:99:16:7b:2b:03", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:09:59.013542 containerd[1478]: 2025-05-17 00:09:59.008 [INFO][4364] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4480f08a55ea7f16a9727a670d1d7c413bfacd407b0d80da07415f3715c5e018" Namespace="calico-system" Pod="goldmane-78d55f7ddc-gp74j" WorkloadEndpoint="ci--4081--3--3--n--3b0dbcbd78-k8s-goldmane--78d55f7ddc--gp74j-eth0" May 17 00:09:59.027028 containerd[1478]: time="2025-05-17T00:09:59.026474809Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-645f967f78-7knkm,Uid:e26e8c4a-c5e6-41d8-924e-19ce27b5f4cd,Namespace:calico-system,Attempt:1,} returns sandbox id \"e4691f2defc0e33fed4422e706ca0bfa4248b820561b8ae0221895b62213e76f\"" May 17 00:09:59.047515 containerd[1478]: time="2025-05-17T00:09:59.047323739Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:09:59.047515 containerd[1478]: time="2025-05-17T00:09:59.047401740Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:09:59.047515 containerd[1478]: time="2025-05-17T00:09:59.047414660Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:09:59.048171 containerd[1478]: time="2025-05-17T00:09:59.047538022Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:09:59.071685 systemd[1]: Started cri-containerd-4480f08a55ea7f16a9727a670d1d7c413bfacd407b0d80da07415f3715c5e018.scope - libcontainer container 4480f08a55ea7f16a9727a670d1d7c413bfacd407b0d80da07415f3715c5e018. 
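[Editor's note] Note the two prefix lengths in these entries: IPAM claims addresses out of the host's /26 affinity block (192.168.2.0/26), but each WorkloadEndpoint records its address as a /32 (for example goldmane's 192.168.2.5/32), because Calico routes per pod rather than putting pods on a shared subnet. A quick stdlib check that the /32s assigned so far all fall inside the block:

package main

import (
	"fmt"
	"net"
)

func main() {
	_, block, _ := net.ParseCIDR("192.168.2.0/26") // the host's affinity block
	for _, s := range []string{
		"192.168.2.3/32", // csi-node-driver-g9lf2
		"192.168.2.4/32", // calico-kube-controllers-645f967f78-7knkm
		"192.168.2.5/32", // goldmane-78d55f7ddc-gp74j
	} {
		ip, _, err := net.ParseCIDR(s)
		if err != nil {
			panic(err)
		}
		fmt.Printf("%s in %s: %v\n", s, block, block.Contains(ip))
	}
}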
May 17 00:09:59.130417 containerd[1478]: time="2025-05-17T00:09:59.130303569Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-gp74j,Uid:662625c5-921c-406a-9ef1-d2e70e33e339,Namespace:calico-system,Attempt:1,} returns sandbox id \"4480f08a55ea7f16a9727a670d1d7c413bfacd407b0d80da07415f3715c5e018\"" May 17 00:09:59.300684 systemd-networkd[1361]: cali039ce6022f9: Gained IPv6LL May 17 00:09:59.521051 containerd[1478]: time="2025-05-17T00:09:59.520667366Z" level=info msg="StopPodSandbox for \"18c13d8711afd4b40b972bb45bf741a1e264938dc84ce04796c814c09bc4320a\"" May 17 00:09:59.522455 containerd[1478]: time="2025-05-17T00:09:59.522402197Z" level=info msg="StopPodSandbox for \"64b5464b82b71cf5c7889a3d48c6d134a2961189460abaf414c6c3ef08da76d4\"" May 17 00:09:59.640552 containerd[1478]: 2025-05-17 00:09:59.582 [INFO][4514] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="18c13d8711afd4b40b972bb45bf741a1e264938dc84ce04796c814c09bc4320a" May 17 00:09:59.640552 containerd[1478]: 2025-05-17 00:09:59.582 [INFO][4514] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="18c13d8711afd4b40b972bb45bf741a1e264938dc84ce04796c814c09bc4320a" iface="eth0" netns="/var/run/netns/cni-84083e76-3375-5845-1d65-f01d8041fff0" May 17 00:09:59.640552 containerd[1478]: 2025-05-17 00:09:59.582 [INFO][4514] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="18c13d8711afd4b40b972bb45bf741a1e264938dc84ce04796c814c09bc4320a" iface="eth0" netns="/var/run/netns/cni-84083e76-3375-5845-1d65-f01d8041fff0" May 17 00:09:59.640552 containerd[1478]: 2025-05-17 00:09:59.583 [INFO][4514] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="18c13d8711afd4b40b972bb45bf741a1e264938dc84ce04796c814c09bc4320a" iface="eth0" netns="/var/run/netns/cni-84083e76-3375-5845-1d65-f01d8041fff0" May 17 00:09:59.640552 containerd[1478]: 2025-05-17 00:09:59.583 [INFO][4514] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="18c13d8711afd4b40b972bb45bf741a1e264938dc84ce04796c814c09bc4320a" May 17 00:09:59.640552 containerd[1478]: 2025-05-17 00:09:59.583 [INFO][4514] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="18c13d8711afd4b40b972bb45bf741a1e264938dc84ce04796c814c09bc4320a" May 17 00:09:59.640552 containerd[1478]: 2025-05-17 00:09:59.622 [INFO][4528] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="18c13d8711afd4b40b972bb45bf741a1e264938dc84ce04796c814c09bc4320a" HandleID="k8s-pod-network.18c13d8711afd4b40b972bb45bf741a1e264938dc84ce04796c814c09bc4320a" Workload="ci--4081--3--3--n--3b0dbcbd78-k8s-calico--apiserver--654599565b--lpjvt-eth0" May 17 00:09:59.640552 containerd[1478]: 2025-05-17 00:09:59.622 [INFO][4528] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:09:59.640552 containerd[1478]: 2025-05-17 00:09:59.622 [INFO][4528] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:09:59.640552 containerd[1478]: 2025-05-17 00:09:59.633 [WARNING][4528] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="18c13d8711afd4b40b972bb45bf741a1e264938dc84ce04796c814c09bc4320a" HandleID="k8s-pod-network.18c13d8711afd4b40b972bb45bf741a1e264938dc84ce04796c814c09bc4320a" Workload="ci--4081--3--3--n--3b0dbcbd78-k8s-calico--apiserver--654599565b--lpjvt-eth0" May 17 00:09:59.640552 containerd[1478]: 2025-05-17 00:09:59.633 [INFO][4528] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="18c13d8711afd4b40b972bb45bf741a1e264938dc84ce04796c814c09bc4320a" HandleID="k8s-pod-network.18c13d8711afd4b40b972bb45bf741a1e264938dc84ce04796c814c09bc4320a" Workload="ci--4081--3--3--n--3b0dbcbd78-k8s-calico--apiserver--654599565b--lpjvt-eth0" May 17 00:09:59.640552 containerd[1478]: 2025-05-17 00:09:59.635 [INFO][4528] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:09:59.640552 containerd[1478]: 2025-05-17 00:09:59.637 [INFO][4514] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="18c13d8711afd4b40b972bb45bf741a1e264938dc84ce04796c814c09bc4320a" May 17 00:09:59.642143 containerd[1478]: time="2025-05-17T00:09:59.640707893Z" level=info msg="TearDown network for sandbox \"18c13d8711afd4b40b972bb45bf741a1e264938dc84ce04796c814c09bc4320a\" successfully" May 17 00:09:59.642143 containerd[1478]: time="2025-05-17T00:09:59.640737534Z" level=info msg="StopPodSandbox for \"18c13d8711afd4b40b972bb45bf741a1e264938dc84ce04796c814c09bc4320a\" returns successfully" May 17 00:09:59.642311 containerd[1478]: time="2025-05-17T00:09:59.642209920Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-654599565b-lpjvt,Uid:5021f4d9-7c3c-411f-be83-7378c09cd50b,Namespace:calico-apiserver,Attempt:1,}" May 17 00:09:59.679875 containerd[1478]: 2025-05-17 00:09:59.590 [INFO][4510] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="64b5464b82b71cf5c7889a3d48c6d134a2961189460abaf414c6c3ef08da76d4" May 17 00:09:59.679875 containerd[1478]: 2025-05-17 00:09:59.591 [INFO][4510] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="64b5464b82b71cf5c7889a3d48c6d134a2961189460abaf414c6c3ef08da76d4" iface="eth0" netns="/var/run/netns/cni-210f6e0b-d1b3-45cb-a25c-f406a11c0e87" May 17 00:09:59.679875 containerd[1478]: 2025-05-17 00:09:59.591 [INFO][4510] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="64b5464b82b71cf5c7889a3d48c6d134a2961189460abaf414c6c3ef08da76d4" iface="eth0" netns="/var/run/netns/cni-210f6e0b-d1b3-45cb-a25c-f406a11c0e87" May 17 00:09:59.679875 containerd[1478]: 2025-05-17 00:09:59.591 [INFO][4510] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="64b5464b82b71cf5c7889a3d48c6d134a2961189460abaf414c6c3ef08da76d4" iface="eth0" netns="/var/run/netns/cni-210f6e0b-d1b3-45cb-a25c-f406a11c0e87" May 17 00:09:59.679875 containerd[1478]: 2025-05-17 00:09:59.591 [INFO][4510] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="64b5464b82b71cf5c7889a3d48c6d134a2961189460abaf414c6c3ef08da76d4" May 17 00:09:59.679875 containerd[1478]: 2025-05-17 00:09:59.591 [INFO][4510] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="64b5464b82b71cf5c7889a3d48c6d134a2961189460abaf414c6c3ef08da76d4" May 17 00:09:59.679875 containerd[1478]: 2025-05-17 00:09:59.632 [INFO][4533] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="64b5464b82b71cf5c7889a3d48c6d134a2961189460abaf414c6c3ef08da76d4" HandleID="k8s-pod-network.64b5464b82b71cf5c7889a3d48c6d134a2961189460abaf414c6c3ef08da76d4" Workload="ci--4081--3--3--n--3b0dbcbd78-k8s-coredns--674b8bbfcf--2pp2r-eth0" May 17 00:09:59.679875 containerd[1478]: 2025-05-17 00:09:59.633 [INFO][4533] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:09:59.679875 containerd[1478]: 2025-05-17 00:09:59.635 [INFO][4533] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:09:59.679875 containerd[1478]: 2025-05-17 00:09:59.650 [WARNING][4533] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="64b5464b82b71cf5c7889a3d48c6d134a2961189460abaf414c6c3ef08da76d4" HandleID="k8s-pod-network.64b5464b82b71cf5c7889a3d48c6d134a2961189460abaf414c6c3ef08da76d4" Workload="ci--4081--3--3--n--3b0dbcbd78-k8s-coredns--674b8bbfcf--2pp2r-eth0" May 17 00:09:59.679875 containerd[1478]: 2025-05-17 00:09:59.650 [INFO][4533] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="64b5464b82b71cf5c7889a3d48c6d134a2961189460abaf414c6c3ef08da76d4" HandleID="k8s-pod-network.64b5464b82b71cf5c7889a3d48c6d134a2961189460abaf414c6c3ef08da76d4" Workload="ci--4081--3--3--n--3b0dbcbd78-k8s-coredns--674b8bbfcf--2pp2r-eth0" May 17 00:09:59.679875 containerd[1478]: 2025-05-17 00:09:59.653 [INFO][4533] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:09:59.679875 containerd[1478]: 2025-05-17 00:09:59.659 [INFO][4510] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="64b5464b82b71cf5c7889a3d48c6d134a2961189460abaf414c6c3ef08da76d4" May 17 00:09:59.682495 containerd[1478]: time="2025-05-17T00:09:59.681955504Z" level=info msg="TearDown network for sandbox \"64b5464b82b71cf5c7889a3d48c6d134a2961189460abaf414c6c3ef08da76d4\" successfully" May 17 00:09:59.682495 containerd[1478]: time="2025-05-17T00:09:59.681991985Z" level=info msg="StopPodSandbox for \"64b5464b82b71cf5c7889a3d48c6d134a2961189460abaf414c6c3ef08da76d4\" returns successfully" May 17 00:09:59.682801 containerd[1478]: time="2025-05-17T00:09:59.682758479Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-2pp2r,Uid:6123d484-fe1b-4bcd-a0b1-5639f75795e9,Namespace:kube-system,Attempt:1,}" May 17 00:09:59.702350 systemd[1]: run-netns-cni\x2d84083e76\x2d3375\x2d5845\x2d1d65\x2df01d8041fff0.mount: Deactivated successfully. May 17 00:09:59.702856 systemd[1]: run-netns-cni\x2d210f6e0b\x2dd1b3\x2d45cb\x2da25c\x2df406a11c0e87.mount: Deactivated successfully. 
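[Editor's note] The run-netns-cni\x2d… unit names in the "Deactivated successfully" entries are systemd's path escaping at work: the mount unit for /var/run/netns/cni-84083e76-… keeps the final path components, turns slashes into dashes, and hex-escapes literal dashes as \x2d. A simplified sketch covering only the characters seen in this log (real systemd-escape also hex-escapes other bytes outside [a-zA-Z0-9:_.]; that is omitted here):

package main

import (
	"fmt"
	"strings"
)

// escapePath applies the subset of systemd path escaping visible above:
// strip the leading "/", escape "-" as \x2d within each component, then
// join components with "-".
func escapePath(p string) string {
	parts := strings.Split(strings.TrimPrefix(p, "/"), "/")
	for i, part := range parts {
		parts[i] = strings.ReplaceAll(part, "-", `\x2d`)
	}
	return strings.Join(parts, "-")
}

func main() {
	fmt.Println(escapePath("/run/netns/cni-210f6e0b-d1b3-45cb-a25c-f406a11c0e87") + ".mount")
	// run-netns-cni\x2d210f6e0b\x2dd1b3\x2d45cb\x2da25c\x2df406a11c0e87.mount,
	// matching the unit name systemd reports above
}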
May 17 00:09:59.843138 systemd-networkd[1361]: calif7019556d94: Link UP May 17 00:09:59.843353 systemd-networkd[1361]: calif7019556d94: Gained carrier May 17 00:09:59.870905 containerd[1478]: 2025-05-17 00:09:59.731 [INFO][4543] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--n--3b0dbcbd78-k8s-calico--apiserver--654599565b--lpjvt-eth0 calico-apiserver-654599565b- calico-apiserver 5021f4d9-7c3c-411f-be83-7378c09cd50b 947 0 2025-05-17 00:09:33 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:654599565b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-3-n-3b0dbcbd78 calico-apiserver-654599565b-lpjvt eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calif7019556d94 [] [] }} ContainerID="0183b29175c54aeb2849746f6b658eea615ed606e0fdf58e662125f0d99bf3e3" Namespace="calico-apiserver" Pod="calico-apiserver-654599565b-lpjvt" WorkloadEndpoint="ci--4081--3--3--n--3b0dbcbd78-k8s-calico--apiserver--654599565b--lpjvt-" May 17 00:09:59.870905 containerd[1478]: 2025-05-17 00:09:59.731 [INFO][4543] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0183b29175c54aeb2849746f6b658eea615ed606e0fdf58e662125f0d99bf3e3" Namespace="calico-apiserver" Pod="calico-apiserver-654599565b-lpjvt" WorkloadEndpoint="ci--4081--3--3--n--3b0dbcbd78-k8s-calico--apiserver--654599565b--lpjvt-eth0" May 17 00:09:59.870905 containerd[1478]: 2025-05-17 00:09:59.767 [INFO][4568] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0183b29175c54aeb2849746f6b658eea615ed606e0fdf58e662125f0d99bf3e3" HandleID="k8s-pod-network.0183b29175c54aeb2849746f6b658eea615ed606e0fdf58e662125f0d99bf3e3" Workload="ci--4081--3--3--n--3b0dbcbd78-k8s-calico--apiserver--654599565b--lpjvt-eth0" May 17 00:09:59.870905 containerd[1478]: 2025-05-17 00:09:59.767 [INFO][4568] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0183b29175c54aeb2849746f6b658eea615ed606e0fdf58e662125f0d99bf3e3" HandleID="k8s-pod-network.0183b29175c54aeb2849746f6b658eea615ed606e0fdf58e662125f0d99bf3e3" Workload="ci--4081--3--3--n--3b0dbcbd78-k8s-calico--apiserver--654599565b--lpjvt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d72a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-3-n-3b0dbcbd78", "pod":"calico-apiserver-654599565b-lpjvt", "timestamp":"2025-05-17 00:09:59.767112773 +0000 UTC"}, Hostname:"ci-4081-3-3-n-3b0dbcbd78", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:09:59.870905 containerd[1478]: 2025-05-17 00:09:59.767 [INFO][4568] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:09:59.870905 containerd[1478]: 2025-05-17 00:09:59.767 [INFO][4568] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:09:59.870905 containerd[1478]: 2025-05-17 00:09:59.767 [INFO][4568] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-n-3b0dbcbd78' May 17 00:09:59.870905 containerd[1478]: 2025-05-17 00:09:59.784 [INFO][4568] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0183b29175c54aeb2849746f6b658eea615ed606e0fdf58e662125f0d99bf3e3" host="ci-4081-3-3-n-3b0dbcbd78" May 17 00:09:59.870905 containerd[1478]: 2025-05-17 00:09:59.792 [INFO][4568] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-3-n-3b0dbcbd78" May 17 00:09:59.870905 containerd[1478]: 2025-05-17 00:09:59.802 [INFO][4568] ipam/ipam.go 511: Trying affinity for 192.168.2.0/26 host="ci-4081-3-3-n-3b0dbcbd78" May 17 00:09:59.870905 containerd[1478]: 2025-05-17 00:09:59.806 [INFO][4568] ipam/ipam.go 158: Attempting to load block cidr=192.168.2.0/26 host="ci-4081-3-3-n-3b0dbcbd78" May 17 00:09:59.870905 containerd[1478]: 2025-05-17 00:09:59.810 [INFO][4568] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.2.0/26 host="ci-4081-3-3-n-3b0dbcbd78" May 17 00:09:59.870905 containerd[1478]: 2025-05-17 00:09:59.811 [INFO][4568] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.2.0/26 handle="k8s-pod-network.0183b29175c54aeb2849746f6b658eea615ed606e0fdf58e662125f0d99bf3e3" host="ci-4081-3-3-n-3b0dbcbd78" May 17 00:09:59.870905 containerd[1478]: 2025-05-17 00:09:59.813 [INFO][4568] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.0183b29175c54aeb2849746f6b658eea615ed606e0fdf58e662125f0d99bf3e3 May 17 00:09:59.870905 containerd[1478]: 2025-05-17 00:09:59.822 [INFO][4568] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.2.0/26 handle="k8s-pod-network.0183b29175c54aeb2849746f6b658eea615ed606e0fdf58e662125f0d99bf3e3" host="ci-4081-3-3-n-3b0dbcbd78" May 17 00:09:59.870905 containerd[1478]: 2025-05-17 00:09:59.831 [INFO][4568] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.2.6/26] block=192.168.2.0/26 handle="k8s-pod-network.0183b29175c54aeb2849746f6b658eea615ed606e0fdf58e662125f0d99bf3e3" host="ci-4081-3-3-n-3b0dbcbd78" May 17 00:09:59.870905 containerd[1478]: 2025-05-17 00:09:59.832 [INFO][4568] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.2.6/26] handle="k8s-pod-network.0183b29175c54aeb2849746f6b658eea615ed606e0fdf58e662125f0d99bf3e3" host="ci-4081-3-3-n-3b0dbcbd78" May 17 00:09:59.870905 containerd[1478]: 2025-05-17 00:09:59.832 [INFO][4568] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 17 00:09:59.870905 containerd[1478]: 2025-05-17 00:09:59.832 [INFO][4568] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.2.6/26] IPv6=[] ContainerID="0183b29175c54aeb2849746f6b658eea615ed606e0fdf58e662125f0d99bf3e3" HandleID="k8s-pod-network.0183b29175c54aeb2849746f6b658eea615ed606e0fdf58e662125f0d99bf3e3" Workload="ci--4081--3--3--n--3b0dbcbd78-k8s-calico--apiserver--654599565b--lpjvt-eth0" May 17 00:09:59.871476 containerd[1478]: 2025-05-17 00:09:59.839 [INFO][4543] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0183b29175c54aeb2849746f6b658eea615ed606e0fdf58e662125f0d99bf3e3" Namespace="calico-apiserver" Pod="calico-apiserver-654599565b-lpjvt" WorkloadEndpoint="ci--4081--3--3--n--3b0dbcbd78-k8s-calico--apiserver--654599565b--lpjvt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--3b0dbcbd78-k8s-calico--apiserver--654599565b--lpjvt-eth0", GenerateName:"calico-apiserver-654599565b-", Namespace:"calico-apiserver", SelfLink:"", UID:"5021f4d9-7c3c-411f-be83-7378c09cd50b", ResourceVersion:"947", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 9, 33, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"654599565b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-3b0dbcbd78", ContainerID:"", Pod:"calico-apiserver-654599565b-lpjvt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.2.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif7019556d94", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:09:59.871476 containerd[1478]: 2025-05-17 00:09:59.839 [INFO][4543] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.2.6/32] ContainerID="0183b29175c54aeb2849746f6b658eea615ed606e0fdf58e662125f0d99bf3e3" Namespace="calico-apiserver" Pod="calico-apiserver-654599565b-lpjvt" WorkloadEndpoint="ci--4081--3--3--n--3b0dbcbd78-k8s-calico--apiserver--654599565b--lpjvt-eth0" May 17 00:09:59.871476 containerd[1478]: 2025-05-17 00:09:59.839 [INFO][4543] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif7019556d94 ContainerID="0183b29175c54aeb2849746f6b658eea615ed606e0fdf58e662125f0d99bf3e3" Namespace="calico-apiserver" Pod="calico-apiserver-654599565b-lpjvt" WorkloadEndpoint="ci--4081--3--3--n--3b0dbcbd78-k8s-calico--apiserver--654599565b--lpjvt-eth0" May 17 00:09:59.871476 containerd[1478]: 2025-05-17 00:09:59.842 [INFO][4543] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0183b29175c54aeb2849746f6b658eea615ed606e0fdf58e662125f0d99bf3e3" Namespace="calico-apiserver" Pod="calico-apiserver-654599565b-lpjvt" WorkloadEndpoint="ci--4081--3--3--n--3b0dbcbd78-k8s-calico--apiserver--654599565b--lpjvt-eth0"
May 17 00:09:59.871476 containerd[1478]: 2025-05-17 00:09:59.846 [INFO][4543] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0183b29175c54aeb2849746f6b658eea615ed606e0fdf58e662125f0d99bf3e3" Namespace="calico-apiserver" Pod="calico-apiserver-654599565b-lpjvt" WorkloadEndpoint="ci--4081--3--3--n--3b0dbcbd78-k8s-calico--apiserver--654599565b--lpjvt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--3b0dbcbd78-k8s-calico--apiserver--654599565b--lpjvt-eth0", GenerateName:"calico-apiserver-654599565b-", Namespace:"calico-apiserver", SelfLink:"", UID:"5021f4d9-7c3c-411f-be83-7378c09cd50b", ResourceVersion:"947", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 9, 33, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"654599565b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-3b0dbcbd78", ContainerID:"0183b29175c54aeb2849746f6b658eea615ed606e0fdf58e662125f0d99bf3e3", Pod:"calico-apiserver-654599565b-lpjvt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.2.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif7019556d94", MAC:"56:e7:f7:52:7b:bf", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:09:59.871476 containerd[1478]: 2025-05-17 00:09:59.867 [INFO][4543] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0183b29175c54aeb2849746f6b658eea615ed606e0fdf58e662125f0d99bf3e3" Namespace="calico-apiserver" Pod="calico-apiserver-654599565b-lpjvt" WorkloadEndpoint="ci--4081--3--3--n--3b0dbcbd78-k8s-calico--apiserver--654599565b--lpjvt-eth0" May 17 00:09:59.908101 containerd[1478]: time="2025-05-17T00:09:59.905503106Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:09:59.908619 containerd[1478]: time="2025-05-17T00:09:59.908019190Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:09:59.908619 containerd[1478]: time="2025-05-17T00:09:59.908102232Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:09:59.908619 containerd[1478]: time="2025-05-17T00:09:59.908524039Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:09:59.950664 systemd[1]: Started cri-containerd-0183b29175c54aeb2849746f6b658eea615ed606e0fdf58e662125f0d99bf3e3.scope - libcontainer container 0183b29175c54aeb2849746f6b658eea615ed606e0fdf58e662125f0d99bf3e3.
May 17 00:09:59.974705 systemd-networkd[1361]: cali0336039be35: Link UP May 17 00:09:59.975732 systemd-networkd[1361]: cali0336039be35: Gained carrier May 17 00:10:00.005997 containerd[1478]: 2025-05-17 00:09:59.772 [INFO][4555] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--n--3b0dbcbd78-k8s-coredns--674b8bbfcf--2pp2r-eth0 coredns-674b8bbfcf- kube-system 6123d484-fe1b-4bcd-a0b1-5639f75795e9 948 0 2025-05-17 00:09:22 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-3-n-3b0dbcbd78 coredns-674b8bbfcf-2pp2r eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali0336039be35 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="60ab39a3cfe8e18880a85bd01fbed5dc68e692c73afbde601fecc06db3f86d78" Namespace="kube-system" Pod="coredns-674b8bbfcf-2pp2r" WorkloadEndpoint="ci--4081--3--3--n--3b0dbcbd78-k8s-coredns--674b8bbfcf--2pp2r-" May 17 00:10:00.005997 containerd[1478]: 2025-05-17 00:09:59.772 [INFO][4555] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="60ab39a3cfe8e18880a85bd01fbed5dc68e692c73afbde601fecc06db3f86d78" Namespace="kube-system" Pod="coredns-674b8bbfcf-2pp2r" WorkloadEndpoint="ci--4081--3--3--n--3b0dbcbd78-k8s-coredns--674b8bbfcf--2pp2r-eth0" May 17 00:10:00.005997 containerd[1478]: 2025-05-17 00:09:59.819 [INFO][4577] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="60ab39a3cfe8e18880a85bd01fbed5dc68e692c73afbde601fecc06db3f86d78" HandleID="k8s-pod-network.60ab39a3cfe8e18880a85bd01fbed5dc68e692c73afbde601fecc06db3f86d78" Workload="ci--4081--3--3--n--3b0dbcbd78-k8s-coredns--674b8bbfcf--2pp2r-eth0" May 17 00:10:00.005997 containerd[1478]: 2025-05-17 00:09:59.819 [INFO][4577] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="60ab39a3cfe8e18880a85bd01fbed5dc68e692c73afbde601fecc06db3f86d78" HandleID="k8s-pod-network.60ab39a3cfe8e18880a85bd01fbed5dc68e692c73afbde601fecc06db3f86d78" Workload="ci--4081--3--3--n--3b0dbcbd78-k8s-coredns--674b8bbfcf--2pp2r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d7020), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-3-n-3b0dbcbd78", "pod":"coredns-674b8bbfcf-2pp2r", "timestamp":"2025-05-17 00:09:59.819201056 +0000 UTC"}, Hostname:"ci-4081-3-3-n-3b0dbcbd78", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:10:00.005997 containerd[1478]: 2025-05-17 00:09:59.819 [INFO][4577] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:10:00.005997 containerd[1478]: 2025-05-17 00:09:59.832 [INFO][4577] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:10:00.005997 containerd[1478]: 2025-05-17 00:09:59.833 [INFO][4577] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-n-3b0dbcbd78' May 17 00:10:00.005997 containerd[1478]: 2025-05-17 00:09:59.886 [INFO][4577] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.60ab39a3cfe8e18880a85bd01fbed5dc68e692c73afbde601fecc06db3f86d78" host="ci-4081-3-3-n-3b0dbcbd78" May 17 00:10:00.005997 containerd[1478]: 2025-05-17 00:09:59.906 [INFO][4577] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-3-n-3b0dbcbd78" May 17 00:10:00.005997 containerd[1478]: 2025-05-17 00:09:59.922 [INFO][4577] ipam/ipam.go 511: Trying affinity for 192.168.2.0/26 host="ci-4081-3-3-n-3b0dbcbd78" May 17 00:10:00.005997 containerd[1478]: 2025-05-17 00:09:59.925 [INFO][4577] ipam/ipam.go 158: Attempting to load block cidr=192.168.2.0/26 host="ci-4081-3-3-n-3b0dbcbd78" May 17 00:10:00.005997 containerd[1478]: 2025-05-17 00:09:59.936 [INFO][4577] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.2.0/26 host="ci-4081-3-3-n-3b0dbcbd78" May 17 00:10:00.005997 containerd[1478]: 2025-05-17 00:09:59.942 [INFO][4577] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.2.0/26 handle="k8s-pod-network.60ab39a3cfe8e18880a85bd01fbed5dc68e692c73afbde601fecc06db3f86d78" host="ci-4081-3-3-n-3b0dbcbd78" May 17 00:10:00.005997 containerd[1478]: 2025-05-17 00:09:59.946 [INFO][4577] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.60ab39a3cfe8e18880a85bd01fbed5dc68e692c73afbde601fecc06db3f86d78 May 17 00:10:00.005997 containerd[1478]: 2025-05-17 00:09:59.953 [INFO][4577] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.2.0/26 handle="k8s-pod-network.60ab39a3cfe8e18880a85bd01fbed5dc68e692c73afbde601fecc06db3f86d78" host="ci-4081-3-3-n-3b0dbcbd78" May 17 00:10:00.005997 containerd[1478]: 2025-05-17 00:09:59.963 [INFO][4577] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.2.7/26] block=192.168.2.0/26 handle="k8s-pod-network.60ab39a3cfe8e18880a85bd01fbed5dc68e692c73afbde601fecc06db3f86d78" host="ci-4081-3-3-n-3b0dbcbd78" May 17 00:10:00.005997 containerd[1478]: 2025-05-17 00:09:59.963 [INFO][4577] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.2.7/26] handle="k8s-pod-network.60ab39a3cfe8e18880a85bd01fbed5dc68e692c73afbde601fecc06db3f86d78" host="ci-4081-3-3-n-3b0dbcbd78" May 17 00:10:00.005997 containerd[1478]: 2025-05-17 00:09:59.963 [INFO][4577] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 17 00:10:00.005997 containerd[1478]: 2025-05-17 00:09:59.963 [INFO][4577] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.2.7/26] IPv6=[] ContainerID="60ab39a3cfe8e18880a85bd01fbed5dc68e692c73afbde601fecc06db3f86d78" HandleID="k8s-pod-network.60ab39a3cfe8e18880a85bd01fbed5dc68e692c73afbde601fecc06db3f86d78" Workload="ci--4081--3--3--n--3b0dbcbd78-k8s-coredns--674b8bbfcf--2pp2r-eth0" May 17 00:10:00.006577 containerd[1478]: 2025-05-17 00:09:59.966 [INFO][4555] cni-plugin/k8s.go 418: Populated endpoint ContainerID="60ab39a3cfe8e18880a85bd01fbed5dc68e692c73afbde601fecc06db3f86d78" Namespace="kube-system" Pod="coredns-674b8bbfcf-2pp2r" WorkloadEndpoint="ci--4081--3--3--n--3b0dbcbd78-k8s-coredns--674b8bbfcf--2pp2r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--3b0dbcbd78-k8s-coredns--674b8bbfcf--2pp2r-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"6123d484-fe1b-4bcd-a0b1-5639f75795e9", ResourceVersion:"948", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 9, 22, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-3b0dbcbd78", ContainerID:"", Pod:"coredns-674b8bbfcf-2pp2r", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.2.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0336039be35", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:10:00.006577 containerd[1478]: 2025-05-17 00:09:59.967 [INFO][4555] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.2.7/32] ContainerID="60ab39a3cfe8e18880a85bd01fbed5dc68e692c73afbde601fecc06db3f86d78" Namespace="kube-system" Pod="coredns-674b8bbfcf-2pp2r" WorkloadEndpoint="ci--4081--3--3--n--3b0dbcbd78-k8s-coredns--674b8bbfcf--2pp2r-eth0" May 17 00:10:00.006577 containerd[1478]: 2025-05-17 00:09:59.967 [INFO][4555] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0336039be35 ContainerID="60ab39a3cfe8e18880a85bd01fbed5dc68e692c73afbde601fecc06db3f86d78" Namespace="kube-system" Pod="coredns-674b8bbfcf-2pp2r" WorkloadEndpoint="ci--4081--3--3--n--3b0dbcbd78-k8s-coredns--674b8bbfcf--2pp2r-eth0"
May 17 00:10:00.006577 containerd[1478]: 2025-05-17 00:09:59.977 [INFO][4555] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="60ab39a3cfe8e18880a85bd01fbed5dc68e692c73afbde601fecc06db3f86d78" Namespace="kube-system" Pod="coredns-674b8bbfcf-2pp2r" WorkloadEndpoint="ci--4081--3--3--n--3b0dbcbd78-k8s-coredns--674b8bbfcf--2pp2r-eth0" May 17 00:10:00.006577 containerd[1478]: 2025-05-17 00:09:59.977 [INFO][4555] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="60ab39a3cfe8e18880a85bd01fbed5dc68e692c73afbde601fecc06db3f86d78" Namespace="kube-system" Pod="coredns-674b8bbfcf-2pp2r" WorkloadEndpoint="ci--4081--3--3--n--3b0dbcbd78-k8s-coredns--674b8bbfcf--2pp2r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--3b0dbcbd78-k8s-coredns--674b8bbfcf--2pp2r-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"6123d484-fe1b-4bcd-a0b1-5639f75795e9", ResourceVersion:"948", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 9, 22, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-3b0dbcbd78", ContainerID:"60ab39a3cfe8e18880a85bd01fbed5dc68e692c73afbde601fecc06db3f86d78", Pod:"coredns-674b8bbfcf-2pp2r", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.2.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0336039be35", MAC:"26:1d:d7:8d:09:e9", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:10:00.006577 containerd[1478]: 2025-05-17 00:10:00.003 [INFO][4555] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="60ab39a3cfe8e18880a85bd01fbed5dc68e692c73afbde601fecc06db3f86d78" Namespace="kube-system" Pod="coredns-674b8bbfcf-2pp2r" WorkloadEndpoint="ci--4081--3--3--n--3b0dbcbd78-k8s-coredns--674b8bbfcf--2pp2r-eth0" May 17 00:10:00.010027 systemd-networkd[1361]: cali1e62d21233b: Gained IPv6LL May 17 00:10:00.032536 containerd[1478]: time="2025-05-17T00:10:00.031428499Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-654599565b-lpjvt,Uid:5021f4d9-7c3c-411f-be83-7378c09cd50b,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"0183b29175c54aeb2849746f6b658eea615ed606e0fdf58e662125f0d99bf3e3\"" May 17 00:10:00.041540 containerd[1478]: time="2025-05-17T00:10:00.041121072Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 17 00:10:00.041540 containerd[1478]: time="2025-05-17T00:10:00.041178793Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:10:00.041540 containerd[1478]: time="2025-05-17T00:10:00.041190033Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:10:00.041540 containerd[1478]: time="2025-05-17T00:10:00.041267874Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:10:00.066682 systemd[1]: Started cri-containerd-60ab39a3cfe8e18880a85bd01fbed5dc68e692c73afbde601fecc06db3f86d78.scope - libcontainer container 60ab39a3cfe8e18880a85bd01fbed5dc68e692c73afbde601fecc06db3f86d78. May 17 00:10:00.068629 systemd-networkd[1361]: cali0056da4d310: Gained IPv6LL May 17 00:10:00.113007 containerd[1478]: time="2025-05-17T00:10:00.112733985Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-2pp2r,Uid:6123d484-fe1b-4bcd-a0b1-5639f75795e9,Namespace:kube-system,Attempt:1,} returns sandbox id \"60ab39a3cfe8e18880a85bd01fbed5dc68e692c73afbde601fecc06db3f86d78\"" May 17 00:10:00.124038 containerd[1478]: time="2025-05-17T00:10:00.123407455Z" level=info msg="CreateContainer within sandbox \"60ab39a3cfe8e18880a85bd01fbed5dc68e692c73afbde601fecc06db3f86d78\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 17 00:10:00.137510 containerd[1478]: time="2025-05-17T00:10:00.137456865Z" level=info msg="CreateContainer within sandbox \"60ab39a3cfe8e18880a85bd01fbed5dc68e692c73afbde601fecc06db3f86d78\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8648c7ae418d869903a1605c47182e2f7e672d373af1f78abbf128c7a221c159\"" May 17 00:10:00.139302 containerd[1478]: time="2025-05-17T00:10:00.138300640Z" level=info msg="StartContainer for \"8648c7ae418d869903a1605c47182e2f7e672d373af1f78abbf128c7a221c159\"" May 17 00:10:00.168767 systemd[1]: Started cri-containerd-8648c7ae418d869903a1605c47182e2f7e672d373af1f78abbf128c7a221c159.scope - libcontainer container 8648c7ae418d869903a1605c47182e2f7e672d373af1f78abbf128c7a221c159. May 17 00:10:00.206336 containerd[1478]: time="2025-05-17T00:10:00.206285529Z" level=info msg="StartContainer for \"8648c7ae418d869903a1605c47182e2f7e672d373af1f78abbf128c7a221c159\" returns successfully" May 17 00:10:00.516806 systemd-networkd[1361]: cali18b610d7672: Gained IPv6LL May 17 00:10:00.521328 containerd[1478]: time="2025-05-17T00:10:00.519923467Z" level=info msg="StopPodSandbox for \"acccd57f2ac1bc78c3018663394757376b3ac7754d0a8c5a1d496d687f3e7633\"" May 17 00:10:00.630417 containerd[1478]: 2025-05-17 00:10:00.578 [INFO][4731] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="acccd57f2ac1bc78c3018663394757376b3ac7754d0a8c5a1d496d687f3e7633" May 17 00:10:00.630417 containerd[1478]: 2025-05-17 00:10:00.578 [INFO][4731] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="acccd57f2ac1bc78c3018663394757376b3ac7754d0a8c5a1d496d687f3e7633" iface="eth0" netns="/var/run/netns/cni-7bd71031-807b-879a-3ca8-7d1663b24eee" May 17 00:10:00.630417 containerd[1478]: 2025-05-17 00:10:00.579 [INFO][4731] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="acccd57f2ac1bc78c3018663394757376b3ac7754d0a8c5a1d496d687f3e7633" iface="eth0" netns="/var/run/netns/cni-7bd71031-807b-879a-3ca8-7d1663b24eee"
May 17 00:10:00.630417 containerd[1478]: 2025-05-17 00:10:00.581 [INFO][4731] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="acccd57f2ac1bc78c3018663394757376b3ac7754d0a8c5a1d496d687f3e7633" iface="eth0" netns="/var/run/netns/cni-7bd71031-807b-879a-3ca8-7d1663b24eee" May 17 00:10:00.630417 containerd[1478]: 2025-05-17 00:10:00.581 [INFO][4731] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="acccd57f2ac1bc78c3018663394757376b3ac7754d0a8c5a1d496d687f3e7633" May 17 00:10:00.630417 containerd[1478]: 2025-05-17 00:10:00.581 [INFO][4731] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="acccd57f2ac1bc78c3018663394757376b3ac7754d0a8c5a1d496d687f3e7633" May 17 00:10:00.630417 containerd[1478]: 2025-05-17 00:10:00.609 [INFO][4738] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="acccd57f2ac1bc78c3018663394757376b3ac7754d0a8c5a1d496d687f3e7633" HandleID="k8s-pod-network.acccd57f2ac1bc78c3018663394757376b3ac7754d0a8c5a1d496d687f3e7633" Workload="ci--4081--3--3--n--3b0dbcbd78-k8s-coredns--674b8bbfcf--5rmrz-eth0" May 17 00:10:00.630417 containerd[1478]: 2025-05-17 00:10:00.609 [INFO][4738] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:10:00.630417 containerd[1478]: 2025-05-17 00:10:00.609 [INFO][4738] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:10:00.630417 containerd[1478]: 2025-05-17 00:10:00.623 [WARNING][4738] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="acccd57f2ac1bc78c3018663394757376b3ac7754d0a8c5a1d496d687f3e7633" HandleID="k8s-pod-network.acccd57f2ac1bc78c3018663394757376b3ac7754d0a8c5a1d496d687f3e7633" Workload="ci--4081--3--3--n--3b0dbcbd78-k8s-coredns--674b8bbfcf--5rmrz-eth0" May 17 00:10:00.630417 containerd[1478]: 2025-05-17 00:10:00.623 [INFO][4738] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="acccd57f2ac1bc78c3018663394757376b3ac7754d0a8c5a1d496d687f3e7633" HandleID="k8s-pod-network.acccd57f2ac1bc78c3018663394757376b3ac7754d0a8c5a1d496d687f3e7633" Workload="ci--4081--3--3--n--3b0dbcbd78-k8s-coredns--674b8bbfcf--5rmrz-eth0" May 17 00:10:00.630417 containerd[1478]: 2025-05-17 00:10:00.626 [INFO][4738] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:10:00.630417 containerd[1478]: 2025-05-17 00:10:00.628 [INFO][4731] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="acccd57f2ac1bc78c3018663394757376b3ac7754d0a8c5a1d496d687f3e7633" May 17 00:10:00.631521 containerd[1478]: time="2025-05-17T00:10:00.630933002Z" level=info msg="TearDown network for sandbox \"acccd57f2ac1bc78c3018663394757376b3ac7754d0a8c5a1d496d687f3e7633\" successfully" May 17 00:10:00.631521 containerd[1478]: time="2025-05-17T00:10:00.630968562Z" level=info msg="StopPodSandbox for \"acccd57f2ac1bc78c3018663394757376b3ac7754d0a8c5a1d496d687f3e7633\" returns successfully" May 17 00:10:00.631917 containerd[1478]: time="2025-05-17T00:10:00.631888699Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-5rmrz,Uid:b9015c7c-53fd-4356-9465-8e7b4b00eae6,Namespace:kube-system,Attempt:1,}" May 17 00:10:00.711969 systemd[1]: run-netns-cni\x2d7bd71031\x2d807b\x2d879a\x2d3ca8\x2d7d1663b24eee.mount: Deactivated successfully.
May 17 00:10:00.849395 systemd-networkd[1361]: cali6adb06cedb7: Link UP May 17 00:10:00.857120 systemd-networkd[1361]: cali6adb06cedb7: Gained carrier May 17 00:10:00.884499 kubelet[2663]: I0517 00:10:00.884160 2663 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-2pp2r" podStartSLOduration=38.884142185 podStartE2EDuration="38.884142185s" podCreationTimestamp="2025-05-17 00:09:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:10:00.844710124 +0000 UTC m=+45.434552777" watchObservedRunningTime="2025-05-17 00:10:00.884142185 +0000 UTC m=+45.473984838" May 17 00:10:00.889987 containerd[1478]: 2025-05-17 00:10:00.718 [INFO][4746] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--n--3b0dbcbd78-k8s-coredns--674b8bbfcf--5rmrz-eth0 coredns-674b8bbfcf- kube-system b9015c7c-53fd-4356-9465-8e7b4b00eae6 960 0 2025-05-17 00:09:22 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-3-n-3b0dbcbd78 coredns-674b8bbfcf-5rmrz eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali6adb06cedb7 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="2c318a2973455c1f54e469390ebf7a4cd0866c5914bc578eb0d877cd1d83d319" Namespace="kube-system" Pod="coredns-674b8bbfcf-5rmrz" WorkloadEndpoint="ci--4081--3--3--n--3b0dbcbd78-k8s-coredns--674b8bbfcf--5rmrz-" May 17 00:10:00.889987 containerd[1478]: 2025-05-17 00:10:00.718 [INFO][4746] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2c318a2973455c1f54e469390ebf7a4cd0866c5914bc578eb0d877cd1d83d319" Namespace="kube-system" Pod="coredns-674b8bbfcf-5rmrz" WorkloadEndpoint="ci--4081--3--3--n--3b0dbcbd78-k8s-coredns--674b8bbfcf--5rmrz-eth0" May 17 00:10:00.889987 containerd[1478]: 2025-05-17 00:10:00.756 [INFO][4758] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2c318a2973455c1f54e469390ebf7a4cd0866c5914bc578eb0d877cd1d83d319" HandleID="k8s-pod-network.2c318a2973455c1f54e469390ebf7a4cd0866c5914bc578eb0d877cd1d83d319" Workload="ci--4081--3--3--n--3b0dbcbd78-k8s-coredns--674b8bbfcf--5rmrz-eth0" May 17 00:10:00.889987 containerd[1478]: 2025-05-17 00:10:00.757 [INFO][4758] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2c318a2973455c1f54e469390ebf7a4cd0866c5914bc578eb0d877cd1d83d319" HandleID="k8s-pod-network.2c318a2973455c1f54e469390ebf7a4cd0866c5914bc578eb0d877cd1d83d319" Workload="ci--4081--3--3--n--3b0dbcbd78-k8s-coredns--674b8bbfcf--5rmrz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400022f130), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-3-n-3b0dbcbd78", "pod":"coredns-674b8bbfcf-5rmrz", "timestamp":"2025-05-17 00:10:00.756885722 +0000 UTC"}, Hostname:"ci-4081-3-3-n-3b0dbcbd78", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:10:00.889987 containerd[1478]: 2025-05-17 00:10:00.757 [INFO][4758] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:10:00.889987 containerd[1478]: 2025-05-17 00:10:00.757 [INFO][4758] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:10:00.889987 containerd[1478]: 2025-05-17 00:10:00.757 [INFO][4758] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-n-3b0dbcbd78' May 17 00:10:00.889987 containerd[1478]: 2025-05-17 00:10:00.769 [INFO][4758] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2c318a2973455c1f54e469390ebf7a4cd0866c5914bc578eb0d877cd1d83d319" host="ci-4081-3-3-n-3b0dbcbd78" May 17 00:10:00.889987 containerd[1478]: 2025-05-17 00:10:00.778 [INFO][4758] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-3-n-3b0dbcbd78" May 17 00:10:00.889987 containerd[1478]: 2025-05-17 00:10:00.787 [INFO][4758] ipam/ipam.go 511: Trying affinity for 192.168.2.0/26 host="ci-4081-3-3-n-3b0dbcbd78" May 17 00:10:00.889987 containerd[1478]: 2025-05-17 00:10:00.790 [INFO][4758] ipam/ipam.go 158: Attempting to load block cidr=192.168.2.0/26 host="ci-4081-3-3-n-3b0dbcbd78" May 17 00:10:00.889987 containerd[1478]: 2025-05-17 00:10:00.794 [INFO][4758] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.2.0/26 host="ci-4081-3-3-n-3b0dbcbd78" May 17 00:10:00.889987 containerd[1478]: 2025-05-17 00:10:00.794 [INFO][4758] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.2.0/26 handle="k8s-pod-network.2c318a2973455c1f54e469390ebf7a4cd0866c5914bc578eb0d877cd1d83d319" host="ci-4081-3-3-n-3b0dbcbd78" May 17 00:10:00.889987 containerd[1478]: 2025-05-17 00:10:00.796 [INFO][4758] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.2c318a2973455c1f54e469390ebf7a4cd0866c5914bc578eb0d877cd1d83d319 May 17 00:10:00.889987 containerd[1478]: 2025-05-17 00:10:00.802 [INFO][4758] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.2.0/26 handle="k8s-pod-network.2c318a2973455c1f54e469390ebf7a4cd0866c5914bc578eb0d877cd1d83d319" host="ci-4081-3-3-n-3b0dbcbd78" May 17 00:10:00.889987 containerd[1478]: 2025-05-17 00:10:00.820 [INFO][4758] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.2.8/26] block=192.168.2.0/26 handle="k8s-pod-network.2c318a2973455c1f54e469390ebf7a4cd0866c5914bc578eb0d877cd1d83d319" host="ci-4081-3-3-n-3b0dbcbd78" May 17 00:10:00.889987 containerd[1478]: 2025-05-17 00:10:00.820 [INFO][4758] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.2.8/26] handle="k8s-pod-network.2c318a2973455c1f54e469390ebf7a4cd0866c5914bc578eb0d877cd1d83d319" host="ci-4081-3-3-n-3b0dbcbd78" May 17 00:10:00.889987 containerd[1478]: 2025-05-17 00:10:00.820 [INFO][4758] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 17 00:10:00.889987 containerd[1478]: 2025-05-17 00:10:00.820 [INFO][4758] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.2.8/26] IPv6=[] ContainerID="2c318a2973455c1f54e469390ebf7a4cd0866c5914bc578eb0d877cd1d83d319" HandleID="k8s-pod-network.2c318a2973455c1f54e469390ebf7a4cd0866c5914bc578eb0d877cd1d83d319" Workload="ci--4081--3--3--n--3b0dbcbd78-k8s-coredns--674b8bbfcf--5rmrz-eth0" May 17 00:10:00.891363 containerd[1478]: 2025-05-17 00:10:00.830 [INFO][4746] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2c318a2973455c1f54e469390ebf7a4cd0866c5914bc578eb0d877cd1d83d319" Namespace="kube-system" Pod="coredns-674b8bbfcf-5rmrz" WorkloadEndpoint="ci--4081--3--3--n--3b0dbcbd78-k8s-coredns--674b8bbfcf--5rmrz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--3b0dbcbd78-k8s-coredns--674b8bbfcf--5rmrz-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"b9015c7c-53fd-4356-9465-8e7b4b00eae6", ResourceVersion:"960", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 9, 22, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-3b0dbcbd78", ContainerID:"", Pod:"coredns-674b8bbfcf-5rmrz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.2.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6adb06cedb7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:10:00.891363 containerd[1478]: 2025-05-17 00:10:00.830 [INFO][4746] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.2.8/32] ContainerID="2c318a2973455c1f54e469390ebf7a4cd0866c5914bc578eb0d877cd1d83d319" Namespace="kube-system" Pod="coredns-674b8bbfcf-5rmrz" WorkloadEndpoint="ci--4081--3--3--n--3b0dbcbd78-k8s-coredns--674b8bbfcf--5rmrz-eth0" May 17 00:10:00.891363 containerd[1478]: 2025-05-17 00:10:00.830 [INFO][4746] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6adb06cedb7 ContainerID="2c318a2973455c1f54e469390ebf7a4cd0866c5914bc578eb0d877cd1d83d319" Namespace="kube-system" Pod="coredns-674b8bbfcf-5rmrz" WorkloadEndpoint="ci--4081--3--3--n--3b0dbcbd78-k8s-coredns--674b8bbfcf--5rmrz-eth0"
May 17 00:10:00.891363 containerd[1478]: 2025-05-17 00:10:00.867 [INFO][4746] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2c318a2973455c1f54e469390ebf7a4cd0866c5914bc578eb0d877cd1d83d319" Namespace="kube-system" Pod="coredns-674b8bbfcf-5rmrz" WorkloadEndpoint="ci--4081--3--3--n--3b0dbcbd78-k8s-coredns--674b8bbfcf--5rmrz-eth0" May 17 00:10:00.891363 containerd[1478]: 2025-05-17 00:10:00.868 [INFO][4746] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2c318a2973455c1f54e469390ebf7a4cd0866c5914bc578eb0d877cd1d83d319" Namespace="kube-system" Pod="coredns-674b8bbfcf-5rmrz" WorkloadEndpoint="ci--4081--3--3--n--3b0dbcbd78-k8s-coredns--674b8bbfcf--5rmrz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--3b0dbcbd78-k8s-coredns--674b8bbfcf--5rmrz-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"b9015c7c-53fd-4356-9465-8e7b4b00eae6", ResourceVersion:"960", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 9, 22, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-3b0dbcbd78", ContainerID:"2c318a2973455c1f54e469390ebf7a4cd0866c5914bc578eb0d877cd1d83d319", Pod:"coredns-674b8bbfcf-5rmrz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.2.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6adb06cedb7", MAC:"7a:d4:78:a1:68:7d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:10:00.891363 containerd[1478]: 2025-05-17 00:10:00.885 [INFO][4746] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2c318a2973455c1f54e469390ebf7a4cd0866c5914bc578eb0d877cd1d83d319" Namespace="kube-system" Pod="coredns-674b8bbfcf-5rmrz" WorkloadEndpoint="ci--4081--3--3--n--3b0dbcbd78-k8s-coredns--674b8bbfcf--5rmrz-eth0" May 17 00:10:00.901487 systemd-networkd[1361]: calif7019556d94: Gained IPv6LL May 17 00:10:00.930202 containerd[1478]: time="2025-05-17T00:10:00.929135145Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:10:00.930202 containerd[1478]: time="2025-05-17T00:10:00.929194667Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 17 00:10:00.930202 containerd[1478]: time="2025-05-17T00:10:00.929210547Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:10:00.930202 containerd[1478]: time="2025-05-17T00:10:00.929292828Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:10:00.975806 systemd[1]: Started cri-containerd-2c318a2973455c1f54e469390ebf7a4cd0866c5914bc578eb0d877cd1d83d319.scope - libcontainer container 2c318a2973455c1f54e469390ebf7a4cd0866c5914bc578eb0d877cd1d83d319. May 17 00:10:01.028601 systemd-networkd[1361]: cali0336039be35: Gained IPv6LL May 17 00:10:01.042948 containerd[1478]: time="2025-05-17T00:10:01.042470164Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-5rmrz,Uid:b9015c7c-53fd-4356-9465-8e7b4b00eae6,Namespace:kube-system,Attempt:1,} returns sandbox id \"2c318a2973455c1f54e469390ebf7a4cd0866c5914bc578eb0d877cd1d83d319\"" May 17 00:10:01.053064 containerd[1478]: time="2025-05-17T00:10:01.052992272Z" level=info msg="CreateContainer within sandbox \"2c318a2973455c1f54e469390ebf7a4cd0866c5914bc578eb0d877cd1d83d319\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 17 00:10:01.077742 containerd[1478]: time="2025-05-17T00:10:01.077669192Z" level=info msg="CreateContainer within sandbox \"2c318a2973455c1f54e469390ebf7a4cd0866c5914bc578eb0d877cd1d83d319\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e914d5400db0799162218d0aed374232ed00ff1343f22ce6aa98c0ad381805d4\"" May 17 00:10:01.079185 containerd[1478]: time="2025-05-17T00:10:01.079152099Z" level=info msg="StartContainer for \"e914d5400db0799162218d0aed374232ed00ff1343f22ce6aa98c0ad381805d4\"" May 17 00:10:01.123700 systemd[1]: Started cri-containerd-e914d5400db0799162218d0aed374232ed00ff1343f22ce6aa98c0ad381805d4.scope - libcontainer container e914d5400db0799162218d0aed374232ed00ff1343f22ce6aa98c0ad381805d4.
May 17 00:10:01.183043 containerd[1478]: time="2025-05-17T00:10:01.182987432Z" level=info msg="StartContainer for \"e914d5400db0799162218d0aed374232ed00ff1343f22ce6aa98c0ad381805d4\" returns successfully" May 17 00:10:01.712188 containerd[1478]: time="2025-05-17T00:10:01.712102716Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:10:01.713663 containerd[1478]: time="2025-05-17T00:10:01.713616583Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.0: active requests=0, bytes read=44453213" May 17 00:10:01.716517 containerd[1478]: time="2025-05-17T00:10:01.715506417Z" level=info msg="ImageCreate event name:\"sha256:0d503660232383641bf9af3b7e4ef066c0e96a8ec586f123e5b56b6a196c983d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:10:01.720093 containerd[1478]: time="2025-05-17T00:10:01.720039658Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:10:01.721127 containerd[1478]: time="2025-05-17T00:10:01.721086356Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" with image id \"sha256:0d503660232383641bf9af3b7e4ef066c0e96a8ec586f123e5b56b6a196c983d\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\", size \"45822470\" in 3.618465017s" May 17 00:10:01.721127 containerd[1478]: time="2025-05-17T00:10:01.721122837Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" returns image reference \"sha256:0d503660232383641bf9af3b7e4ef066c0e96a8ec586f123e5b56b6a196c983d\"" May 17 00:10:01.722678 containerd[1478]: time="2025-05-17T00:10:01.722638224Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.0\"" May 17 00:10:01.726240 containerd[1478]: time="2025-05-17T00:10:01.726205288Z" level=info msg="CreateContainer within sandbox \"4db8268c0873154b35bef0066044e1d6af482044b459dfa803bef728752535ba\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 17 00:10:01.747941 containerd[1478]: time="2025-05-17T00:10:01.747884835Z" level=info msg="CreateContainer within sandbox \"4db8268c0873154b35bef0066044e1d6af482044b459dfa803bef728752535ba\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"b5e36d660a681582dbe04d906031a91b8fcb3b30c0fa7c3e5ec75dab6d818177\"" May 17 00:10:01.748672 containerd[1478]: time="2025-05-17T00:10:01.748523286Z" level=info msg="StartContainer for \"b5e36d660a681582dbe04d906031a91b8fcb3b30c0fa7c3e5ec75dab6d818177\"" May 17 00:10:01.800728 systemd[1]: Started cri-containerd-b5e36d660a681582dbe04d906031a91b8fcb3b30c0fa7c3e5ec75dab6d818177.scope - libcontainer container b5e36d660a681582dbe04d906031a91b8fcb3b30c0fa7c3e5ec75dab6d818177. 
May 17 00:10:01.853394 kubelet[2663]: I0517 00:10:01.853270 2663 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-5rmrz" podStartSLOduration=39.853249435 podStartE2EDuration="39.853249435s" podCreationTimestamp="2025-05-17 00:09:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:10:01.852784227 +0000 UTC m=+46.442626880" watchObservedRunningTime="2025-05-17 00:10:01.853249435 +0000 UTC m=+46.443092088" May 17 00:10:01.864496 containerd[1478]: time="2025-05-17T00:10:01.863999787Z" level=info msg="StartContainer for \"b5e36d660a681582dbe04d906031a91b8fcb3b30c0fa7c3e5ec75dab6d818177\" returns successfully" May 17 00:10:02.215308 kubelet[2663]: I0517 00:10:02.214599 2663 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 00:10:02.696980 systemd[1]: run-containerd-runc-k8s.io-b5e36d660a681582dbe04d906031a91b8fcb3b30c0fa7c3e5ec75dab6d818177-runc.Ccy22Q.mount: Deactivated successfully. May 17 00:10:02.820614 systemd-networkd[1361]: cali6adb06cedb7: Gained IPv6LL May 17 00:10:02.867054 kubelet[2663]: I0517 00:10:02.866340 2663 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-654599565b-85g84" podStartSLOduration=26.245537672 podStartE2EDuration="29.866300331s" podCreationTimestamp="2025-05-17 00:09:33 +0000 UTC" firstStartedPulling="2025-05-17 00:09:58.101693762 +0000 UTC m=+42.691536415" lastFinishedPulling="2025-05-17 00:10:01.722456421 +0000 UTC m=+46.312299074" observedRunningTime="2025-05-17 00:10:02.864729742 +0000 UTC m=+47.454572395" watchObservedRunningTime="2025-05-17 00:10:02.866300331 +0000 UTC m=+47.456142984" May 17 00:10:03.568207 containerd[1478]: time="2025-05-17T00:10:03.567321840Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:10:03.569419 containerd[1478]: time="2025-05-17T00:10:03.569375677Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.0: active requests=0, bytes read=8226240" May 17 00:10:03.572100 containerd[1478]: time="2025-05-17T00:10:03.571124628Z" level=info msg="ImageCreate event name:\"sha256:ebe7e098653491dec9f15f87d7f5d33f47b09d1d6f3ef83deeaaa6237024c045\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:10:03.574066 containerd[1478]: time="2025-05-17T00:10:03.574021280Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:27883a4104876fe239311dd93ce6efd0c4a87de7163d57a4c8d96bd65a287ffd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:10:03.575015 containerd[1478]: time="2025-05-17T00:10:03.574982378Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.0\" with image id \"sha256:ebe7e098653491dec9f15f87d7f5d33f47b09d1d6f3ef83deeaaa6237024c045\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:27883a4104876fe239311dd93ce6efd0c4a87de7163d57a4c8d96bd65a287ffd\", size \"9595481\" in 1.852303112s" May 17 00:10:03.575133 containerd[1478]: time="2025-05-17T00:10:03.575115140Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.0\" returns image reference \"sha256:ebe7e098653491dec9f15f87d7f5d33f47b09d1d6f3ef83deeaaa6237024c045\""
May 17 00:10:03.577338 containerd[1478]: time="2025-05-17T00:10:03.577311419Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\"" May 17 00:10:03.580871 containerd[1478]: time="2025-05-17T00:10:03.580816562Z" level=info msg="CreateContainer within sandbox \"397e814ed288bdc3b2f38604f1a686ae9d938678ae3384aa254dfbc9b065f84d\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 17 00:10:03.611422 containerd[1478]: time="2025-05-17T00:10:03.611374991Z" level=info msg="CreateContainer within sandbox \"397e814ed288bdc3b2f38604f1a686ae9d938678ae3384aa254dfbc9b065f84d\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"f98d6e1d4a168f0559790b25c26d50bacd659eaf63c1ac6e6246871d9d267537\"" May 17 00:10:03.615018 containerd[1478]: time="2025-05-17T00:10:03.614967576Z" level=info msg="StartContainer for \"f98d6e1d4a168f0559790b25c26d50bacd659eaf63c1ac6e6246871d9d267537\"" May 17 00:10:03.617724 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount420678258.mount: Deactivated successfully. May 17 00:10:03.658794 systemd[1]: Started cri-containerd-f98d6e1d4a168f0559790b25c26d50bacd659eaf63c1ac6e6246871d9d267537.scope - libcontainer container f98d6e1d4a168f0559790b25c26d50bacd659eaf63c1ac6e6246871d9d267537. May 17 00:10:03.711382 containerd[1478]: time="2025-05-17T00:10:03.711324388Z" level=info msg="StartContainer for \"f98d6e1d4a168f0559790b25c26d50bacd659eaf63c1ac6e6246871d9d267537\" returns successfully" May 17 00:10:03.854989 kubelet[2663]: I0517 00:10:03.854279 2663 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 00:10:05.796927 containerd[1478]: time="2025-05-17T00:10:05.796854452Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:10:05.798487 containerd[1478]: time="2025-05-17T00:10:05.798417120Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.0: active requests=0, bytes read=48045219" May 17 00:10:05.799550 containerd[1478]: time="2025-05-17T00:10:05.799508700Z" level=info msg="ImageCreate event name:\"sha256:4188fe2931435deda58a0dc1767a2f6ad2bb27e47662ccec626bd07006f56373\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:10:05.802009 containerd[1478]: time="2025-05-17T00:10:05.801922304Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:eb5bc5c9e7a71f1d8ea69bbcc8e54b84fb7ec1e32d919c8b148f80b770f20182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:10:05.802884 containerd[1478]: time="2025-05-17T00:10:05.802740919Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" with image id \"sha256:4188fe2931435deda58a0dc1767a2f6ad2bb27e47662ccec626bd07006f56373\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:eb5bc5c9e7a71f1d8ea69bbcc8e54b84fb7ec1e32d919c8b148f80b770f20182\", size \"49414428\" in 2.225247936s" May 17 00:10:05.802884 containerd[1478]: time="2025-05-17T00:10:05.802782039Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" returns image reference \"sha256:4188fe2931435deda58a0dc1767a2f6ad2bb27e47662ccec626bd07006f56373\"" May 17 00:10:05.805201 containerd[1478]: time="2025-05-17T00:10:05.805166483Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\""
May 17 00:10:05.830005 containerd[1478]: time="2025-05-17T00:10:05.829821528Z" level=info msg="CreateContainer within sandbox \"e4691f2defc0e33fed4422e706ca0bfa4248b820561b8ae0221895b62213e76f\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 17 00:10:05.853167 containerd[1478]: time="2025-05-17T00:10:05.853042588Z" level=info msg="CreateContainer within sandbox \"e4691f2defc0e33fed4422e706ca0bfa4248b820561b8ae0221895b62213e76f\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"16b27db4f29ec172a1b389fd4655d4c00a2ed4f2c932d251341539949175a6bf\"" May 17 00:10:05.856473 containerd[1478]: time="2025-05-17T00:10:05.855779158Z" level=info msg="StartContainer for \"16b27db4f29ec172a1b389fd4655d4c00a2ed4f2c932d251341539949175a6bf\"" May 17 00:10:05.892763 systemd[1]: Started cri-containerd-16b27db4f29ec172a1b389fd4655d4c00a2ed4f2c932d251341539949175a6bf.scope - libcontainer container 16b27db4f29ec172a1b389fd4655d4c00a2ed4f2c932d251341539949175a6bf. May 17 00:10:05.933668 containerd[1478]: time="2025-05-17T00:10:05.933570485Z" level=info msg="StartContainer for \"16b27db4f29ec172a1b389fd4655d4c00a2ed4f2c932d251341539949175a6bf\" returns successfully" May 17 00:10:06.037142 containerd[1478]: time="2025-05-17T00:10:06.037029597Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:10:06.039111 containerd[1478]: time="2025-05-17T00:10:06.038917592Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:10:06.039111 containerd[1478]: time="2025-05-17T00:10:06.039060994Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 17 00:10:06.039391 kubelet[2663]: E0517 00:10:06.039254 2663 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:10:06.039391 kubelet[2663]: E0517 00:10:06.039327 2663 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0"
May 17 00:10:06.041636 kubelet[2663]: E0517 00:10:06.041489 2663 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jhfxv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-78d55f7ddc-gp74j_calico-system(662625c5-921c-406a-9ef1-d2e70e33e339): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:10:06.041761 containerd[1478]: time="2025-05-17T00:10:06.041706762Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\""
May 17 00:10:06.043009 kubelet[2663]: E0517 00:10:06.042943 2663 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-gp74j" podUID="662625c5-921c-406a-9ef1-d2e70e33e339" May 17 00:10:06.421350 containerd[1478]: time="2025-05-17T00:10:06.421150844Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:10:06.424285 containerd[1478]: time="2025-05-17T00:10:06.424214900Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.0: active requests=0, bytes read=77" May 17 00:10:06.428455 containerd[1478]: time="2025-05-17T00:10:06.428355935Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" with image id \"sha256:0d503660232383641bf9af3b7e4ef066c0e96a8ec586f123e5b56b6a196c983d\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\", size \"45822470\" in 386.617772ms" May 17 00:10:06.428455 containerd[1478]: time="2025-05-17T00:10:06.428402536Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" returns image reference \"sha256:0d503660232383641bf9af3b7e4ef066c0e96a8ec586f123e5b56b6a196c983d\"" May 17 00:10:06.431111 containerd[1478]: time="2025-05-17T00:10:06.430649657Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\"" May 17 00:10:06.437509 containerd[1478]: time="2025-05-17T00:10:06.437300777Z" level=info msg="CreateContainer within sandbox \"0183b29175c54aeb2849746f6b658eea615ed606e0fdf58e662125f0d99bf3e3\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 17 00:10:06.455986 containerd[1478]: time="2025-05-17T00:10:06.455827233Z" level=info msg="CreateContainer within sandbox \"0183b29175c54aeb2849746f6b658eea615ed606e0fdf58e662125f0d99bf3e3\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"87d6162505912feb3ee9e3d3a5182659f4836b932886120f3c748650f0a6c496\"" May 17 00:10:06.457135 containerd[1478]: time="2025-05-17T00:10:06.457102256Z" level=info msg="StartContainer for \"87d6162505912feb3ee9e3d3a5182659f4836b932886120f3c748650f0a6c496\"" May 17 00:10:06.494736 systemd[1]: Started cri-containerd-87d6162505912feb3ee9e3d3a5182659f4836b932886120f3c748650f0a6c496.scope - libcontainer container 87d6162505912feb3ee9e3d3a5182659f4836b932886120f3c748650f0a6c496.
May 17 00:10:06.571344 containerd[1478]: time="2025-05-17T00:10:06.571297608Z" level=info msg="StartContainer for \"87d6162505912feb3ee9e3d3a5182659f4836b932886120f3c748650f0a6c496\" returns successfully" May 17 00:10:06.884862 kubelet[2663]: E0517 00:10:06.884701 2663 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-gp74j" podUID="662625c5-921c-406a-9ef1-d2e70e33e339" May 17 00:10:06.942409 kubelet[2663]: I0517 00:10:06.941834 2663 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-645f967f78-7knkm" podStartSLOduration=24.167428851 podStartE2EDuration="30.941818248s" podCreationTimestamp="2025-05-17 00:09:36 +0000 UTC" firstStartedPulling="2025-05-17 00:09:59.029602264 +0000 UTC m=+43.619444917" lastFinishedPulling="2025-05-17 00:10:05.803991661 +0000 UTC m=+50.393834314" observedRunningTime="2025-05-17 00:10:06.94137496 +0000 UTC m=+51.531217613" watchObservedRunningTime="2025-05-17 00:10:06.941818248 +0000 UTC m=+51.531660901" May 17 00:10:06.942409 kubelet[2663]: I0517 00:10:06.942263 2663 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-654599565b-lpjvt" podStartSLOduration=27.546401759 podStartE2EDuration="33.942197735s" podCreationTimestamp="2025-05-17 00:09:33 +0000 UTC" firstStartedPulling="2025-05-17 00:10:00.034670197 +0000 UTC m=+44.624512850" lastFinishedPulling="2025-05-17 00:10:06.430466173 +0000 UTC m=+51.020308826" observedRunningTime="2025-05-17 00:10:06.912396954 +0000 UTC m=+51.502239647" watchObservedRunningTime="2025-05-17 00:10:06.942197735 +0000 UTC m=+51.532040388" May 17 00:10:07.879864 kubelet[2663]: I0517 00:10:07.879807 2663 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 00:10:08.553163 containerd[1478]: time="2025-05-17T00:10:08.551397310Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:10:08.554661 containerd[1478]: time="2025-05-17T00:10:08.554621329Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0: active requests=0, bytes read=13749925" May 17 00:10:08.556102 containerd[1478]: time="2025-05-17T00:10:08.556069676Z" level=info msg="ImageCreate event name:\"sha256:a5d5f2a68204ed0dbc50f8778616ee92a63c0e342d178a4620e6271484e5c8b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:10:08.559469 containerd[1478]: time="2025-05-17T00:10:08.559414497Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:dca5c16181edde2e860463615523ce457cd9dcfca85b7cfdcd6f3ea7de6f2ac8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:10:08.560136 containerd[1478]: time="2025-05-17T00:10:08.560095749Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" with image id 
\"sha256:a5d5f2a68204ed0dbc50f8778616ee92a63c0e342d178a4620e6271484e5c8b2\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:dca5c16181edde2e860463615523ce457cd9dcfca85b7cfdcd6f3ea7de6f2ac8\", size \"15119118\" in 2.129408212s" May 17 00:10:08.560136 containerd[1478]: time="2025-05-17T00:10:08.560135510Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" returns image reference \"sha256:a5d5f2a68204ed0dbc50f8778616ee92a63c0e342d178a4620e6271484e5c8b2\"" May 17 00:10:08.563956 containerd[1478]: time="2025-05-17T00:10:08.562604235Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 17 00:10:08.569235 containerd[1478]: time="2025-05-17T00:10:08.569184235Z" level=info msg="CreateContainer within sandbox \"397e814ed288bdc3b2f38604f1a686ae9d938678ae3384aa254dfbc9b065f84d\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" May 17 00:10:08.592959 containerd[1478]: time="2025-05-17T00:10:08.592716384Z" level=info msg="CreateContainer within sandbox \"397e814ed288bdc3b2f38604f1a686ae9d938678ae3384aa254dfbc9b065f84d\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"d69aef918142ee0040abd99922be06eec9dd9dc52385a3356e28763d6ba0d171\"" May 17 00:10:08.594539 containerd[1478]: time="2025-05-17T00:10:08.594161571Z" level=info msg="StartContainer for \"d69aef918142ee0040abd99922be06eec9dd9dc52385a3356e28763d6ba0d171\"" May 17 00:10:08.649642 systemd[1]: Started cri-containerd-d69aef918142ee0040abd99922be06eec9dd9dc52385a3356e28763d6ba0d171.scope - libcontainer container d69aef918142ee0040abd99922be06eec9dd9dc52385a3356e28763d6ba0d171. May 17 00:10:08.705872 containerd[1478]: time="2025-05-17T00:10:08.705225516Z" level=info msg="StartContainer for \"d69aef918142ee0040abd99922be06eec9dd9dc52385a3356e28763d6ba0d171\" returns successfully" May 17 00:10:08.818161 containerd[1478]: time="2025-05-17T00:10:08.817963933Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:10:08.820248 containerd[1478]: time="2025-05-17T00:10:08.820155053Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:10:08.820559 containerd[1478]: time="2025-05-17T00:10:08.820179413Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 17 00:10:08.821112 kubelet[2663]: E0517 00:10:08.820763 2663 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 
00:10:08.821112 kubelet[2663]: E0517 00:10:08.820817 2663 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:10:08.821112 kubelet[2663]: E0517 00:10:08.820955 2663 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:75cc68a695ad45e1ba6d530375dd3c59,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-n5z7t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-57fb66bd94-5kh2z_calico-system(2bf795f3-2e82-40ad-8128-ea6e3a8aa689): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:10:08.824040 containerd[1478]: time="2025-05-17T00:10:08.823991683Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 17 00:10:08.905347 kubelet[2663]: I0517 00:10:08.905262 2663 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-g9lf2" podStartSLOduration=22.526421662 podStartE2EDuration="32.905242405s" podCreationTimestamp="2025-05-17 00:09:36 +0000 UTC" firstStartedPulling="2025-05-17 00:09:58.183286603 +0000 UTC m=+42.773129256" lastFinishedPulling="2025-05-17 00:10:08.562107346 +0000 UTC m=+53.151949999" observedRunningTime="2025-05-17 00:10:08.904025743 +0000 UTC m=+53.493868396" watchObservedRunningTime="2025-05-17 00:10:08.905242405 +0000 UTC m=+53.495085018" May 17 00:10:09.058882 containerd[1478]: time="2025-05-17T00:10:09.058710207Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:10:09.060128 containerd[1478]: time="2025-05-17T00:10:09.059863188Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:10:09.060128 containerd[1478]: time="2025-05-17T00:10:09.059987950Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 17 00:10:09.060260 kubelet[2663]: E0517 00:10:09.060184 2663 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:10:09.060260 kubelet[2663]: E0517 00:10:09.060249 2663 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:10:09.060776 kubelet[2663]: E0517 00:10:09.060386 2663 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n5z7t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-57fb66bd94-5kh2z_calico-system(2bf795f3-2e82-40ad-8128-ea6e3a8aa689): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:10:09.061999 kubelet[2663]: E0517 00:10:09.061949 2663 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-57fb66bd94-5kh2z" podUID="2bf795f3-2e82-40ad-8128-ea6e3a8aa689" May 17 00:10:09.673835 kubelet[2663]: I0517 00:10:09.673695 2663 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: 
csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 May 17 00:10:09.680499 kubelet[2663]: I0517 00:10:09.679546 2663 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock May 17 00:10:09.724481 kubelet[2663]: I0517 00:10:09.724046 2663 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 00:10:15.200916 systemd[1]: run-containerd-runc-k8s.io-16b27db4f29ec172a1b389fd4655d4c00a2ed4f2c932d251341539949175a6bf-runc.dSnaHd.mount: Deactivated successfully. May 17 00:10:15.533179 containerd[1478]: time="2025-05-17T00:10:15.532803842Z" level=info msg="StopPodSandbox for \"18c13d8711afd4b40b972bb45bf741a1e264938dc84ce04796c814c09bc4320a\"" May 17 00:10:15.655229 containerd[1478]: 2025-05-17 00:10:15.586 [WARNING][5202] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="18c13d8711afd4b40b972bb45bf741a1e264938dc84ce04796c814c09bc4320a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--3b0dbcbd78-k8s-calico--apiserver--654599565b--lpjvt-eth0", GenerateName:"calico-apiserver-654599565b-", Namespace:"calico-apiserver", SelfLink:"", UID:"5021f4d9-7c3c-411f-be83-7378c09cd50b", ResourceVersion:"1022", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 9, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"654599565b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-3b0dbcbd78", ContainerID:"0183b29175c54aeb2849746f6b658eea615ed606e0fdf58e662125f0d99bf3e3", Pod:"calico-apiserver-654599565b-lpjvt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.2.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif7019556d94", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:10:15.655229 containerd[1478]: 2025-05-17 00:10:15.587 [INFO][5202] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="18c13d8711afd4b40b972bb45bf741a1e264938dc84ce04796c814c09bc4320a" May 17 00:10:15.655229 containerd[1478]: 2025-05-17 00:10:15.587 [INFO][5202] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="18c13d8711afd4b40b972bb45bf741a1e264938dc84ce04796c814c09bc4320a" iface="eth0" netns="" May 17 00:10:15.655229 containerd[1478]: 2025-05-17 00:10:15.587 [INFO][5202] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="18c13d8711afd4b40b972bb45bf741a1e264938dc84ce04796c814c09bc4320a" May 17 00:10:15.655229 containerd[1478]: 2025-05-17 00:10:15.587 [INFO][5202] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="18c13d8711afd4b40b972bb45bf741a1e264938dc84ce04796c814c09bc4320a" May 17 00:10:15.655229 containerd[1478]: 2025-05-17 00:10:15.628 [INFO][5209] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="18c13d8711afd4b40b972bb45bf741a1e264938dc84ce04796c814c09bc4320a" HandleID="k8s-pod-network.18c13d8711afd4b40b972bb45bf741a1e264938dc84ce04796c814c09bc4320a" Workload="ci--4081--3--3--n--3b0dbcbd78-k8s-calico--apiserver--654599565b--lpjvt-eth0" May 17 00:10:15.655229 containerd[1478]: 2025-05-17 00:10:15.628 [INFO][5209] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:10:15.655229 containerd[1478]: 2025-05-17 00:10:15.629 [INFO][5209] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:10:15.655229 containerd[1478]: 2025-05-17 00:10:15.643 [WARNING][5209] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="18c13d8711afd4b40b972bb45bf741a1e264938dc84ce04796c814c09bc4320a" HandleID="k8s-pod-network.18c13d8711afd4b40b972bb45bf741a1e264938dc84ce04796c814c09bc4320a" Workload="ci--4081--3--3--n--3b0dbcbd78-k8s-calico--apiserver--654599565b--lpjvt-eth0" May 17 00:10:15.655229 containerd[1478]: 2025-05-17 00:10:15.643 [INFO][5209] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="18c13d8711afd4b40b972bb45bf741a1e264938dc84ce04796c814c09bc4320a" HandleID="k8s-pod-network.18c13d8711afd4b40b972bb45bf741a1e264938dc84ce04796c814c09bc4320a" Workload="ci--4081--3--3--n--3b0dbcbd78-k8s-calico--apiserver--654599565b--lpjvt-eth0" May 17 00:10:15.655229 containerd[1478]: 2025-05-17 00:10:15.646 [INFO][5209] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:10:15.655229 containerd[1478]: 2025-05-17 00:10:15.652 [INFO][5202] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="18c13d8711afd4b40b972bb45bf741a1e264938dc84ce04796c814c09bc4320a" May 17 00:10:15.656054 containerd[1478]: time="2025-05-17T00:10:15.655996727Z" level=info msg="TearDown network for sandbox \"18c13d8711afd4b40b972bb45bf741a1e264938dc84ce04796c814c09bc4320a\" successfully" May 17 00:10:15.656234 containerd[1478]: time="2025-05-17T00:10:15.656218011Z" level=info msg="StopPodSandbox for \"18c13d8711afd4b40b972bb45bf741a1e264938dc84ce04796c814c09bc4320a\" returns successfully" May 17 00:10:15.657623 containerd[1478]: time="2025-05-17T00:10:15.657592997Z" level=info msg="RemovePodSandbox for \"18c13d8711afd4b40b972bb45bf741a1e264938dc84ce04796c814c09bc4320a\"" May 17 00:10:15.667537 containerd[1478]: time="2025-05-17T00:10:15.667118574Z" level=info msg="Forcibly stopping sandbox \"18c13d8711afd4b40b972bb45bf741a1e264938dc84ce04796c814c09bc4320a\"" May 17 00:10:15.761279 containerd[1478]: 2025-05-17 00:10:15.710 [WARNING][5224] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="18c13d8711afd4b40b972bb45bf741a1e264938dc84ce04796c814c09bc4320a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--3b0dbcbd78-k8s-calico--apiserver--654599565b--lpjvt-eth0", GenerateName:"calico-apiserver-654599565b-", Namespace:"calico-apiserver", SelfLink:"", UID:"5021f4d9-7c3c-411f-be83-7378c09cd50b", ResourceVersion:"1022", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 9, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"654599565b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-3b0dbcbd78", ContainerID:"0183b29175c54aeb2849746f6b658eea615ed606e0fdf58e662125f0d99bf3e3", Pod:"calico-apiserver-654599565b-lpjvt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.2.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif7019556d94", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:10:15.761279 containerd[1478]: 2025-05-17 00:10:15.710 [INFO][5224] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="18c13d8711afd4b40b972bb45bf741a1e264938dc84ce04796c814c09bc4320a" May 17 00:10:15.761279 containerd[1478]: 2025-05-17 00:10:15.710 [INFO][5224] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="18c13d8711afd4b40b972bb45bf741a1e264938dc84ce04796c814c09bc4320a" iface="eth0" netns="" May 17 00:10:15.761279 containerd[1478]: 2025-05-17 00:10:15.710 [INFO][5224] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="18c13d8711afd4b40b972bb45bf741a1e264938dc84ce04796c814c09bc4320a" May 17 00:10:15.761279 containerd[1478]: 2025-05-17 00:10:15.710 [INFO][5224] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="18c13d8711afd4b40b972bb45bf741a1e264938dc84ce04796c814c09bc4320a" May 17 00:10:15.761279 containerd[1478]: 2025-05-17 00:10:15.740 [INFO][5233] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="18c13d8711afd4b40b972bb45bf741a1e264938dc84ce04796c814c09bc4320a" HandleID="k8s-pod-network.18c13d8711afd4b40b972bb45bf741a1e264938dc84ce04796c814c09bc4320a" Workload="ci--4081--3--3--n--3b0dbcbd78-k8s-calico--apiserver--654599565b--lpjvt-eth0" May 17 00:10:15.761279 containerd[1478]: 2025-05-17 00:10:15.741 [INFO][5233] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:10:15.761279 containerd[1478]: 2025-05-17 00:10:15.741 [INFO][5233] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:10:15.761279 containerd[1478]: 2025-05-17 00:10:15.753 [WARNING][5233] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="18c13d8711afd4b40b972bb45bf741a1e264938dc84ce04796c814c09bc4320a" HandleID="k8s-pod-network.18c13d8711afd4b40b972bb45bf741a1e264938dc84ce04796c814c09bc4320a" Workload="ci--4081--3--3--n--3b0dbcbd78-k8s-calico--apiserver--654599565b--lpjvt-eth0" May 17 00:10:15.761279 containerd[1478]: 2025-05-17 00:10:15.754 [INFO][5233] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="18c13d8711afd4b40b972bb45bf741a1e264938dc84ce04796c814c09bc4320a" HandleID="k8s-pod-network.18c13d8711afd4b40b972bb45bf741a1e264938dc84ce04796c814c09bc4320a" Workload="ci--4081--3--3--n--3b0dbcbd78-k8s-calico--apiserver--654599565b--lpjvt-eth0" May 17 00:10:15.761279 containerd[1478]: 2025-05-17 00:10:15.756 [INFO][5233] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:10:15.761279 containerd[1478]: 2025-05-17 00:10:15.758 [INFO][5224] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="18c13d8711afd4b40b972bb45bf741a1e264938dc84ce04796c814c09bc4320a" May 17 00:10:15.762799 containerd[1478]: time="2025-05-17T00:10:15.762512464Z" level=info msg="TearDown network for sandbox \"18c13d8711afd4b40b972bb45bf741a1e264938dc84ce04796c814c09bc4320a\" successfully" May 17 00:10:15.769882 containerd[1478]: time="2025-05-17T00:10:15.769625756Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"18c13d8711afd4b40b972bb45bf741a1e264938dc84ce04796c814c09bc4320a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:10:15.769882 containerd[1478]: time="2025-05-17T00:10:15.769747358Z" level=info msg="RemovePodSandbox \"18c13d8711afd4b40b972bb45bf741a1e264938dc84ce04796c814c09bc4320a\" returns successfully" May 17 00:10:15.773201 containerd[1478]: time="2025-05-17T00:10:15.773160301Z" level=info msg="StopPodSandbox for \"64b5464b82b71cf5c7889a3d48c6d134a2961189460abaf414c6c3ef08da76d4\"" May 17 00:10:15.903902 containerd[1478]: 2025-05-17 00:10:15.842 [WARNING][5249] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="64b5464b82b71cf5c7889a3d48c6d134a2961189460abaf414c6c3ef08da76d4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--3b0dbcbd78-k8s-coredns--674b8bbfcf--2pp2r-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"6123d484-fe1b-4bcd-a0b1-5639f75795e9", ResourceVersion:"983", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 9, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-3b0dbcbd78", ContainerID:"60ab39a3cfe8e18880a85bd01fbed5dc68e692c73afbde601fecc06db3f86d78", Pod:"coredns-674b8bbfcf-2pp2r", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.2.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0336039be35", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:10:15.903902 containerd[1478]: 2025-05-17 00:10:15.843 [INFO][5249] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="64b5464b82b71cf5c7889a3d48c6d134a2961189460abaf414c6c3ef08da76d4" May 17 00:10:15.903902 containerd[1478]: 2025-05-17 00:10:15.843 [INFO][5249] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="64b5464b82b71cf5c7889a3d48c6d134a2961189460abaf414c6c3ef08da76d4" iface="eth0" netns="" May 17 00:10:15.903902 containerd[1478]: 2025-05-17 00:10:15.843 [INFO][5249] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="64b5464b82b71cf5c7889a3d48c6d134a2961189460abaf414c6c3ef08da76d4" May 17 00:10:15.903902 containerd[1478]: 2025-05-17 00:10:15.843 [INFO][5249] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="64b5464b82b71cf5c7889a3d48c6d134a2961189460abaf414c6c3ef08da76d4" May 17 00:10:15.903902 containerd[1478]: 2025-05-17 00:10:15.885 [INFO][5257] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="64b5464b82b71cf5c7889a3d48c6d134a2961189460abaf414c6c3ef08da76d4" HandleID="k8s-pod-network.64b5464b82b71cf5c7889a3d48c6d134a2961189460abaf414c6c3ef08da76d4" Workload="ci--4081--3--3--n--3b0dbcbd78-k8s-coredns--674b8bbfcf--2pp2r-eth0" May 17 00:10:15.903902 containerd[1478]: 2025-05-17 00:10:15.886 [INFO][5257] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:10:15.903902 containerd[1478]: 2025-05-17 00:10:15.886 [INFO][5257] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:10:15.903902 containerd[1478]: 2025-05-17 00:10:15.896 [WARNING][5257] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="64b5464b82b71cf5c7889a3d48c6d134a2961189460abaf414c6c3ef08da76d4" HandleID="k8s-pod-network.64b5464b82b71cf5c7889a3d48c6d134a2961189460abaf414c6c3ef08da76d4" Workload="ci--4081--3--3--n--3b0dbcbd78-k8s-coredns--674b8bbfcf--2pp2r-eth0" May 17 00:10:15.903902 containerd[1478]: 2025-05-17 00:10:15.897 [INFO][5257] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="64b5464b82b71cf5c7889a3d48c6d134a2961189460abaf414c6c3ef08da76d4" HandleID="k8s-pod-network.64b5464b82b71cf5c7889a3d48c6d134a2961189460abaf414c6c3ef08da76d4" Workload="ci--4081--3--3--n--3b0dbcbd78-k8s-coredns--674b8bbfcf--2pp2r-eth0" May 17 00:10:15.903902 containerd[1478]: 2025-05-17 00:10:15.900 [INFO][5257] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:10:15.903902 containerd[1478]: 2025-05-17 00:10:15.902 [INFO][5249] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="64b5464b82b71cf5c7889a3d48c6d134a2961189460abaf414c6c3ef08da76d4" May 17 00:10:15.903902 containerd[1478]: time="2025-05-17T00:10:15.903841486Z" level=info msg="TearDown network for sandbox \"64b5464b82b71cf5c7889a3d48c6d134a2961189460abaf414c6c3ef08da76d4\" successfully" May 17 00:10:15.903902 containerd[1478]: time="2025-05-17T00:10:15.903872046Z" level=info msg="StopPodSandbox for \"64b5464b82b71cf5c7889a3d48c6d134a2961189460abaf414c6c3ef08da76d4\" returns successfully" May 17 00:10:15.905852 containerd[1478]: time="2025-05-17T00:10:15.905284273Z" level=info msg="RemovePodSandbox for \"64b5464b82b71cf5c7889a3d48c6d134a2961189460abaf414c6c3ef08da76d4\"" May 17 00:10:15.905852 containerd[1478]: time="2025-05-17T00:10:15.905320673Z" level=info msg="Forcibly stopping sandbox \"64b5464b82b71cf5c7889a3d48c6d134a2961189460abaf414c6c3ef08da76d4\"" May 17 00:10:16.012302 containerd[1478]: 2025-05-17 00:10:15.956 [WARNING][5271] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="64b5464b82b71cf5c7889a3d48c6d134a2961189460abaf414c6c3ef08da76d4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--3b0dbcbd78-k8s-coredns--674b8bbfcf--2pp2r-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"6123d484-fe1b-4bcd-a0b1-5639f75795e9", ResourceVersion:"983", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 9, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-3b0dbcbd78", ContainerID:"60ab39a3cfe8e18880a85bd01fbed5dc68e692c73afbde601fecc06db3f86d78", Pod:"coredns-674b8bbfcf-2pp2r", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.2.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0336039be35", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:10:16.012302 containerd[1478]: 2025-05-17 00:10:15.956 [INFO][5271] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="64b5464b82b71cf5c7889a3d48c6d134a2961189460abaf414c6c3ef08da76d4" May 17 00:10:16.012302 containerd[1478]: 2025-05-17 00:10:15.956 [INFO][5271] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="64b5464b82b71cf5c7889a3d48c6d134a2961189460abaf414c6c3ef08da76d4" iface="eth0" netns="" May 17 00:10:16.012302 containerd[1478]: 2025-05-17 00:10:15.956 [INFO][5271] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="64b5464b82b71cf5c7889a3d48c6d134a2961189460abaf414c6c3ef08da76d4" May 17 00:10:16.012302 containerd[1478]: 2025-05-17 00:10:15.956 [INFO][5271] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="64b5464b82b71cf5c7889a3d48c6d134a2961189460abaf414c6c3ef08da76d4" May 17 00:10:16.012302 containerd[1478]: 2025-05-17 00:10:15.993 [INFO][5278] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="64b5464b82b71cf5c7889a3d48c6d134a2961189460abaf414c6c3ef08da76d4" HandleID="k8s-pod-network.64b5464b82b71cf5c7889a3d48c6d134a2961189460abaf414c6c3ef08da76d4" Workload="ci--4081--3--3--n--3b0dbcbd78-k8s-coredns--674b8bbfcf--2pp2r-eth0" May 17 00:10:16.012302 containerd[1478]: 2025-05-17 00:10:15.993 [INFO][5278] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:10:16.012302 containerd[1478]: 2025-05-17 00:10:15.993 [INFO][5278] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:10:16.012302 containerd[1478]: 2025-05-17 00:10:16.005 [WARNING][5278] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="64b5464b82b71cf5c7889a3d48c6d134a2961189460abaf414c6c3ef08da76d4" HandleID="k8s-pod-network.64b5464b82b71cf5c7889a3d48c6d134a2961189460abaf414c6c3ef08da76d4" Workload="ci--4081--3--3--n--3b0dbcbd78-k8s-coredns--674b8bbfcf--2pp2r-eth0" May 17 00:10:16.012302 containerd[1478]: 2025-05-17 00:10:16.005 [INFO][5278] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="64b5464b82b71cf5c7889a3d48c6d134a2961189460abaf414c6c3ef08da76d4" HandleID="k8s-pod-network.64b5464b82b71cf5c7889a3d48c6d134a2961189460abaf414c6c3ef08da76d4" Workload="ci--4081--3--3--n--3b0dbcbd78-k8s-coredns--674b8bbfcf--2pp2r-eth0" May 17 00:10:16.012302 containerd[1478]: 2025-05-17 00:10:16.007 [INFO][5278] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:10:16.012302 containerd[1478]: 2025-05-17 00:10:16.010 [INFO][5271] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="64b5464b82b71cf5c7889a3d48c6d134a2961189460abaf414c6c3ef08da76d4" May 17 00:10:16.012748 containerd[1478]: time="2025-05-17T00:10:16.012350180Z" level=info msg="TearDown network for sandbox \"64b5464b82b71cf5c7889a3d48c6d134a2961189460abaf414c6c3ef08da76d4\" successfully" May 17 00:10:16.016868 containerd[1478]: time="2025-05-17T00:10:16.016741541Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"64b5464b82b71cf5c7889a3d48c6d134a2961189460abaf414c6c3ef08da76d4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:10:16.017035 containerd[1478]: time="2025-05-17T00:10:16.016893984Z" level=info msg="RemovePodSandbox \"64b5464b82b71cf5c7889a3d48c6d134a2961189460abaf414c6c3ef08da76d4\" returns successfully" May 17 00:10:16.017713 containerd[1478]: time="2025-05-17T00:10:16.017680599Z" level=info msg="StopPodSandbox for \"b76f8d3124745aa4c30e56df04e5023358d57e8d0bd995d470885425a0a5119f\"" May 17 00:10:16.125652 containerd[1478]: 2025-05-17 00:10:16.069 [WARNING][5292] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="b76f8d3124745aa4c30e56df04e5023358d57e8d0bd995d470885425a0a5119f" WorkloadEndpoint="ci--4081--3--3--n--3b0dbcbd78-k8s-whisker--b858dbd84--d62lt-eth0" May 17 00:10:16.125652 containerd[1478]: 2025-05-17 00:10:16.070 [INFO][5292] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b76f8d3124745aa4c30e56df04e5023358d57e8d0bd995d470885425a0a5119f" May 17 00:10:16.125652 containerd[1478]: 2025-05-17 00:10:16.070 [INFO][5292] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="b76f8d3124745aa4c30e56df04e5023358d57e8d0bd995d470885425a0a5119f" iface="eth0" netns="" May 17 00:10:16.125652 containerd[1478]: 2025-05-17 00:10:16.070 [INFO][5292] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b76f8d3124745aa4c30e56df04e5023358d57e8d0bd995d470885425a0a5119f" May 17 00:10:16.125652 containerd[1478]: 2025-05-17 00:10:16.070 [INFO][5292] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b76f8d3124745aa4c30e56df04e5023358d57e8d0bd995d470885425a0a5119f" May 17 00:10:16.125652 containerd[1478]: 2025-05-17 00:10:16.103 [INFO][5299] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b76f8d3124745aa4c30e56df04e5023358d57e8d0bd995d470885425a0a5119f" HandleID="k8s-pod-network.b76f8d3124745aa4c30e56df04e5023358d57e8d0bd995d470885425a0a5119f" Workload="ci--4081--3--3--n--3b0dbcbd78-k8s-whisker--b858dbd84--d62lt-eth0" May 17 00:10:16.125652 containerd[1478]: 2025-05-17 00:10:16.104 [INFO][5299] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:10:16.125652 containerd[1478]: 2025-05-17 00:10:16.104 [INFO][5299] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:10:16.125652 containerd[1478]: 2025-05-17 00:10:16.118 [WARNING][5299] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b76f8d3124745aa4c30e56df04e5023358d57e8d0bd995d470885425a0a5119f" HandleID="k8s-pod-network.b76f8d3124745aa4c30e56df04e5023358d57e8d0bd995d470885425a0a5119f" Workload="ci--4081--3--3--n--3b0dbcbd78-k8s-whisker--b858dbd84--d62lt-eth0" May 17 00:10:16.125652 containerd[1478]: 2025-05-17 00:10:16.118 [INFO][5299] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b76f8d3124745aa4c30e56df04e5023358d57e8d0bd995d470885425a0a5119f" HandleID="k8s-pod-network.b76f8d3124745aa4c30e56df04e5023358d57e8d0bd995d470885425a0a5119f" Workload="ci--4081--3--3--n--3b0dbcbd78-k8s-whisker--b858dbd84--d62lt-eth0" May 17 00:10:16.125652 containerd[1478]: 2025-05-17 00:10:16.120 [INFO][5299] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:10:16.125652 containerd[1478]: 2025-05-17 00:10:16.124 [INFO][5292] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="b76f8d3124745aa4c30e56df04e5023358d57e8d0bd995d470885425a0a5119f" May 17 00:10:16.125652 containerd[1478]: time="2025-05-17T00:10:16.125646046Z" level=info msg="TearDown network for sandbox \"b76f8d3124745aa4c30e56df04e5023358d57e8d0bd995d470885425a0a5119f\" successfully" May 17 00:10:16.126786 containerd[1478]: time="2025-05-17T00:10:16.125672847Z" level=info msg="StopPodSandbox for \"b76f8d3124745aa4c30e56df04e5023358d57e8d0bd995d470885425a0a5119f\" returns successfully" May 17 00:10:16.126786 containerd[1478]: time="2025-05-17T00:10:16.126210977Z" level=info msg="RemovePodSandbox for \"b76f8d3124745aa4c30e56df04e5023358d57e8d0bd995d470885425a0a5119f\"" May 17 00:10:16.126786 containerd[1478]: time="2025-05-17T00:10:16.126242217Z" level=info msg="Forcibly stopping sandbox \"b76f8d3124745aa4c30e56df04e5023358d57e8d0bd995d470885425a0a5119f\"" May 17 00:10:16.227346 containerd[1478]: 2025-05-17 00:10:16.177 [WARNING][5313] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="b76f8d3124745aa4c30e56df04e5023358d57e8d0bd995d470885425a0a5119f" WorkloadEndpoint="ci--4081--3--3--n--3b0dbcbd78-k8s-whisker--b858dbd84--d62lt-eth0" May 17 00:10:16.227346 containerd[1478]: 2025-05-17 00:10:16.178 [INFO][5313] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b76f8d3124745aa4c30e56df04e5023358d57e8d0bd995d470885425a0a5119f" May 17 00:10:16.227346 containerd[1478]: 2025-05-17 00:10:16.178 [INFO][5313] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b76f8d3124745aa4c30e56df04e5023358d57e8d0bd995d470885425a0a5119f" iface="eth0" netns="" May 17 00:10:16.227346 containerd[1478]: 2025-05-17 00:10:16.178 [INFO][5313] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b76f8d3124745aa4c30e56df04e5023358d57e8d0bd995d470885425a0a5119f" May 17 00:10:16.227346 containerd[1478]: 2025-05-17 00:10:16.178 [INFO][5313] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b76f8d3124745aa4c30e56df04e5023358d57e8d0bd995d470885425a0a5119f" May 17 00:10:16.227346 containerd[1478]: 2025-05-17 00:10:16.208 [INFO][5320] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b76f8d3124745aa4c30e56df04e5023358d57e8d0bd995d470885425a0a5119f" HandleID="k8s-pod-network.b76f8d3124745aa4c30e56df04e5023358d57e8d0bd995d470885425a0a5119f" Workload="ci--4081--3--3--n--3b0dbcbd78-k8s-whisker--b858dbd84--d62lt-eth0" May 17 00:10:16.227346 containerd[1478]: 2025-05-17 00:10:16.209 [INFO][5320] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:10:16.227346 containerd[1478]: 2025-05-17 00:10:16.209 [INFO][5320] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:10:16.227346 containerd[1478]: 2025-05-17 00:10:16.220 [WARNING][5320] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b76f8d3124745aa4c30e56df04e5023358d57e8d0bd995d470885425a0a5119f" HandleID="k8s-pod-network.b76f8d3124745aa4c30e56df04e5023358d57e8d0bd995d470885425a0a5119f" Workload="ci--4081--3--3--n--3b0dbcbd78-k8s-whisker--b858dbd84--d62lt-eth0" May 17 00:10:16.227346 containerd[1478]: 2025-05-17 00:10:16.220 [INFO][5320] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b76f8d3124745aa4c30e56df04e5023358d57e8d0bd995d470885425a0a5119f" HandleID="k8s-pod-network.b76f8d3124745aa4c30e56df04e5023358d57e8d0bd995d470885425a0a5119f" Workload="ci--4081--3--3--n--3b0dbcbd78-k8s-whisker--b858dbd84--d62lt-eth0" May 17 00:10:16.227346 containerd[1478]: 2025-05-17 00:10:16.222 [INFO][5320] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:10:16.227346 containerd[1478]: 2025-05-17 00:10:16.224 [INFO][5313] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b76f8d3124745aa4c30e56df04e5023358d57e8d0bd995d470885425a0a5119f" May 17 00:10:16.227346 containerd[1478]: time="2025-05-17T00:10:16.226203076Z" level=info msg="TearDown network for sandbox \"b76f8d3124745aa4c30e56df04e5023358d57e8d0bd995d470885425a0a5119f\" successfully" May 17 00:10:16.233336 containerd[1478]: time="2025-05-17T00:10:16.232929041Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b76f8d3124745aa4c30e56df04e5023358d57e8d0bd995d470885425a0a5119f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:10:16.233336 containerd[1478]: time="2025-05-17T00:10:16.233025083Z" level=info msg="RemovePodSandbox \"b76f8d3124745aa4c30e56df04e5023358d57e8d0bd995d470885425a0a5119f\" returns successfully" May 17 00:10:16.234085 containerd[1478]: time="2025-05-17T00:10:16.233511972Z" level=info msg="StopPodSandbox for \"80dcc9479798289492b29345865fd69c904dec7b796e1b7f06108c5e7b5a8227\"" May 17 00:10:16.351837 containerd[1478]: 2025-05-17 00:10:16.295 [WARNING][5335] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="80dcc9479798289492b29345865fd69c904dec7b796e1b7f06108c5e7b5a8227" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--3b0dbcbd78-k8s-calico--kube--controllers--645f967f78--7knkm-eth0", GenerateName:"calico-kube-controllers-645f967f78-", Namespace:"calico-system", SelfLink:"", UID:"e26e8c4a-c5e6-41d8-924e-19ce27b5f4cd", ResourceVersion:"1030", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 9, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"645f967f78", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-3b0dbcbd78", ContainerID:"e4691f2defc0e33fed4422e706ca0bfa4248b820561b8ae0221895b62213e76f", Pod:"calico-kube-controllers-645f967f78-7knkm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.2.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali18b610d7672", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:10:16.351837 containerd[1478]: 2025-05-17 00:10:16.296 [INFO][5335] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="80dcc9479798289492b29345865fd69c904dec7b796e1b7f06108c5e7b5a8227" May 17 00:10:16.351837 containerd[1478]: 2025-05-17 00:10:16.296 [INFO][5335] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="80dcc9479798289492b29345865fd69c904dec7b796e1b7f06108c5e7b5a8227" iface="eth0" netns="" May 17 00:10:16.351837 containerd[1478]: 2025-05-17 00:10:16.296 [INFO][5335] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="80dcc9479798289492b29345865fd69c904dec7b796e1b7f06108c5e7b5a8227" May 17 00:10:16.351837 containerd[1478]: 2025-05-17 00:10:16.296 [INFO][5335] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="80dcc9479798289492b29345865fd69c904dec7b796e1b7f06108c5e7b5a8227" May 17 00:10:16.351837 containerd[1478]: 2025-05-17 00:10:16.334 [INFO][5342] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="80dcc9479798289492b29345865fd69c904dec7b796e1b7f06108c5e7b5a8227" HandleID="k8s-pod-network.80dcc9479798289492b29345865fd69c904dec7b796e1b7f06108c5e7b5a8227" Workload="ci--4081--3--3--n--3b0dbcbd78-k8s-calico--kube--controllers--645f967f78--7knkm-eth0" May 17 00:10:16.351837 containerd[1478]: 2025-05-17 00:10:16.335 [INFO][5342] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:10:16.351837 containerd[1478]: 2025-05-17 00:10:16.335 [INFO][5342] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:10:16.351837 containerd[1478]: 2025-05-17 00:10:16.345 [WARNING][5342] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="80dcc9479798289492b29345865fd69c904dec7b796e1b7f06108c5e7b5a8227" HandleID="k8s-pod-network.80dcc9479798289492b29345865fd69c904dec7b796e1b7f06108c5e7b5a8227" Workload="ci--4081--3--3--n--3b0dbcbd78-k8s-calico--kube--controllers--645f967f78--7knkm-eth0" May 17 00:10:16.351837 containerd[1478]: 2025-05-17 00:10:16.345 [INFO][5342] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="80dcc9479798289492b29345865fd69c904dec7b796e1b7f06108c5e7b5a8227" HandleID="k8s-pod-network.80dcc9479798289492b29345865fd69c904dec7b796e1b7f06108c5e7b5a8227" Workload="ci--4081--3--3--n--3b0dbcbd78-k8s-calico--kube--controllers--645f967f78--7knkm-eth0" May 17 00:10:16.351837 containerd[1478]: 2025-05-17 00:10:16.347 [INFO][5342] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:10:16.351837 containerd[1478]: 2025-05-17 00:10:16.350 [INFO][5335] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="80dcc9479798289492b29345865fd69c904dec7b796e1b7f06108c5e7b5a8227" May 17 00:10:16.351837 containerd[1478]: time="2025-05-17T00:10:16.351709249Z" level=info msg="TearDown network for sandbox \"80dcc9479798289492b29345865fd69c904dec7b796e1b7f06108c5e7b5a8227\" successfully" May 17 00:10:16.351837 containerd[1478]: time="2025-05-17T00:10:16.351734530Z" level=info msg="StopPodSandbox for \"80dcc9479798289492b29345865fd69c904dec7b796e1b7f06108c5e7b5a8227\" returns successfully" May 17 00:10:16.355040 containerd[1478]: time="2025-05-17T00:10:16.354549862Z" level=info msg="RemovePodSandbox for \"80dcc9479798289492b29345865fd69c904dec7b796e1b7f06108c5e7b5a8227\"" May 17 00:10:16.355040 containerd[1478]: time="2025-05-17T00:10:16.354681265Z" level=info msg="Forcibly stopping sandbox \"80dcc9479798289492b29345865fd69c904dec7b796e1b7f06108c5e7b5a8227\"" May 17 00:10:16.475111 containerd[1478]: 2025-05-17 00:10:16.420 [WARNING][5356] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="80dcc9479798289492b29345865fd69c904dec7b796e1b7f06108c5e7b5a8227" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--3b0dbcbd78-k8s-calico--kube--controllers--645f967f78--7knkm-eth0", GenerateName:"calico-kube-controllers-645f967f78-", Namespace:"calico-system", SelfLink:"", UID:"e26e8c4a-c5e6-41d8-924e-19ce27b5f4cd", ResourceVersion:"1030", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 9, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"645f967f78", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-3b0dbcbd78", ContainerID:"e4691f2defc0e33fed4422e706ca0bfa4248b820561b8ae0221895b62213e76f", Pod:"calico-kube-controllers-645f967f78-7knkm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.2.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali18b610d7672", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:10:16.475111 containerd[1478]: 2025-05-17 00:10:16.420 [INFO][5356] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="80dcc9479798289492b29345865fd69c904dec7b796e1b7f06108c5e7b5a8227" May 17 00:10:16.475111 containerd[1478]: 2025-05-17 00:10:16.420 [INFO][5356] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="80dcc9479798289492b29345865fd69c904dec7b796e1b7f06108c5e7b5a8227" iface="eth0" netns="" May 17 00:10:16.475111 containerd[1478]: 2025-05-17 00:10:16.420 [INFO][5356] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="80dcc9479798289492b29345865fd69c904dec7b796e1b7f06108c5e7b5a8227" May 17 00:10:16.475111 containerd[1478]: 2025-05-17 00:10:16.420 [INFO][5356] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="80dcc9479798289492b29345865fd69c904dec7b796e1b7f06108c5e7b5a8227" May 17 00:10:16.475111 containerd[1478]: 2025-05-17 00:10:16.453 [INFO][5363] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="80dcc9479798289492b29345865fd69c904dec7b796e1b7f06108c5e7b5a8227" HandleID="k8s-pod-network.80dcc9479798289492b29345865fd69c904dec7b796e1b7f06108c5e7b5a8227" Workload="ci--4081--3--3--n--3b0dbcbd78-k8s-calico--kube--controllers--645f967f78--7knkm-eth0" May 17 00:10:16.475111 containerd[1478]: 2025-05-17 00:10:16.453 [INFO][5363] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:10:16.475111 containerd[1478]: 2025-05-17 00:10:16.453 [INFO][5363] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:10:16.475111 containerd[1478]: 2025-05-17 00:10:16.468 [WARNING][5363] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="80dcc9479798289492b29345865fd69c904dec7b796e1b7f06108c5e7b5a8227" HandleID="k8s-pod-network.80dcc9479798289492b29345865fd69c904dec7b796e1b7f06108c5e7b5a8227" Workload="ci--4081--3--3--n--3b0dbcbd78-k8s-calico--kube--controllers--645f967f78--7knkm-eth0" May 17 00:10:16.475111 containerd[1478]: 2025-05-17 00:10:16.468 [INFO][5363] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="80dcc9479798289492b29345865fd69c904dec7b796e1b7f06108c5e7b5a8227" HandleID="k8s-pod-network.80dcc9479798289492b29345865fd69c904dec7b796e1b7f06108c5e7b5a8227" Workload="ci--4081--3--3--n--3b0dbcbd78-k8s-calico--kube--controllers--645f967f78--7knkm-eth0" May 17 00:10:16.475111 containerd[1478]: 2025-05-17 00:10:16.470 [INFO][5363] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:10:16.475111 containerd[1478]: 2025-05-17 00:10:16.473 [INFO][5356] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="80dcc9479798289492b29345865fd69c904dec7b796e1b7f06108c5e7b5a8227" May 17 00:10:16.476588 containerd[1478]: time="2025-05-17T00:10:16.475702715Z" level=info msg="TearDown network for sandbox \"80dcc9479798289492b29345865fd69c904dec7b796e1b7f06108c5e7b5a8227\" successfully" May 17 00:10:16.480815 containerd[1478]: time="2025-05-17T00:10:16.479801471Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"80dcc9479798289492b29345865fd69c904dec7b796e1b7f06108c5e7b5a8227\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:10:16.481054 containerd[1478]: time="2025-05-17T00:10:16.481028254Z" level=info msg="RemovePodSandbox \"80dcc9479798289492b29345865fd69c904dec7b796e1b7f06108c5e7b5a8227\" returns successfully" May 17 00:10:16.482988 containerd[1478]: time="2025-05-17T00:10:16.482677485Z" level=info msg="StopPodSandbox for \"48967515e7ac1ae937a4c1d9fb98f9118c2ae302024bc4824d15d9e91f8bfa36\"" May 17 00:10:16.578761 containerd[1478]: 2025-05-17 00:10:16.532 [WARNING][5377] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="48967515e7ac1ae937a4c1d9fb98f9118c2ae302024bc4824d15d9e91f8bfa36" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--3b0dbcbd78-k8s-goldmane--78d55f7ddc--gp74j-eth0", GenerateName:"goldmane-78d55f7ddc-", Namespace:"calico-system", SelfLink:"", UID:"662625c5-921c-406a-9ef1-d2e70e33e339", ResourceVersion:"1027", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 9, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"78d55f7ddc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-3b0dbcbd78", ContainerID:"4480f08a55ea7f16a9727a670d1d7c413bfacd407b0d80da07415f3715c5e018", Pod:"goldmane-78d55f7ddc-gp74j", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.2.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali1e62d21233b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:10:16.578761 containerd[1478]: 2025-05-17 00:10:16.532 [INFO][5377] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="48967515e7ac1ae937a4c1d9fb98f9118c2ae302024bc4824d15d9e91f8bfa36" May 17 00:10:16.578761 containerd[1478]: 2025-05-17 00:10:16.532 [INFO][5377] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="48967515e7ac1ae937a4c1d9fb98f9118c2ae302024bc4824d15d9e91f8bfa36" iface="eth0" netns="" May 17 00:10:16.578761 containerd[1478]: 2025-05-17 00:10:16.532 [INFO][5377] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="48967515e7ac1ae937a4c1d9fb98f9118c2ae302024bc4824d15d9e91f8bfa36" May 17 00:10:16.578761 containerd[1478]: 2025-05-17 00:10:16.532 [INFO][5377] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="48967515e7ac1ae937a4c1d9fb98f9118c2ae302024bc4824d15d9e91f8bfa36" May 17 00:10:16.578761 containerd[1478]: 2025-05-17 00:10:16.558 [INFO][5384] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="48967515e7ac1ae937a4c1d9fb98f9118c2ae302024bc4824d15d9e91f8bfa36" HandleID="k8s-pod-network.48967515e7ac1ae937a4c1d9fb98f9118c2ae302024bc4824d15d9e91f8bfa36" Workload="ci--4081--3--3--n--3b0dbcbd78-k8s-goldmane--78d55f7ddc--gp74j-eth0" May 17 00:10:16.578761 containerd[1478]: 2025-05-17 00:10:16.559 [INFO][5384] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:10:16.578761 containerd[1478]: 2025-05-17 00:10:16.559 [INFO][5384] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:10:16.578761 containerd[1478]: 2025-05-17 00:10:16.569 [WARNING][5384] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="48967515e7ac1ae937a4c1d9fb98f9118c2ae302024bc4824d15d9e91f8bfa36" HandleID="k8s-pod-network.48967515e7ac1ae937a4c1d9fb98f9118c2ae302024bc4824d15d9e91f8bfa36" Workload="ci--4081--3--3--n--3b0dbcbd78-k8s-goldmane--78d55f7ddc--gp74j-eth0" May 17 00:10:16.578761 containerd[1478]: 2025-05-17 00:10:16.569 [INFO][5384] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="48967515e7ac1ae937a4c1d9fb98f9118c2ae302024bc4824d15d9e91f8bfa36" HandleID="k8s-pod-network.48967515e7ac1ae937a4c1d9fb98f9118c2ae302024bc4824d15d9e91f8bfa36" Workload="ci--4081--3--3--n--3b0dbcbd78-k8s-goldmane--78d55f7ddc--gp74j-eth0" May 17 00:10:16.578761 containerd[1478]: 2025-05-17 00:10:16.572 [INFO][5384] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:10:16.578761 containerd[1478]: 2025-05-17 00:10:16.575 [INFO][5377] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="48967515e7ac1ae937a4c1d9fb98f9118c2ae302024bc4824d15d9e91f8bfa36" May 17 00:10:16.581055 containerd[1478]: time="2025-05-17T00:10:16.579869652Z" level=info msg="TearDown network for sandbox \"48967515e7ac1ae937a4c1d9fb98f9118c2ae302024bc4824d15d9e91f8bfa36\" successfully" May 17 00:10:16.581055 containerd[1478]: time="2025-05-17T00:10:16.579908932Z" level=info msg="StopPodSandbox for \"48967515e7ac1ae937a4c1d9fb98f9118c2ae302024bc4824d15d9e91f8bfa36\" returns successfully" May 17 00:10:16.581055 containerd[1478]: time="2025-05-17T00:10:16.580633546Z" level=info msg="RemovePodSandbox for \"48967515e7ac1ae937a4c1d9fb98f9118c2ae302024bc4824d15d9e91f8bfa36\"" May 17 00:10:16.581055 containerd[1478]: time="2025-05-17T00:10:16.580664386Z" level=info msg="Forcibly stopping sandbox \"48967515e7ac1ae937a4c1d9fb98f9118c2ae302024bc4824d15d9e91f8bfa36\"" May 17 00:10:16.692764 containerd[1478]: 2025-05-17 00:10:16.635 [WARNING][5399] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="48967515e7ac1ae937a4c1d9fb98f9118c2ae302024bc4824d15d9e91f8bfa36" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--3b0dbcbd78-k8s-goldmane--78d55f7ddc--gp74j-eth0", GenerateName:"goldmane-78d55f7ddc-", Namespace:"calico-system", SelfLink:"", UID:"662625c5-921c-406a-9ef1-d2e70e33e339", ResourceVersion:"1027", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 9, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"78d55f7ddc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-3b0dbcbd78", ContainerID:"4480f08a55ea7f16a9727a670d1d7c413bfacd407b0d80da07415f3715c5e018", Pod:"goldmane-78d55f7ddc-gp74j", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.2.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali1e62d21233b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:10:16.692764 containerd[1478]: 2025-05-17 00:10:16.635 [INFO][5399] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="48967515e7ac1ae937a4c1d9fb98f9118c2ae302024bc4824d15d9e91f8bfa36" May 17 00:10:16.692764 containerd[1478]: 2025-05-17 00:10:16.635 [INFO][5399] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="48967515e7ac1ae937a4c1d9fb98f9118c2ae302024bc4824d15d9e91f8bfa36" iface="eth0" netns="" May 17 00:10:16.692764 containerd[1478]: 2025-05-17 00:10:16.636 [INFO][5399] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="48967515e7ac1ae937a4c1d9fb98f9118c2ae302024bc4824d15d9e91f8bfa36" May 17 00:10:16.692764 containerd[1478]: 2025-05-17 00:10:16.636 [INFO][5399] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="48967515e7ac1ae937a4c1d9fb98f9118c2ae302024bc4824d15d9e91f8bfa36" May 17 00:10:16.692764 containerd[1478]: 2025-05-17 00:10:16.663 [INFO][5407] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="48967515e7ac1ae937a4c1d9fb98f9118c2ae302024bc4824d15d9e91f8bfa36" HandleID="k8s-pod-network.48967515e7ac1ae937a4c1d9fb98f9118c2ae302024bc4824d15d9e91f8bfa36" Workload="ci--4081--3--3--n--3b0dbcbd78-k8s-goldmane--78d55f7ddc--gp74j-eth0" May 17 00:10:16.692764 containerd[1478]: 2025-05-17 00:10:16.663 [INFO][5407] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:10:16.692764 containerd[1478]: 2025-05-17 00:10:16.663 [INFO][5407] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:10:16.692764 containerd[1478]: 2025-05-17 00:10:16.684 [WARNING][5407] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="48967515e7ac1ae937a4c1d9fb98f9118c2ae302024bc4824d15d9e91f8bfa36" HandleID="k8s-pod-network.48967515e7ac1ae937a4c1d9fb98f9118c2ae302024bc4824d15d9e91f8bfa36" Workload="ci--4081--3--3--n--3b0dbcbd78-k8s-goldmane--78d55f7ddc--gp74j-eth0" May 17 00:10:16.692764 containerd[1478]: 2025-05-17 00:10:16.684 [INFO][5407] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="48967515e7ac1ae937a4c1d9fb98f9118c2ae302024bc4824d15d9e91f8bfa36" HandleID="k8s-pod-network.48967515e7ac1ae937a4c1d9fb98f9118c2ae302024bc4824d15d9e91f8bfa36" Workload="ci--4081--3--3--n--3b0dbcbd78-k8s-goldmane--78d55f7ddc--gp74j-eth0" May 17 00:10:16.692764 containerd[1478]: 2025-05-17 00:10:16.687 [INFO][5407] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:10:16.692764 containerd[1478]: 2025-05-17 00:10:16.690 [INFO][5399] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="48967515e7ac1ae937a4c1d9fb98f9118c2ae302024bc4824d15d9e91f8bfa36" May 17 00:10:16.694388 containerd[1478]: time="2025-05-17T00:10:16.692741550Z" level=info msg="TearDown network for sandbox \"48967515e7ac1ae937a4c1d9fb98f9118c2ae302024bc4824d15d9e91f8bfa36\" successfully" May 17 00:10:16.712476 containerd[1478]: time="2025-05-17T00:10:16.711939347Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"48967515e7ac1ae937a4c1d9fb98f9118c2ae302024bc4824d15d9e91f8bfa36\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:10:16.712476 containerd[1478]: time="2025-05-17T00:10:16.712050509Z" level=info msg="RemovePodSandbox \"48967515e7ac1ae937a4c1d9fb98f9118c2ae302024bc4824d15d9e91f8bfa36\" returns successfully" May 17 00:10:16.713672 containerd[1478]: time="2025-05-17T00:10:16.712924806Z" level=info msg="StopPodSandbox for \"acccd57f2ac1bc78c3018663394757376b3ac7754d0a8c5a1d496d687f3e7633\"" May 17 00:10:16.813664 containerd[1478]: 2025-05-17 00:10:16.768 [WARNING][5421] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="acccd57f2ac1bc78c3018663394757376b3ac7754d0a8c5a1d496d687f3e7633" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--3b0dbcbd78-k8s-coredns--674b8bbfcf--5rmrz-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"b9015c7c-53fd-4356-9465-8e7b4b00eae6", ResourceVersion:"979", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 9, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-3b0dbcbd78", ContainerID:"2c318a2973455c1f54e469390ebf7a4cd0866c5914bc578eb0d877cd1d83d319", Pod:"coredns-674b8bbfcf-5rmrz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.2.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6adb06cedb7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:10:16.813664 containerd[1478]: 2025-05-17 00:10:16.768 [INFO][5421] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="acccd57f2ac1bc78c3018663394757376b3ac7754d0a8c5a1d496d687f3e7633" May 17 00:10:16.813664 containerd[1478]: 2025-05-17 00:10:16.769 [INFO][5421] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="acccd57f2ac1bc78c3018663394757376b3ac7754d0a8c5a1d496d687f3e7633" iface="eth0" netns="" May 17 00:10:16.813664 containerd[1478]: 2025-05-17 00:10:16.769 [INFO][5421] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="acccd57f2ac1bc78c3018663394757376b3ac7754d0a8c5a1d496d687f3e7633" May 17 00:10:16.813664 containerd[1478]: 2025-05-17 00:10:16.769 [INFO][5421] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="acccd57f2ac1bc78c3018663394757376b3ac7754d0a8c5a1d496d687f3e7633" May 17 00:10:16.813664 containerd[1478]: 2025-05-17 00:10:16.794 [INFO][5429] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="acccd57f2ac1bc78c3018663394757376b3ac7754d0a8c5a1d496d687f3e7633" HandleID="k8s-pod-network.acccd57f2ac1bc78c3018663394757376b3ac7754d0a8c5a1d496d687f3e7633" Workload="ci--4081--3--3--n--3b0dbcbd78-k8s-coredns--674b8bbfcf--5rmrz-eth0" May 17 00:10:16.813664 containerd[1478]: 2025-05-17 00:10:16.794 [INFO][5429] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:10:16.813664 containerd[1478]: 2025-05-17 00:10:16.794 [INFO][5429] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:10:16.813664 containerd[1478]: 2025-05-17 00:10:16.804 [WARNING][5429] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="acccd57f2ac1bc78c3018663394757376b3ac7754d0a8c5a1d496d687f3e7633" HandleID="k8s-pod-network.acccd57f2ac1bc78c3018663394757376b3ac7754d0a8c5a1d496d687f3e7633" Workload="ci--4081--3--3--n--3b0dbcbd78-k8s-coredns--674b8bbfcf--5rmrz-eth0" May 17 00:10:16.813664 containerd[1478]: 2025-05-17 00:10:16.804 [INFO][5429] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="acccd57f2ac1bc78c3018663394757376b3ac7754d0a8c5a1d496d687f3e7633" HandleID="k8s-pod-network.acccd57f2ac1bc78c3018663394757376b3ac7754d0a8c5a1d496d687f3e7633" Workload="ci--4081--3--3--n--3b0dbcbd78-k8s-coredns--674b8bbfcf--5rmrz-eth0" May 17 00:10:16.813664 containerd[1478]: 2025-05-17 00:10:16.807 [INFO][5429] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:10:16.813664 containerd[1478]: 2025-05-17 00:10:16.811 [INFO][5421] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="acccd57f2ac1bc78c3018663394757376b3ac7754d0a8c5a1d496d687f3e7633" May 17 00:10:16.816286 containerd[1478]: time="2025-05-17T00:10:16.814030325Z" level=info msg="TearDown network for sandbox \"acccd57f2ac1bc78c3018663394757376b3ac7754d0a8c5a1d496d687f3e7633\" successfully" May 17 00:10:16.816286 containerd[1478]: time="2025-05-17T00:10:16.814062086Z" level=info msg="StopPodSandbox for \"acccd57f2ac1bc78c3018663394757376b3ac7754d0a8c5a1d496d687f3e7633\" returns successfully" May 17 00:10:16.816286 containerd[1478]: time="2025-05-17T00:10:16.816123244Z" level=info msg="RemovePodSandbox for \"acccd57f2ac1bc78c3018663394757376b3ac7754d0a8c5a1d496d687f3e7633\"" May 17 00:10:16.816286 containerd[1478]: time="2025-05-17T00:10:16.816158525Z" level=info msg="Forcibly stopping sandbox \"acccd57f2ac1bc78c3018663394757376b3ac7754d0a8c5a1d496d687f3e7633\"" May 17 00:10:16.970165 containerd[1478]: 2025-05-17 00:10:16.901 [WARNING][5443] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="acccd57f2ac1bc78c3018663394757376b3ac7754d0a8c5a1d496d687f3e7633" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--3b0dbcbd78-k8s-coredns--674b8bbfcf--5rmrz-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"b9015c7c-53fd-4356-9465-8e7b4b00eae6", ResourceVersion:"979", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 9, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-3b0dbcbd78", ContainerID:"2c318a2973455c1f54e469390ebf7a4cd0866c5914bc578eb0d877cd1d83d319", Pod:"coredns-674b8bbfcf-5rmrz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.2.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6adb06cedb7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:10:16.970165 containerd[1478]: 2025-05-17 00:10:16.901 [INFO][5443] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="acccd57f2ac1bc78c3018663394757376b3ac7754d0a8c5a1d496d687f3e7633" May 17 00:10:16.970165 containerd[1478]: 2025-05-17 00:10:16.901 [INFO][5443] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="acccd57f2ac1bc78c3018663394757376b3ac7754d0a8c5a1d496d687f3e7633" iface="eth0" netns="" May 17 00:10:16.970165 containerd[1478]: 2025-05-17 00:10:16.901 [INFO][5443] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="acccd57f2ac1bc78c3018663394757376b3ac7754d0a8c5a1d496d687f3e7633" May 17 00:10:16.970165 containerd[1478]: 2025-05-17 00:10:16.901 [INFO][5443] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="acccd57f2ac1bc78c3018663394757376b3ac7754d0a8c5a1d496d687f3e7633" May 17 00:10:16.970165 containerd[1478]: 2025-05-17 00:10:16.950 [INFO][5450] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="acccd57f2ac1bc78c3018663394757376b3ac7754d0a8c5a1d496d687f3e7633" HandleID="k8s-pod-network.acccd57f2ac1bc78c3018663394757376b3ac7754d0a8c5a1d496d687f3e7633" Workload="ci--4081--3--3--n--3b0dbcbd78-k8s-coredns--674b8bbfcf--5rmrz-eth0" May 17 00:10:16.970165 containerd[1478]: 2025-05-17 00:10:16.951 [INFO][5450] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:10:16.970165 containerd[1478]: 2025-05-17 00:10:16.952 [INFO][5450] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:10:16.970165 containerd[1478]: 2025-05-17 00:10:16.963 [WARNING][5450] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="acccd57f2ac1bc78c3018663394757376b3ac7754d0a8c5a1d496d687f3e7633" HandleID="k8s-pod-network.acccd57f2ac1bc78c3018663394757376b3ac7754d0a8c5a1d496d687f3e7633" Workload="ci--4081--3--3--n--3b0dbcbd78-k8s-coredns--674b8bbfcf--5rmrz-eth0" May 17 00:10:16.970165 containerd[1478]: 2025-05-17 00:10:16.963 [INFO][5450] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="acccd57f2ac1bc78c3018663394757376b3ac7754d0a8c5a1d496d687f3e7633" HandleID="k8s-pod-network.acccd57f2ac1bc78c3018663394757376b3ac7754d0a8c5a1d496d687f3e7633" Workload="ci--4081--3--3--n--3b0dbcbd78-k8s-coredns--674b8bbfcf--5rmrz-eth0" May 17 00:10:16.970165 containerd[1478]: 2025-05-17 00:10:16.965 [INFO][5450] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:10:16.970165 containerd[1478]: 2025-05-17 00:10:16.967 [INFO][5443] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="acccd57f2ac1bc78c3018663394757376b3ac7754d0a8c5a1d496d687f3e7633" May 17 00:10:16.970625 containerd[1478]: time="2025-05-17T00:10:16.970210549Z" level=info msg="TearDown network for sandbox \"acccd57f2ac1bc78c3018663394757376b3ac7754d0a8c5a1d496d687f3e7633\" successfully" May 17 00:10:16.975370 containerd[1478]: time="2025-05-17T00:10:16.975312764Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"acccd57f2ac1bc78c3018663394757376b3ac7754d0a8c5a1d496d687f3e7633\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:10:16.975529 containerd[1478]: time="2025-05-17T00:10:16.975394966Z" level=info msg="RemovePodSandbox \"acccd57f2ac1bc78c3018663394757376b3ac7754d0a8c5a1d496d687f3e7633\" returns successfully" May 17 00:10:16.977124 containerd[1478]: time="2025-05-17T00:10:16.976743831Z" level=info msg="StopPodSandbox for \"6d99f934cb209fcc4f03e983358bb84f11a808a72b609ad33f4809382dbcf3be\"" May 17 00:10:17.080090 containerd[1478]: 2025-05-17 00:10:17.038 [WARNING][5464] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6d99f934cb209fcc4f03e983358bb84f11a808a72b609ad33f4809382dbcf3be" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--3b0dbcbd78-k8s-calico--apiserver--654599565b--85g84-eth0", GenerateName:"calico-apiserver-654599565b-", Namespace:"calico-apiserver", SelfLink:"", UID:"cc5a0d8a-21f4-4d9d-bd13-f95a58141562", ResourceVersion:"1059", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 9, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"654599565b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-3b0dbcbd78", ContainerID:"4db8268c0873154b35bef0066044e1d6af482044b459dfa803bef728752535ba", Pod:"calico-apiserver-654599565b-85g84", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.2.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali039ce6022f9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:10:17.080090 containerd[1478]: 2025-05-17 00:10:17.039 [INFO][5464] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6d99f934cb209fcc4f03e983358bb84f11a808a72b609ad33f4809382dbcf3be" May 17 00:10:17.080090 containerd[1478]: 2025-05-17 00:10:17.039 [INFO][5464] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6d99f934cb209fcc4f03e983358bb84f11a808a72b609ad33f4809382dbcf3be" iface="eth0" netns="" May 17 00:10:17.080090 containerd[1478]: 2025-05-17 00:10:17.039 [INFO][5464] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6d99f934cb209fcc4f03e983358bb84f11a808a72b609ad33f4809382dbcf3be" May 17 00:10:17.080090 containerd[1478]: 2025-05-17 00:10:17.039 [INFO][5464] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6d99f934cb209fcc4f03e983358bb84f11a808a72b609ad33f4809382dbcf3be" May 17 00:10:17.080090 containerd[1478]: 2025-05-17 00:10:17.061 [INFO][5471] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6d99f934cb209fcc4f03e983358bb84f11a808a72b609ad33f4809382dbcf3be" HandleID="k8s-pod-network.6d99f934cb209fcc4f03e983358bb84f11a808a72b609ad33f4809382dbcf3be" Workload="ci--4081--3--3--n--3b0dbcbd78-k8s-calico--apiserver--654599565b--85g84-eth0" May 17 00:10:17.080090 containerd[1478]: 2025-05-17 00:10:17.061 [INFO][5471] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:10:17.080090 containerd[1478]: 2025-05-17 00:10:17.061 [INFO][5471] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:10:17.080090 containerd[1478]: 2025-05-17 00:10:17.072 [WARNING][5471] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6d99f934cb209fcc4f03e983358bb84f11a808a72b609ad33f4809382dbcf3be" HandleID="k8s-pod-network.6d99f934cb209fcc4f03e983358bb84f11a808a72b609ad33f4809382dbcf3be" Workload="ci--4081--3--3--n--3b0dbcbd78-k8s-calico--apiserver--654599565b--85g84-eth0" May 17 00:10:17.080090 containerd[1478]: 2025-05-17 00:10:17.072 [INFO][5471] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6d99f934cb209fcc4f03e983358bb84f11a808a72b609ad33f4809382dbcf3be" HandleID="k8s-pod-network.6d99f934cb209fcc4f03e983358bb84f11a808a72b609ad33f4809382dbcf3be" Workload="ci--4081--3--3--n--3b0dbcbd78-k8s-calico--apiserver--654599565b--85g84-eth0" May 17 00:10:17.080090 containerd[1478]: 2025-05-17 00:10:17.074 [INFO][5471] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:10:17.080090 containerd[1478]: 2025-05-17 00:10:17.076 [INFO][5464] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6d99f934cb209fcc4f03e983358bb84f11a808a72b609ad33f4809382dbcf3be" May 17 00:10:17.081526 containerd[1478]: time="2025-05-17T00:10:17.080587285Z" level=info msg="TearDown network for sandbox \"6d99f934cb209fcc4f03e983358bb84f11a808a72b609ad33f4809382dbcf3be\" successfully" May 17 00:10:17.081526 containerd[1478]: time="2025-05-17T00:10:17.080617005Z" level=info msg="StopPodSandbox for \"6d99f934cb209fcc4f03e983358bb84f11a808a72b609ad33f4809382dbcf3be\" returns successfully" May 17 00:10:17.082821 containerd[1478]: time="2025-05-17T00:10:17.082781606Z" level=info msg="RemovePodSandbox for \"6d99f934cb209fcc4f03e983358bb84f11a808a72b609ad33f4809382dbcf3be\"" May 17 00:10:17.082900 containerd[1478]: time="2025-05-17T00:10:17.082832767Z" level=info msg="Forcibly stopping sandbox \"6d99f934cb209fcc4f03e983358bb84f11a808a72b609ad33f4809382dbcf3be\"" May 17 00:10:17.178195 containerd[1478]: 2025-05-17 00:10:17.130 [WARNING][5485] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6d99f934cb209fcc4f03e983358bb84f11a808a72b609ad33f4809382dbcf3be" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--3b0dbcbd78-k8s-calico--apiserver--654599565b--85g84-eth0", GenerateName:"calico-apiserver-654599565b-", Namespace:"calico-apiserver", SelfLink:"", UID:"cc5a0d8a-21f4-4d9d-bd13-f95a58141562", ResourceVersion:"1059", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 9, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"654599565b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-3b0dbcbd78", ContainerID:"4db8268c0873154b35bef0066044e1d6af482044b459dfa803bef728752535ba", Pod:"calico-apiserver-654599565b-85g84", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.2.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali039ce6022f9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:10:17.178195 containerd[1478]: 2025-05-17 00:10:17.131 [INFO][5485] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6d99f934cb209fcc4f03e983358bb84f11a808a72b609ad33f4809382dbcf3be" May 17 00:10:17.178195 containerd[1478]: 2025-05-17 00:10:17.131 [INFO][5485] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6d99f934cb209fcc4f03e983358bb84f11a808a72b609ad33f4809382dbcf3be" iface="eth0" netns="" May 17 00:10:17.178195 containerd[1478]: 2025-05-17 00:10:17.131 [INFO][5485] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6d99f934cb209fcc4f03e983358bb84f11a808a72b609ad33f4809382dbcf3be" May 17 00:10:17.178195 containerd[1478]: 2025-05-17 00:10:17.131 [INFO][5485] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6d99f934cb209fcc4f03e983358bb84f11a808a72b609ad33f4809382dbcf3be" May 17 00:10:17.178195 containerd[1478]: 2025-05-17 00:10:17.158 [INFO][5492] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6d99f934cb209fcc4f03e983358bb84f11a808a72b609ad33f4809382dbcf3be" HandleID="k8s-pod-network.6d99f934cb209fcc4f03e983358bb84f11a808a72b609ad33f4809382dbcf3be" Workload="ci--4081--3--3--n--3b0dbcbd78-k8s-calico--apiserver--654599565b--85g84-eth0" May 17 00:10:17.178195 containerd[1478]: 2025-05-17 00:10:17.159 [INFO][5492] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:10:17.178195 containerd[1478]: 2025-05-17 00:10:17.159 [INFO][5492] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:10:17.178195 containerd[1478]: 2025-05-17 00:10:17.172 [WARNING][5492] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6d99f934cb209fcc4f03e983358bb84f11a808a72b609ad33f4809382dbcf3be" HandleID="k8s-pod-network.6d99f934cb209fcc4f03e983358bb84f11a808a72b609ad33f4809382dbcf3be" Workload="ci--4081--3--3--n--3b0dbcbd78-k8s-calico--apiserver--654599565b--85g84-eth0" May 17 00:10:17.178195 containerd[1478]: 2025-05-17 00:10:17.172 [INFO][5492] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6d99f934cb209fcc4f03e983358bb84f11a808a72b609ad33f4809382dbcf3be" HandleID="k8s-pod-network.6d99f934cb209fcc4f03e983358bb84f11a808a72b609ad33f4809382dbcf3be" Workload="ci--4081--3--3--n--3b0dbcbd78-k8s-calico--apiserver--654599565b--85g84-eth0" May 17 00:10:17.178195 containerd[1478]: 2025-05-17 00:10:17.175 [INFO][5492] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:10:17.178195 containerd[1478]: 2025-05-17 00:10:17.176 [INFO][5485] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6d99f934cb209fcc4f03e983358bb84f11a808a72b609ad33f4809382dbcf3be" May 17 00:10:17.179674 containerd[1478]: time="2025-05-17T00:10:17.178247344Z" level=info msg="TearDown network for sandbox \"6d99f934cb209fcc4f03e983358bb84f11a808a72b609ad33f4809382dbcf3be\" successfully" May 17 00:10:17.182284 containerd[1478]: time="2025-05-17T00:10:17.182231138Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6d99f934cb209fcc4f03e983358bb84f11a808a72b609ad33f4809382dbcf3be\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:10:17.182486 containerd[1478]: time="2025-05-17T00:10:17.182322060Z" level=info msg="RemovePodSandbox \"6d99f934cb209fcc4f03e983358bb84f11a808a72b609ad33f4809382dbcf3be\" returns successfully" May 17 00:10:17.183155 containerd[1478]: time="2025-05-17T00:10:17.182888431Z" level=info msg="StopPodSandbox for \"405e8a3eaca632c9d44234624a891830a4e09e8181ceded076e4a57a372e98f9\"" May 17 00:10:17.293524 containerd[1478]: 2025-05-17 00:10:17.239 [WARNING][5506] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="405e8a3eaca632c9d44234624a891830a4e09e8181ceded076e4a57a372e98f9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--3b0dbcbd78-k8s-csi--node--driver--g9lf2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"bf7f5509-6eae-41c9-a82c-194eb8fdf825", ResourceVersion:"1047", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 9, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78f6f74485", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-3b0dbcbd78", ContainerID:"397e814ed288bdc3b2f38604f1a686ae9d938678ae3384aa254dfbc9b065f84d", Pod:"csi-node-driver-g9lf2", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.2.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0056da4d310", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:10:17.293524 containerd[1478]: 2025-05-17 00:10:17.240 [INFO][5506] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="405e8a3eaca632c9d44234624a891830a4e09e8181ceded076e4a57a372e98f9" May 17 00:10:17.293524 containerd[1478]: 2025-05-17 00:10:17.240 [INFO][5506] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="405e8a3eaca632c9d44234624a891830a4e09e8181ceded076e4a57a372e98f9" iface="eth0" netns="" May 17 00:10:17.293524 containerd[1478]: 2025-05-17 00:10:17.240 [INFO][5506] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="405e8a3eaca632c9d44234624a891830a4e09e8181ceded076e4a57a372e98f9" May 17 00:10:17.293524 containerd[1478]: 2025-05-17 00:10:17.240 [INFO][5506] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="405e8a3eaca632c9d44234624a891830a4e09e8181ceded076e4a57a372e98f9" May 17 00:10:17.293524 containerd[1478]: 2025-05-17 00:10:17.267 [INFO][5513] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="405e8a3eaca632c9d44234624a891830a4e09e8181ceded076e4a57a372e98f9" HandleID="k8s-pod-network.405e8a3eaca632c9d44234624a891830a4e09e8181ceded076e4a57a372e98f9" Workload="ci--4081--3--3--n--3b0dbcbd78-k8s-csi--node--driver--g9lf2-eth0" May 17 00:10:17.293524 containerd[1478]: 2025-05-17 00:10:17.267 [INFO][5513] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:10:17.293524 containerd[1478]: 2025-05-17 00:10:17.267 [INFO][5513] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:10:17.293524 containerd[1478]: 2025-05-17 00:10:17.284 [WARNING][5513] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="405e8a3eaca632c9d44234624a891830a4e09e8181ceded076e4a57a372e98f9" HandleID="k8s-pod-network.405e8a3eaca632c9d44234624a891830a4e09e8181ceded076e4a57a372e98f9" Workload="ci--4081--3--3--n--3b0dbcbd78-k8s-csi--node--driver--g9lf2-eth0" May 17 00:10:17.293524 containerd[1478]: 2025-05-17 00:10:17.284 [INFO][5513] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="405e8a3eaca632c9d44234624a891830a4e09e8181ceded076e4a57a372e98f9" HandleID="k8s-pod-network.405e8a3eaca632c9d44234624a891830a4e09e8181ceded076e4a57a372e98f9" Workload="ci--4081--3--3--n--3b0dbcbd78-k8s-csi--node--driver--g9lf2-eth0" May 17 00:10:17.293524 containerd[1478]: 2025-05-17 00:10:17.288 [INFO][5513] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:10:17.293524 containerd[1478]: 2025-05-17 00:10:17.289 [INFO][5506] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="405e8a3eaca632c9d44234624a891830a4e09e8181ceded076e4a57a372e98f9" May 17 00:10:17.293524 containerd[1478]: time="2025-05-17T00:10:17.293357289Z" level=info msg="TearDown network for sandbox \"405e8a3eaca632c9d44234624a891830a4e09e8181ceded076e4a57a372e98f9\" successfully" May 17 00:10:17.293524 containerd[1478]: time="2025-05-17T00:10:17.293384969Z" level=info msg="StopPodSandbox for \"405e8a3eaca632c9d44234624a891830a4e09e8181ceded076e4a57a372e98f9\" returns successfully" May 17 00:10:17.294687 containerd[1478]: time="2025-05-17T00:10:17.294643673Z" level=info msg="RemovePodSandbox for \"405e8a3eaca632c9d44234624a891830a4e09e8181ceded076e4a57a372e98f9\"" May 17 00:10:17.294818 containerd[1478]: time="2025-05-17T00:10:17.294700714Z" level=info msg="Forcibly stopping sandbox \"405e8a3eaca632c9d44234624a891830a4e09e8181ceded076e4a57a372e98f9\"" May 17 00:10:17.383571 containerd[1478]: 2025-05-17 00:10:17.337 [WARNING][5527] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="405e8a3eaca632c9d44234624a891830a4e09e8181ceded076e4a57a372e98f9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--3b0dbcbd78-k8s-csi--node--driver--g9lf2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"bf7f5509-6eae-41c9-a82c-194eb8fdf825", ResourceVersion:"1047", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 9, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78f6f74485", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-3b0dbcbd78", ContainerID:"397e814ed288bdc3b2f38604f1a686ae9d938678ae3384aa254dfbc9b065f84d", Pod:"csi-node-driver-g9lf2", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.2.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0056da4d310", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:10:17.383571 containerd[1478]: 2025-05-17 00:10:17.337 [INFO][5527] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="405e8a3eaca632c9d44234624a891830a4e09e8181ceded076e4a57a372e98f9" May 17 00:10:17.383571 containerd[1478]: 2025-05-17 00:10:17.337 [INFO][5527] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="405e8a3eaca632c9d44234624a891830a4e09e8181ceded076e4a57a372e98f9" iface="eth0" netns="" May 17 00:10:17.383571 containerd[1478]: 2025-05-17 00:10:17.337 [INFO][5527] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="405e8a3eaca632c9d44234624a891830a4e09e8181ceded076e4a57a372e98f9" May 17 00:10:17.383571 containerd[1478]: 2025-05-17 00:10:17.337 [INFO][5527] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="405e8a3eaca632c9d44234624a891830a4e09e8181ceded076e4a57a372e98f9" May 17 00:10:17.383571 containerd[1478]: 2025-05-17 00:10:17.361 [INFO][5534] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="405e8a3eaca632c9d44234624a891830a4e09e8181ceded076e4a57a372e98f9" HandleID="k8s-pod-network.405e8a3eaca632c9d44234624a891830a4e09e8181ceded076e4a57a372e98f9" Workload="ci--4081--3--3--n--3b0dbcbd78-k8s-csi--node--driver--g9lf2-eth0" May 17 00:10:17.383571 containerd[1478]: 2025-05-17 00:10:17.361 [INFO][5534] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:10:17.383571 containerd[1478]: 2025-05-17 00:10:17.361 [INFO][5534] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:10:17.383571 containerd[1478]: 2025-05-17 00:10:17.376 [WARNING][5534] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="405e8a3eaca632c9d44234624a891830a4e09e8181ceded076e4a57a372e98f9" HandleID="k8s-pod-network.405e8a3eaca632c9d44234624a891830a4e09e8181ceded076e4a57a372e98f9" Workload="ci--4081--3--3--n--3b0dbcbd78-k8s-csi--node--driver--g9lf2-eth0" May 17 00:10:17.383571 containerd[1478]: 2025-05-17 00:10:17.376 [INFO][5534] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="405e8a3eaca632c9d44234624a891830a4e09e8181ceded076e4a57a372e98f9" HandleID="k8s-pod-network.405e8a3eaca632c9d44234624a891830a4e09e8181ceded076e4a57a372e98f9" Workload="ci--4081--3--3--n--3b0dbcbd78-k8s-csi--node--driver--g9lf2-eth0" May 17 00:10:17.383571 containerd[1478]: 2025-05-17 00:10:17.378 [INFO][5534] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:10:17.383571 containerd[1478]: 2025-05-17 00:10:17.380 [INFO][5527] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="405e8a3eaca632c9d44234624a891830a4e09e8181ceded076e4a57a372e98f9" May 17 00:10:17.383571 containerd[1478]: time="2025-05-17T00:10:17.383141842Z" level=info msg="TearDown network for sandbox \"405e8a3eaca632c9d44234624a891830a4e09e8181ceded076e4a57a372e98f9\" successfully" May 17 00:10:17.389619 containerd[1478]: time="2025-05-17T00:10:17.389526961Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"405e8a3eaca632c9d44234624a891830a4e09e8181ceded076e4a57a372e98f9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:10:17.389619 containerd[1478]: time="2025-05-17T00:10:17.389608442Z" level=info msg="RemovePodSandbox \"405e8a3eaca632c9d44234624a891830a4e09e8181ceded076e4a57a372e98f9\" returns successfully" May 17 00:10:21.536780 containerd[1478]: time="2025-05-17T00:10:21.536681121Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 17 00:10:21.762868 containerd[1478]: time="2025-05-17T00:10:21.762654723Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:10:21.765127 containerd[1478]: time="2025-05-17T00:10:21.764849764Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:10:21.765127 containerd[1478]: time="2025-05-17T00:10:21.764917805Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 17 00:10:21.765362 kubelet[2663]: E0517 00:10:21.765175 2663 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:10:21.765362 kubelet[2663]: E0517 00:10:21.765220 2663 kuberuntime_image.go:42] 
"Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:10:21.766134 kubelet[2663]: E0517 00:10:21.765350 2663 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jhfxv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-78d55f7ddc-gp74j_calico-system(662625c5-921c-406a-9ef1-d2e70e33e339): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:10:21.766932 kubelet[2663]: E0517 00:10:21.766809 2663 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-gp74j" podUID="662625c5-921c-406a-9ef1-d2e70e33e339" May 17 00:10:22.525165 kubelet[2663]: E0517 00:10:22.525089 2663 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-57fb66bd94-5kh2z" podUID="2bf795f3-2e82-40ad-8128-ea6e3a8aa689" May 17 00:10:25.157642 kubelet[2663]: I0517 00:10:25.157123 2663 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 00:10:34.523233 containerd[1478]: time="2025-05-17T00:10:34.523138680Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 17 00:10:34.767533 containerd[1478]: time="2025-05-17T00:10:34.767476073Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:10:34.769223 containerd[1478]: time="2025-05-17T00:10:34.769139785Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:10:34.769359 containerd[1478]: time="2025-05-17T00:10:34.769261987Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 17 00:10:34.769575 kubelet[2663]: E0517 00:10:34.769505 2663 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:10:34.771531 kubelet[2663]: E0517 00:10:34.769585 2663 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:10:34.771531 kubelet[2663]: E0517 00:10:34.770321 2663 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:75cc68a695ad45e1ba6d530375dd3c59,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-n5z7t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-57fb66bd94-5kh2z_calico-system(2bf795f3-2e82-40ad-8128-ea6e3a8aa689): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:10:34.772894 containerd[1478]: time="2025-05-17T00:10:34.772858416Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 17 00:10:35.006761 containerd[1478]: time="2025-05-17T00:10:35.006703808Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:10:35.011801 containerd[1478]: time="2025-05-17T00:10:35.011649103Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:10:35.011922 containerd[1478]: time="2025-05-17T00:10:35.011840187Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 17 00:10:35.012267 kubelet[2663]: E0517 00:10:35.012053 2663 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:10:35.012380 kubelet[2663]: E0517 00:10:35.012271 2663 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:10:35.012564 kubelet[2663]: E0517 00:10:35.012509 2663 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n5z7t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-57fb66bd94-5kh2z_calico-system(2bf795f3-2e82-40ad-8128-ea6e3a8aa689): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:10:35.013673 kubelet[2663]: E0517 00:10:35.013627 2663 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-57fb66bd94-5kh2z" podUID="2bf795f3-2e82-40ad-8128-ea6e3a8aa689" May 17 00:10:35.523275 kubelet[2663]: E0517 00:10:35.523156 2663 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-gp74j" podUID="662625c5-921c-406a-9ef1-d2e70e33e339" May 17 00:10:46.523743 kubelet[2663]: E0517 00:10:46.523624 2663 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-57fb66bd94-5kh2z" podUID="2bf795f3-2e82-40ad-8128-ea6e3a8aa689" May 17 00:10:49.522893 containerd[1478]: time="2025-05-17T00:10:49.522651531Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 17 00:10:49.764211 containerd[1478]: time="2025-05-17T00:10:49.763955983Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:10:49.765731 containerd[1478]: time="2025-05-17T00:10:49.765575452Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:10:49.765731 containerd[1478]: time="2025-05-17T00:10:49.765697974Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 17 00:10:49.765919 kubelet[2663]: E0517 00:10:49.765883 2663 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" 
image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:10:49.766420 kubelet[2663]: E0517 00:10:49.765937 2663 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:10:49.766420 kubelet[2663]: E0517 00:10:49.766105 2663 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jhfxv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-78d55f7ddc-gp74j_calico-system(662625c5-921c-406a-9ef1-d2e70e33e339): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": 
failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:10:49.768130 kubelet[2663]: E0517 00:10:49.768082 2663 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-gp74j" podUID="662625c5-921c-406a-9ef1-d2e70e33e339" May 17 00:11:00.523130 kubelet[2663]: E0517 00:11:00.523044 2663 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-57fb66bd94-5kh2z" podUID="2bf795f3-2e82-40ad-8128-ea6e3a8aa689" May 17 00:11:01.522536 kubelet[2663]: E0517 00:11:01.521629 2663 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-gp74j" podUID="662625c5-921c-406a-9ef1-d2e70e33e339" May 17 00:11:06.901904 systemd[1]: run-containerd-runc-k8s.io-16b27db4f29ec172a1b389fd4655d4c00a2ed4f2c932d251341539949175a6bf-runc.8aaAbN.mount: Deactivated successfully. 
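From here the log settles into a steady state that repeats for the remainder of this section: kubelet asks containerd to pull ghcr.io/flatcar/calico/whisker:v3.30.0, ghcr.io/flatcar/calico/whisker-backend:v3.30.0, and ghcr.io/flatcar/calico/goldmane:v3.30.0; containerd first requests an anonymous pull token from https://ghcr.io/token, that endpoint answers 403 Forbidden, each pull fails with ErrImagePull, and the pods sit in ImagePullBackOff with growing retry intervals (for whisker, pulls at 00:10:34, 00:11:15, and 00:12:42 in the entries around this point). Nothing here points at a node-side network fault; the visible failure is the registry refusing to issue an anonymous token for these repositories. A minimal sketch for replaying that token request from outside the cluster, assuming only a host with outbound HTTPS and Python 3 (the scope and service parameters are rebuilt from the URLs quoted verbatim in the errors above):

import urllib.error
import urllib.parse
import urllib.request

# Repositories named in the failing pulls logged above.
REPOS = [
    "flatcar/calico/whisker",
    "flatcar/calico/whisker-backend",
    "flatcar/calico/goldmane",
]

for repo in REPOS:
    # Same anonymous token request containerd makes before fetching a manifest,
    # e.g. scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io
    query = urllib.parse.urlencode(
        {"scope": f"repository:{repo}:pull", "service": "ghcr.io"}
    )
    url = f"https://ghcr.io/token?{query}"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            print(repo, resp.status)  # 200: anonymous pull tokens are issued
    except urllib.error.HTTPError as err:
        print(repo, err.code)  # the entries above show 403 Forbidden

If the script also gets 403 from an unrelated network, the repositories simply do not permit anonymous pulls (on GHCR that typically indicates a private or otherwise restricted package), and the usual ways out are to give the pods registry credentials (a kubernetes.io/dockerconfigjson Secret referenced via imagePullSecrets) or to point the Calico deployment at a registry that is reachable anonymously; both are remediations inferred from the 403, not something this log itself confirms. If the script gets 200, the block is specific to this node's egress path.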
May 17 00:11:15.527237 kubelet[2663]: E0517 00:11:15.526611 2663 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-gp74j" podUID="662625c5-921c-406a-9ef1-d2e70e33e339" May 17 00:11:15.528598 containerd[1478]: time="2025-05-17T00:11:15.526924933Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 17 00:11:15.766520 containerd[1478]: time="2025-05-17T00:11:15.766395026Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:11:15.768458 containerd[1478]: time="2025-05-17T00:11:15.768375622Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:11:15.768602 containerd[1478]: time="2025-05-17T00:11:15.768531585Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 17 00:11:15.768813 kubelet[2663]: E0517 00:11:15.768764 2663 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:11:15.768941 kubelet[2663]: E0517 00:11:15.768826 2663 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:11:15.769004 kubelet[2663]: E0517 00:11:15.768948 2663 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:75cc68a695ad45e1ba6d530375dd3c59,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-n5z7t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-57fb66bd94-5kh2z_calico-system(2bf795f3-2e82-40ad-8128-ea6e3a8aa689): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:11:15.772807 containerd[1478]: time="2025-05-17T00:11:15.772761782Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 17 00:11:16.009385 containerd[1478]: time="2025-05-17T00:11:16.008342564Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:11:16.010918 containerd[1478]: time="2025-05-17T00:11:16.010844129Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:11:16.011210 containerd[1478]: time="2025-05-17T00:11:16.010855170Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 17 00:11:16.011332 kubelet[2663]: E0517 00:11:16.011289 2663 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected 
status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:11:16.012121 kubelet[2663]: E0517 00:11:16.012063 2663 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:11:16.012237 kubelet[2663]: E0517 00:11:16.012203 2663 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n5z7t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-57fb66bd94-5kh2z_calico-system(2bf795f3-2e82-40ad-8128-ea6e3a8aa689): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:11:16.013778 kubelet[2663]: E0517 00:11:16.013347 2663 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to 
fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-57fb66bd94-5kh2z" podUID="2bf795f3-2e82-40ad-8128-ea6e3a8aa689" May 17 00:11:28.521117 kubelet[2663]: E0517 00:11:28.520945 2663 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-gp74j" podUID="662625c5-921c-406a-9ef1-d2e70e33e339" May 17 00:11:30.521928 kubelet[2663]: E0517 00:11:30.521773 2663 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-57fb66bd94-5kh2z" podUID="2bf795f3-2e82-40ad-8128-ea6e3a8aa689" May 17 00:11:40.521046 containerd[1478]: time="2025-05-17T00:11:40.520968166Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 17 00:11:40.769628 containerd[1478]: time="2025-05-17T00:11:40.769545407Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:11:40.772708 containerd[1478]: time="2025-05-17T00:11:40.772545023Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to 
fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:11:40.772960 containerd[1478]: time="2025-05-17T00:11:40.772689946Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 17 00:11:40.773038 kubelet[2663]: E0517 00:11:40.772903 2663 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:11:40.773038 kubelet[2663]: E0517 00:11:40.772956 2663 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:11:40.774018 kubelet[2663]: E0517 00:11:40.773118 2663 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jhfxv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-78d55f7ddc-gp74j_calico-system(662625c5-921c-406a-9ef1-d2e70e33e339): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:11:40.774740 kubelet[2663]: E0517 00:11:40.774691 2663 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-gp74j" podUID="662625c5-921c-406a-9ef1-d2e70e33e339" May 17 00:11:43.522601 kubelet[2663]: E0517 00:11:43.522458 2663 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-57fb66bd94-5kh2z" podUID="2bf795f3-2e82-40ad-8128-ea6e3a8aa689" May 17 00:11:51.523578 kubelet[2663]: E0517 00:11:51.523210 2663 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-gp74j" podUID="662625c5-921c-406a-9ef1-d2e70e33e339" May 17 00:11:54.522472 kubelet[2663]: E0517 00:11:54.522370 2663 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-57fb66bd94-5kh2z" podUID="2bf795f3-2e82-40ad-8128-ea6e3a8aa689" May 17 00:12:02.523139 kubelet[2663]: E0517 00:12:02.523064 2663 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-gp74j" podUID="662625c5-921c-406a-9ef1-d2e70e33e339" May 17 00:12:06.521363 kubelet[2663]: E0517 00:12:06.521314 2663 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-57fb66bd94-5kh2z" podUID="2bf795f3-2e82-40ad-8128-ea6e3a8aa689" May 17 00:12:16.520656 kubelet[2663]: E0517 00:12:16.520121 2663 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-gp74j" podUID="662625c5-921c-406a-9ef1-d2e70e33e339" May 17 00:12:17.523479 kubelet[2663]: E0517 00:12:17.523046 2663 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-57fb66bd94-5kh2z" podUID="2bf795f3-2e82-40ad-8128-ea6e3a8aa689" May 17 00:12:28.523050 kubelet[2663]: E0517 00:12:28.521252 2663 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-gp74j" podUID="662625c5-921c-406a-9ef1-d2e70e33e339" May 17 00:12:29.522290 kubelet[2663]: E0517 00:12:29.522031 2663 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-57fb66bd94-5kh2z" podUID="2bf795f3-2e82-40ad-8128-ea6e3a8aa689" May 17 00:12:42.522231 containerd[1478]: time="2025-05-17T00:12:42.521946430Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 17 00:12:42.752908 containerd[1478]: time="2025-05-17T00:12:42.752650260Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:12:42.754396 containerd[1478]: time="2025-05-17T00:12:42.754241171Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:12:42.754600 containerd[1478]: time="2025-05-17T00:12:42.754374413Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 17 00:12:42.754716 kubelet[2663]: E0517 00:12:42.754627 2663 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:12:42.755312 kubelet[2663]: E0517 00:12:42.754704 2663 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:12:42.755312 kubelet[2663]: E0517 00:12:42.754883 2663 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:75cc68a695ad45e1ba6d530375dd3c59,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-n5z7t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-57fb66bd94-5kh2z_calico-system(2bf795f3-2e82-40ad-8128-ea6e3a8aa689): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:12:42.757379 containerd[1478]: time="2025-05-17T00:12:42.757346390Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 17 00:12:42.987393 containerd[1478]: time="2025-05-17T00:12:42.986981840Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:12:42.989676 containerd[1478]: time="2025-05-17T00:12:42.989624931Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 17 00:12:42.990390 containerd[1478]: time="2025-05-17T00:12:42.989576610Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:12:42.990710 kubelet[2663]: E0517 00:12:42.990327 2663 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected 
status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:12:42.990710 kubelet[2663]: E0517 00:12:42.990673 2663 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:12:42.991371 kubelet[2663]: E0517 00:12:42.991068 2663 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n5z7t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-57fb66bd94-5kh2z_calico-system(2bf795f3-2e82-40ad-8128-ea6e3a8aa689): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:12:42.992716 kubelet[2663]: E0517 00:12:42.992642 2663 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to 
fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-57fb66bd94-5kh2z" podUID="2bf795f3-2e82-40ad-8128-ea6e3a8aa689" May 17 00:12:43.521391 kubelet[2663]: E0517 00:12:43.521277 2663 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-gp74j" podUID="662625c5-921c-406a-9ef1-d2e70e33e339" May 17 00:12:55.524969 kubelet[2663]: E0517 00:12:55.524859 2663 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-57fb66bd94-5kh2z" podUID="2bf795f3-2e82-40ad-8128-ea6e3a8aa689" May 17 00:12:56.520850 kubelet[2663]: E0517 00:12:56.520585 2663 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-gp74j" podUID="662625c5-921c-406a-9ef1-d2e70e33e339" May 17 00:13:09.527433 kubelet[2663]: E0517 00:13:09.527316 2663 pod_workers.go:1301] "Error 
syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-57fb66bd94-5kh2z" podUID="2bf795f3-2e82-40ad-8128-ea6e3a8aa689" May 17 00:13:11.525666 containerd[1478]: time="2025-05-17T00:13:11.525626658Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 17 00:13:11.772258 containerd[1478]: time="2025-05-17T00:13:11.772015058Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:13:11.773868 containerd[1478]: time="2025-05-17T00:13:11.773739571Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:13:11.774078 containerd[1478]: time="2025-05-17T00:13:11.773820973Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 17 00:13:11.774355 kubelet[2663]: E0517 00:13:11.774271 2663 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:13:11.774730 kubelet[2663]: E0517 00:13:11.774357 2663 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:13:11.774730 kubelet[2663]: E0517 00:13:11.774566 2663 kuberuntime_manager.go:1358] "Unhandled Error" 
err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jhfxv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-78d55f7ddc-gp74j_calico-system(662625c5-921c-406a-9ef1-d2e70e33e339): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:13:11.776209 kubelet[2663]: E0517 00:13:11.776072 2663 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-gp74j" podUID="662625c5-921c-406a-9ef1-d2e70e33e339" May 17 00:13:23.521385 kubelet[2663]: E0517 00:13:23.521158 2663 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-gp74j" podUID="662625c5-921c-406a-9ef1-d2e70e33e339" May 17 00:13:24.523752 kubelet[2663]: E0517 00:13:24.523322 2663 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-57fb66bd94-5kh2z" podUID="2bf795f3-2e82-40ad-8128-ea6e3a8aa689" May 17 00:13:35.523285 kubelet[2663]: E0517 00:13:35.523212 2663 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-gp74j" podUID="662625c5-921c-406a-9ef1-d2e70e33e339" May 17 00:13:39.530320 kubelet[2663]: E0517 00:13:39.530270 2663 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-57fb66bd94-5kh2z" podUID="2bf795f3-2e82-40ad-8128-ea6e3a8aa689" May 17 00:13:48.520633 kubelet[2663]: E0517 00:13:48.520573 2663 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-gp74j" podUID="662625c5-921c-406a-9ef1-d2e70e33e339" May 17 00:13:51.522302 kubelet[2663]: E0517 00:13:51.522158 2663 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-57fb66bd94-5kh2z" podUID="2bf795f3-2e82-40ad-8128-ea6e3a8aa689" May 17 00:14:02.524457 kubelet[2663]: E0517 00:14:02.523783 2663 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": 
ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-57fb66bd94-5kh2z" podUID="2bf795f3-2e82-40ad-8128-ea6e3a8aa689" May 17 00:14:02.525939 kubelet[2663]: E0517 00:14:02.525607 2663 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-gp74j" podUID="662625c5-921c-406a-9ef1-d2e70e33e339" May 17 00:14:07.197797 systemd[1]: Started sshd@7-168.119.99.67:22-139.178.68.195:53962.service - OpenSSH per-connection server daemon (139.178.68.195:53962). May 17 00:14:08.203629 sshd[6026]: Accepted publickey for core from 139.178.68.195 port 53962 ssh2: RSA SHA256:3DH0lUPdHwyQJn3I0ENA7R+xFRfXGfTtJFJ5l1PYReI May 17 00:14:08.206555 sshd[6026]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:14:08.214322 systemd-logind[1457]: New session 8 of user core. May 17 00:14:08.216651 systemd[1]: Started session-8.scope - Session 8 of User core. May 17 00:14:08.993156 sshd[6026]: pam_unix(sshd:session): session closed for user core May 17 00:14:08.998735 systemd-logind[1457]: Session 8 logged out. Waiting for processes to exit. May 17 00:14:08.999565 systemd[1]: sshd@7-168.119.99.67:22-139.178.68.195:53962.service: Deactivated successfully. May 17 00:14:09.003705 systemd[1]: session-8.scope: Deactivated successfully. May 17 00:14:09.005589 systemd-logind[1457]: Removed session 8. May 17 00:14:13.521067 kubelet[2663]: E0517 00:14:13.520646 2663 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-gp74j" podUID="662625c5-921c-406a-9ef1-d2e70e33e339" May 17 00:14:14.170838 systemd[1]: Started sshd@8-168.119.99.67:22-139.178.68.195:48092.service - OpenSSH per-connection server daemon (139.178.68.195:48092). May 17 00:14:15.164086 sshd[6051]: Accepted publickey for core from 139.178.68.195 port 48092 ssh2: RSA SHA256:3DH0lUPdHwyQJn3I0ENA7R+xFRfXGfTtJFJ5l1PYReI May 17 00:14:15.167093 sshd[6051]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:14:15.172171 systemd-logind[1457]: New session 9 of user core. 
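Every ImagePullBackOff in the entries above fails at the same step of containerd's registry handshake: the anonymous bearer-token fetch from ghcr.io returns 403 Forbidden before any manifest is requested. A minimal sketch for reproducing just that step off-cluster, with the token URL copied verbatim from the errors above (stdlib only; the only assumption is outbound access to ghcr.io):

    import urllib.error
    import urllib.request

    # Token endpoint copied verbatim from the containerd/kubelet errors above.
    TOKEN_URL = ("https://ghcr.io/token"
                 "?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull"
                 "&service=ghcr.io")

    try:
        with urllib.request.urlopen(TOKEN_URL, timeout=10) as resp:
            print("status:", resp.status)  # 200 would mean anonymous pulls work
    except urllib.error.HTTPError as err:
        # Path taken in this log: ghcr.io answers 403 Forbidden, so containerd
        # never obtains a bearer token and the pull fails with ErrImagePull.
        print("status:", err.code, err.reason)

A 403 at the token endpoint (rather than a 401 challenge) suggests the registry is refusing anonymous access to the repository outright, which matches containerd reporting "failed to authorize" instead of retrying with credentials.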
May 17 00:14:15.178871 systemd[1]: Started session-9.scope - Session 9 of User core. May 17 00:14:15.955838 sshd[6051]: pam_unix(sshd:session): session closed for user core May 17 00:14:15.960980 systemd[1]: sshd@8-168.119.99.67:22-139.178.68.195:48092.service: Deactivated successfully. May 17 00:14:15.963275 systemd[1]: session-9.scope: Deactivated successfully. May 17 00:14:15.966101 systemd-logind[1457]: Session 9 logged out. Waiting for processes to exit. May 17 00:14:15.967448 systemd-logind[1457]: Removed session 9. May 17 00:14:16.147061 systemd[1]: Started sshd@9-168.119.99.67:22-139.178.68.195:48106.service - OpenSSH per-connection server daemon (139.178.68.195:48106). May 17 00:14:16.522163 kubelet[2663]: E0517 00:14:16.522121 2663 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-57fb66bd94-5kh2z" podUID="2bf795f3-2e82-40ad-8128-ea6e3a8aa689" May 17 00:14:17.149958 sshd[6085]: Accepted publickey for core from 139.178.68.195 port 48106 ssh2: RSA SHA256:3DH0lUPdHwyQJn3I0ENA7R+xFRfXGfTtJFJ5l1PYReI May 17 00:14:17.152074 sshd[6085]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:14:17.158227 systemd-logind[1457]: New session 10 of user core. May 17 00:14:17.166775 systemd[1]: Started session-10.scope - Session 10 of User core. May 17 00:14:17.971384 sshd[6085]: pam_unix(sshd:session): session closed for user core May 17 00:14:17.977067 systemd-logind[1457]: Session 10 logged out. Waiting for processes to exit. May 17 00:14:17.977629 systemd[1]: sshd@9-168.119.99.67:22-139.178.68.195:48106.service: Deactivated successfully. May 17 00:14:17.981730 systemd[1]: session-10.scope: Deactivated successfully. May 17 00:14:17.984577 systemd-logind[1457]: Removed session 10. May 17 00:14:18.153026 systemd[1]: Started sshd@10-168.119.99.67:22-139.178.68.195:48118.service - OpenSSH per-connection server daemon (139.178.68.195:48118). May 17 00:14:19.144337 sshd[6096]: Accepted publickey for core from 139.178.68.195 port 48118 ssh2: RSA SHA256:3DH0lUPdHwyQJn3I0ENA7R+xFRfXGfTtJFJ5l1PYReI May 17 00:14:19.146551 sshd[6096]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:14:19.152106 systemd-logind[1457]: New session 11 of user core. May 17 00:14:19.157739 systemd[1]: Started session-11.scope - Session 11 of User core. 
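The sshd and systemd-logind entries interleaved with the pull errors follow a fixed pattern: Accepted publickey, session opened, New session N, Started session-N.scope, session closed, Removed session N. A hypothetical helper for pairing opens with closes when auditing an extract like this one (the regexes assume only the systemd-logind phrasing visible above):

    import re

    # Phrasing taken from the systemd-logind entries in this log.
    OPENED = re.compile(r"New session (\d+) of user (\w+)\.")
    REMOVED = re.compile(r"Removed session (\d+)\.")

    def closed_sessions(lines):
        """Yield (session_id, user) for sessions that open and close in the extract."""
        open_sessions = {}
        for line in lines:
            if m := OPENED.search(line):
                open_sessions[m.group(1)] = m.group(2)
            elif (m := REMOVED.search(line)) and m.group(1) in open_sessions:
                yield m.group(1), open_sessions.pop(m.group(1))

Run over the entries above, this pairs sessions 8 through 20, each opened by user core and closed again within a few seconds.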
May 17 00:14:19.923032 sshd[6096]: pam_unix(sshd:session): session closed for user core May 17 00:14:19.929909 systemd[1]: sshd@10-168.119.99.67:22-139.178.68.195:48118.service: Deactivated successfully. May 17 00:14:19.930116 systemd-logind[1457]: Session 11 logged out. Waiting for processes to exit. May 17 00:14:19.934468 systemd[1]: session-11.scope: Deactivated successfully. May 17 00:14:19.936067 systemd-logind[1457]: Removed session 11. May 17 00:14:25.102965 systemd[1]: Started sshd@11-168.119.99.67:22-139.178.68.195:35880.service - OpenSSH per-connection server daemon (139.178.68.195:35880). May 17 00:14:26.079691 sshd[6115]: Accepted publickey for core from 139.178.68.195 port 35880 ssh2: RSA SHA256:3DH0lUPdHwyQJn3I0ENA7R+xFRfXGfTtJFJ5l1PYReI May 17 00:14:26.080880 sshd[6115]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:14:26.089062 systemd-logind[1457]: New session 12 of user core. May 17 00:14:26.102889 systemd[1]: Started session-12.scope - Session 12 of User core. May 17 00:14:26.841362 sshd[6115]: pam_unix(sshd:session): session closed for user core May 17 00:14:26.846269 systemd-logind[1457]: Session 12 logged out. Waiting for processes to exit. May 17 00:14:26.847241 systemd[1]: sshd@11-168.119.99.67:22-139.178.68.195:35880.service: Deactivated successfully. May 17 00:14:26.850177 systemd[1]: session-12.scope: Deactivated successfully. May 17 00:14:26.851959 systemd-logind[1457]: Removed session 12. May 17 00:14:28.522580 kubelet[2663]: E0517 00:14:28.521822 2663 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-gp74j" podUID="662625c5-921c-406a-9ef1-d2e70e33e339" May 17 00:14:31.523635 kubelet[2663]: E0517 00:14:31.523211 2663 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-57fb66bd94-5kh2z" podUID="2bf795f3-2e82-40ad-8128-ea6e3a8aa689" May 17 00:14:32.028819 systemd[1]: Started 
sshd@12-168.119.99.67:22-139.178.68.195:35890.service - OpenSSH per-connection server daemon (139.178.68.195:35890). May 17 00:14:33.023524 sshd[6128]: Accepted publickey for core from 139.178.68.195 port 35890 ssh2: RSA SHA256:3DH0lUPdHwyQJn3I0ENA7R+xFRfXGfTtJFJ5l1PYReI May 17 00:14:33.026827 sshd[6128]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:14:33.036652 systemd-logind[1457]: New session 13 of user core. May 17 00:14:33.044500 systemd[1]: Started session-13.scope - Session 13 of User core. May 17 00:14:33.800210 sshd[6128]: pam_unix(sshd:session): session closed for user core May 17 00:14:33.805745 systemd[1]: sshd@12-168.119.99.67:22-139.178.68.195:35890.service: Deactivated successfully. May 17 00:14:33.808587 systemd[1]: session-13.scope: Deactivated successfully. May 17 00:14:33.810086 systemd-logind[1457]: Session 13 logged out. Waiting for processes to exit. May 17 00:14:33.811373 systemd-logind[1457]: Removed session 13. May 17 00:14:38.972954 systemd[1]: Started sshd@13-168.119.99.67:22-139.178.68.195:55780.service - OpenSSH per-connection server daemon (139.178.68.195:55780). May 17 00:14:39.955867 sshd[6201]: Accepted publickey for core from 139.178.68.195 port 55780 ssh2: RSA SHA256:3DH0lUPdHwyQJn3I0ENA7R+xFRfXGfTtJFJ5l1PYReI May 17 00:14:39.958873 sshd[6201]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:14:39.969976 systemd-logind[1457]: New session 14 of user core. May 17 00:14:39.976672 systemd[1]: Started session-14.scope - Session 14 of User core. May 17 00:14:40.705447 sshd[6201]: pam_unix(sshd:session): session closed for user core May 17 00:14:40.713010 systemd[1]: sshd@13-168.119.99.67:22-139.178.68.195:55780.service: Deactivated successfully. May 17 00:14:40.719798 systemd[1]: session-14.scope: Deactivated successfully. May 17 00:14:40.724230 systemd-logind[1457]: Session 14 logged out. Waiting for processes to exit. May 17 00:14:40.726007 systemd-logind[1457]: Removed session 14. May 17 00:14:40.886286 systemd[1]: Started sshd@14-168.119.99.67:22-139.178.68.195:55796.service - OpenSSH per-connection server daemon (139.178.68.195:55796). May 17 00:14:41.884386 sshd[6214]: Accepted publickey for core from 139.178.68.195 port 55796 ssh2: RSA SHA256:3DH0lUPdHwyQJn3I0ENA7R+xFRfXGfTtJFJ5l1PYReI May 17 00:14:41.887414 sshd[6214]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:14:41.895507 systemd-logind[1457]: New session 15 of user core. May 17 00:14:41.901898 systemd[1]: Started session-15.scope - Session 15 of User core. 
May 17 00:14:42.522151 kubelet[2663]: E0517 00:14:42.522030 2663 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-gp74j" podUID="662625c5-921c-406a-9ef1-d2e70e33e339" May 17 00:14:42.523868 kubelet[2663]: E0517 00:14:42.523769 2663 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-57fb66bd94-5kh2z" podUID="2bf795f3-2e82-40ad-8128-ea6e3a8aa689" May 17 00:14:42.827293 sshd[6214]: pam_unix(sshd:session): session closed for user core May 17 00:14:42.833139 systemd[1]: sshd@14-168.119.99.67:22-139.178.68.195:55796.service: Deactivated successfully. May 17 00:14:42.838566 systemd[1]: session-15.scope: Deactivated successfully. May 17 00:14:42.841240 systemd-logind[1457]: Session 15 logged out. Waiting for processes to exit. May 17 00:14:42.843469 systemd-logind[1457]: Removed session 15. May 17 00:14:43.009841 systemd[1]: Started sshd@15-168.119.99.67:22-139.178.68.195:55804.service - OpenSSH per-connection server daemon (139.178.68.195:55804). May 17 00:14:44.020910 sshd[6225]: Accepted publickey for core from 139.178.68.195 port 55804 ssh2: RSA SHA256:3DH0lUPdHwyQJn3I0ENA7R+xFRfXGfTtJFJ5l1PYReI May 17 00:14:44.023245 sshd[6225]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:14:44.029207 systemd-logind[1457]: New session 16 of user core. May 17 00:14:44.035662 systemd[1]: Started session-16.scope - Session 16 of User core. May 17 00:14:45.747632 sshd[6225]: pam_unix(sshd:session): session closed for user core May 17 00:14:45.752736 systemd-logind[1457]: Session 16 logged out. Waiting for processes to exit. May 17 00:14:45.753350 systemd[1]: sshd@15-168.119.99.67:22-139.178.68.195:55804.service: Deactivated successfully. May 17 00:14:45.757042 systemd[1]: session-16.scope: Deactivated successfully. May 17 00:14:45.758503 systemd-logind[1457]: Removed session 16. 
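The cadence here is worth separating: the pod workers re-log the pending backoff on every sync, roughly every 12 to 14 seconds, while actual registry pulls (like the containerd attempt at 00:13:11) are spaced out by kubelet's image-pull backoff, which upstream defaults to a doubling delay from 10s up to a 300s ceiling. A sketch of that schedule, assuming those upstream defaults:

    # Doubling backoff with a ceiling, the shape kubelet applies between
    # image pull attempts (10s initial and 300s cap are upstream defaults).
    def backoff_schedule(initial=10.0, cap=300.0, attempts=8):
        delay = initial
        for _ in range(attempts):
            yield delay
            delay = min(delay * 2, cap)

    print(list(backoff_schedule()))
    # -> [10.0, 20.0, 40.0, 80.0, 160.0, 300.0, 300.0, 300.0]

By this point in the log both pods would have reached the ceiling, which is consistent with only one fresh containerd pull attempt appearing in this window while the cached backoff error repeats.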
May 17 00:14:45.927959 systemd[1]: Started sshd@16-168.119.99.67:22-139.178.68.195:55138.service - OpenSSH per-connection server daemon (139.178.68.195:55138). May 17 00:14:46.925571 sshd[6244]: Accepted publickey for core from 139.178.68.195 port 55138 ssh2: RSA SHA256:3DH0lUPdHwyQJn3I0ENA7R+xFRfXGfTtJFJ5l1PYReI May 17 00:14:46.927683 sshd[6244]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:14:46.934186 systemd-logind[1457]: New session 17 of user core. May 17 00:14:46.939668 systemd[1]: Started session-17.scope - Session 17 of User core. May 17 00:14:47.828334 sshd[6244]: pam_unix(sshd:session): session closed for user core May 17 00:14:47.832749 systemd[1]: sshd@16-168.119.99.67:22-139.178.68.195:55138.service: Deactivated successfully. May 17 00:14:47.836054 systemd[1]: session-17.scope: Deactivated successfully. May 17 00:14:47.837224 systemd-logind[1457]: Session 17 logged out. Waiting for processes to exit. May 17 00:14:47.839619 systemd-logind[1457]: Removed session 17. May 17 00:14:47.871874 update_engine[1459]: I20250517 00:14:47.870563 1459 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs May 17 00:14:47.871874 update_engine[1459]: I20250517 00:14:47.870621 1459 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs May 17 00:14:47.871874 update_engine[1459]: I20250517 00:14:47.870954 1459 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs May 17 00:14:47.871874 update_engine[1459]: I20250517 00:14:47.871428 1459 omaha_request_params.cc:62] Current group set to lts May 17 00:14:47.871874 update_engine[1459]: I20250517 00:14:47.871582 1459 update_attempter.cc:499] Already updated boot flags. Skipping. May 17 00:14:47.871874 update_engine[1459]: I20250517 00:14:47.871598 1459 update_attempter.cc:643] Scheduling an action processor start. May 17 00:14:47.871874 update_engine[1459]: I20250517 00:14:47.871619 1459 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction May 17 00:14:47.878417 update_engine[1459]: I20250517 00:14:47.877660 1459 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs May 17 00:14:47.878417 update_engine[1459]: I20250517 00:14:47.877810 1459 omaha_request_action.cc:271] Posting an Omaha request to disabled May 17 00:14:47.878417 update_engine[1459]: I20250517 00:14:47.877822 1459 omaha_request_action.cc:272] Request: May 17 00:14:47.878417 update_engine[1459]: [multi-line Omaha request XML not preserved in this capture] May 17 00:14:47.878417 update_engine[1459]: I20250517 00:14:47.877831 1459 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 17 00:14:47.879571 locksmithd[1488]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 May 17 00:14:47.882533 update_engine[1459]: I20250517 00:14:47.882421 1459 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 17 00:14:47.883153 update_engine[1459]: I20250517 00:14:47.883090 1459 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
May 17 00:14:47.884545 update_engine[1459]: E20250517 00:14:47.884486 1459 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 17 00:14:47.884650 update_engine[1459]: I20250517 00:14:47.884601 1459 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 May 17 00:14:48.008300 systemd[1]: Started sshd@17-168.119.99.67:22-139.178.68.195:55140.service - OpenSSH per-connection server daemon (139.178.68.195:55140). May 17 00:14:48.995116 sshd[6255]: Accepted publickey for core from 139.178.68.195 port 55140 ssh2: RSA SHA256:3DH0lUPdHwyQJn3I0ENA7R+xFRfXGfTtJFJ5l1PYReI May 17 00:14:48.997464 sshd[6255]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:14:49.005068 systemd-logind[1457]: New session 18 of user core. May 17 00:14:49.009696 systemd[1]: Started session-18.scope - Session 18 of User core. May 17 00:14:49.754207 sshd[6255]: pam_unix(sshd:session): session closed for user core May 17 00:14:49.759912 systemd-logind[1457]: Session 18 logged out. Waiting for processes to exit. May 17 00:14:49.760454 systemd[1]: sshd@17-168.119.99.67:22-139.178.68.195:55140.service: Deactivated successfully. May 17 00:14:49.764947 systemd[1]: session-18.scope: Deactivated successfully. May 17 00:14:49.766737 systemd-logind[1457]: Removed session 18. May 17 00:14:54.934844 systemd[1]: Started sshd@18-168.119.99.67:22-139.178.68.195:59300.service - OpenSSH per-connection server daemon (139.178.68.195:59300). May 17 00:14:55.524945 kubelet[2663]: E0517 00:14:55.524791 2663 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-57fb66bd94-5kh2z" podUID="2bf795f3-2e82-40ad-8128-ea6e3a8aa689" May 17 00:14:55.947908 sshd[6272]: Accepted publickey for core from 139.178.68.195 port 59300 ssh2: RSA SHA256:3DH0lUPdHwyQJn3I0ENA7R+xFRfXGfTtJFJ5l1PYReI May 17 00:14:55.949974 sshd[6272]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:14:55.956239 systemd-logind[1457]: New session 19 of user core. May 17 00:14:55.964736 systemd[1]: Started session-19.scope - Session 19 of User core. 
May 17 00:14:56.520230 kubelet[2663]: E0517 00:14:56.520171 2663 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-gp74j" podUID="662625c5-921c-406a-9ef1-d2e70e33e339" May 17 00:14:56.718935 sshd[6272]: pam_unix(sshd:session): session closed for user core May 17 00:14:56.724861 systemd[1]: sshd@18-168.119.99.67:22-139.178.68.195:59300.service: Deactivated successfully. May 17 00:14:56.726662 systemd[1]: session-19.scope: Deactivated successfully. May 17 00:14:56.727722 systemd-logind[1457]: Session 19 logged out. Waiting for processes to exit. May 17 00:14:56.728930 systemd-logind[1457]: Removed session 19. May 17 00:14:57.871550 update_engine[1459]: I20250517 00:14:57.871425 1459 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 17 00:14:57.872009 update_engine[1459]: I20250517 00:14:57.871846 1459 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 17 00:14:57.872217 update_engine[1459]: I20250517 00:14:57.872159 1459 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. May 17 00:14:57.872984 update_engine[1459]: E20250517 00:14:57.872910 1459 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 17 00:14:57.873085 update_engine[1459]: I20250517 00:14:57.872996 1459 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 May 17 00:15:01.899016 systemd[1]: Started sshd@19-168.119.99.67:22-139.178.68.195:59312.service - OpenSSH per-connection server daemon (139.178.68.195:59312). May 17 00:15:02.381191 systemd[1]: run-containerd-runc-k8s.io-9fba8f4a00bb9ea64c1ca510402ea2c1b7269e2b5fe5773b4330d5df902fc26b-runc.c8XOWV.mount: Deactivated successfully. May 17 00:15:02.895836 sshd[6286]: Accepted publickey for core from 139.178.68.195 port 59312 ssh2: RSA SHA256:3DH0lUPdHwyQJn3I0ENA7R+xFRfXGfTtJFJ5l1PYReI May 17 00:15:02.899786 sshd[6286]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:15:02.905010 systemd-logind[1457]: New session 20 of user core. May 17 00:15:02.910645 systemd[1]: Started session-20.scope - Session 20 of User core. May 17 00:15:03.688784 sshd[6286]: pam_unix(sshd:session): session closed for user core May 17 00:15:03.694156 systemd[1]: sshd@19-168.119.99.67:22-139.178.68.195:59312.service: Deactivated successfully. May 17 00:15:03.697321 systemd[1]: session-20.scope: Deactivated successfully. May 17 00:15:03.699312 systemd-logind[1457]: Session 20 logged out. Waiting for processes to exit. May 17 00:15:03.703056 systemd-logind[1457]: Removed session 20. 
May 17 00:15:07.522321 kubelet[2663]: E0517 00:15:07.521992 2663 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-gp74j" podUID="662625c5-921c-406a-9ef1-d2e70e33e339" May 17 00:15:07.871023 update_engine[1459]: I20250517 00:15:07.870661 1459 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 17 00:15:07.871686 update_engine[1459]: I20250517 00:15:07.871109 1459 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 17 00:15:07.871686 update_engine[1459]: I20250517 00:15:07.871407 1459 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. May 17 00:15:07.872851 update_engine[1459]: E20250517 00:15:07.872583 1459 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 17 00:15:07.872851 update_engine[1459]: I20250517 00:15:07.872709 1459 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 May 17 00:15:08.522132 kubelet[2663]: E0517 00:15:08.522066 2663 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-57fb66bd94-5kh2z" podUID="2bf795f3-2e82-40ad-8128-ea6e3a8aa689" May 17 00:15:15.202506 systemd[1]: run-containerd-runc-k8s.io-16b27db4f29ec172a1b389fd4655d4c00a2ed4f2c932d251341539949175a6bf-runc.qILHAr.mount: Deactivated successfully. May 17 00:15:17.877580 update_engine[1459]: I20250517 00:15:17.877008 1459 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 17 00:15:17.877580 update_engine[1459]: I20250517 00:15:17.877477 1459 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 17 00:15:17.878117 update_engine[1459]: I20250517 00:15:17.877805 1459 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
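Each update_engine pass above is identical: set up the transfer, fail to resolve the literal hostname "disabled", log a numbered retry, with the transfer abandoned after the third retry in the entries that follow. The ~10 s spacing and the three-retry limit are read off the timestamps (00:14:47, 00:14:57, 00:15:07, 00:15:17), not from update_engine's source; a sketch of that observed pattern:

    import socket
    import time

    def fetch_with_retries(host="disabled", port=443, retries=3, delay=10.0):
        """Mimic the observed fetcher behavior: initial try plus three retries."""
        for attempt in range(retries + 1):
            try:
                socket.getaddrinfo(host, port)  # the DNS step failing in this log
                return True
            except socket.gaierror:
                if attempt < retries:
                    print(f"No HTTP response, retry {attempt + 1}")
                    time.sleep(delay)
        return False  # surfaces below as "Transfer resulted in an error (0)"

Unless a host literally named "disabled" resolves on the machine running it, this returns False after the third retry, matching the failure sequence logged next.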
May 17 00:15:17.878871 update_engine[1459]: E20250517 00:15:17.878799 1459 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 17 00:15:17.878991 update_engine[1459]: I20250517 00:15:17.878908 1459 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded May 17 00:15:17.878991 update_engine[1459]: I20250517 00:15:17.878928 1459 omaha_request_action.cc:617] Omaha request response: May 17 00:15:17.879075 update_engine[1459]: E20250517 00:15:17.879048 1459 omaha_request_action.cc:636] Omaha request network transfer failed. May 17 00:15:17.879131 update_engine[1459]: I20250517 00:15:17.879084 1459 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. May 17 00:15:17.879131 update_engine[1459]: I20250517 00:15:17.879098 1459 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction May 17 00:15:17.879131 update_engine[1459]: I20250517 00:15:17.879109 1459 update_attempter.cc:306] Processing Done. May 17 00:15:17.879223 update_engine[1459]: E20250517 00:15:17.879132 1459 update_attempter.cc:619] Update failed. May 17 00:15:17.879223 update_engine[1459]: I20250517 00:15:17.879143 1459 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse May 17 00:15:17.879223 update_engine[1459]: I20250517 00:15:17.879154 1459 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) May 17 00:15:17.879223 update_engine[1459]: I20250517 00:15:17.879165 1459 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. May 17 00:15:17.879693 update_engine[1459]: I20250517 00:15:17.879540 1459 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction May 17 00:15:17.879693 update_engine[1459]: I20250517 00:15:17.879603 1459 omaha_request_action.cc:271] Posting an Omaha request to disabled May 17 00:15:17.879693 update_engine[1459]: I20250517 00:15:17.879623 1459 omaha_request_action.cc:272] Request: May 17 00:15:17.879693 update_engine[1459]: [multi-line Omaha request XML not preserved in this capture] May 17 00:15:17.879693 update_engine[1459]: I20250517 00:15:17.879637 1459 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 17 00:15:17.879988 locksmithd[1488]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 May 17 00:15:17.880429 update_engine[1459]: I20250517 00:15:17.879895 1459 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 17 00:15:17.880429 update_engine[1459]: I20250517 00:15:17.880144 1459 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
May 17 00:15:17.881023 update_engine[1459]: E20250517 00:15:17.880950 1459 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 17 00:15:17.881093 update_engine[1459]: I20250517 00:15:17.881033 1459 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded May 17 00:15:17.881093 update_engine[1459]: I20250517 00:15:17.881048 1459 omaha_request_action.cc:617] Omaha request response: May 17 00:15:17.881093 update_engine[1459]: I20250517 00:15:17.881060 1459 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction May 17 00:15:17.881093 update_engine[1459]: I20250517 00:15:17.881068 1459 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction May 17 00:15:17.881093 update_engine[1459]: I20250517 00:15:17.881076 1459 update_attempter.cc:306] Processing Done. May 17 00:15:17.881093 update_engine[1459]: I20250517 00:15:17.881084 1459 update_attempter.cc:310] Error event sent. May 17 00:15:17.881239 update_engine[1459]: I20250517 00:15:17.881098 1459 update_check_scheduler.cc:74] Next update check in 42m46s May 17 00:15:17.881481 locksmithd[1488]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 May 17 00:15:21.520723 kubelet[2663]: E0517 00:15:21.520283 2663 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-gp74j" podUID="662625c5-921c-406a-9ef1-d2e70e33e339" May 17 00:15:22.522684 kubelet[2663]: E0517 00:15:22.522116 2663 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-57fb66bd94-5kh2z" podUID="2bf795f3-2e82-40ad-8128-ea6e3a8aa689"
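The terminal sequence above (Omaha request failed, error 2000 mapped to kActionCodeOmahaErrorInHTTPResponse, error event sent, next check in 42m46s) follows from the server name alone: per Flatcar's documentation, setting SERVER=disabled in update.conf makes update_engine post Omaha requests to a host literally named "disabled", which never resolves, so update checks fail permanently but harmlessly. A sketch of how one might confirm the configured server on a node (the path is Flatcar's documented override location; the parsing is illustrative):

    # Look up the Omaha SERVER value the way an operator might check it.
    def configured_update_server(path="/etc/flatcar/update.conf"):
        try:
            with open(path) as conf:
                for line in conf:
                    key, sep, value = line.strip().partition("=")
                    if sep and key == "SERVER":
                        return value
        except FileNotFoundError:
            pass
        return None  # falls back to the image's baked-in default server

    print("Omaha server:", configured_update_server())  # "disabled" on this node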