May 17 00:26:03.834536 kernel: Linux version 6.6.90-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri May 16 22:44:56 -00 2025 May 17 00:26:03.834555 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=6b60288baeea1613a76a6f06a8f0e8edc178eae4857ce00eac42d48e92ed015e May 17 00:26:03.834563 kernel: BIOS-provided physical RAM map: May 17 00:26:03.834568 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable May 17 00:26:03.834572 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved May 17 00:26:03.834577 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved May 17 00:26:03.834582 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007cfdbfff] usable May 17 00:26:03.834587 kernel: BIOS-e820: [mem 0x000000007cfdc000-0x000000007cffffff] reserved May 17 00:26:03.834592 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved May 17 00:26:03.834597 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved May 17 00:26:03.834602 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved May 17 00:26:03.834606 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved May 17 00:26:03.834611 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved May 17 00:26:03.834615 kernel: NX (Execute Disable) protection: active May 17 00:26:03.834622 kernel: APIC: Static calls initialized May 17 00:26:03.834627 kernel: SMBIOS 3.0.0 present. 
May 17 00:26:03.834632 kernel: DMI: Hetzner vServer/Standard PC (Q35 + ICH9, 2009), BIOS 20171111 11/11/2017 May 17 00:26:03.834637 kernel: Hypervisor detected: KVM May 17 00:26:03.834642 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 May 17 00:26:03.834647 kernel: kvm-clock: using sched offset of 3128730770 cycles May 17 00:26:03.834652 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns May 17 00:26:03.834657 kernel: tsc: Detected 2445.406 MHz processor May 17 00:26:03.834663 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved May 17 00:26:03.834669 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable May 17 00:26:03.834674 kernel: last_pfn = 0x7cfdc max_arch_pfn = 0x400000000 May 17 00:26:03.834679 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs May 17 00:26:03.834684 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT May 17 00:26:03.834689 kernel: Using GB pages for direct mapping May 17 00:26:03.834694 kernel: ACPI: Early table checksum verification disabled May 17 00:26:03.834699 kernel: ACPI: RSDP 0x00000000000F5270 000014 (v00 BOCHS ) May 17 00:26:03.834704 kernel: ACPI: RSDT 0x000000007CFE2693 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 17 00:26:03.834709 kernel: ACPI: FACP 0x000000007CFE2483 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) May 17 00:26:03.834716 kernel: ACPI: DSDT 0x000000007CFE0040 002443 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 17 00:26:03.834720 kernel: ACPI: FACS 0x000000007CFE0000 000040 May 17 00:26:03.834725 kernel: ACPI: APIC 0x000000007CFE2577 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) May 17 00:26:03.834730 kernel: ACPI: HPET 0x000000007CFE25F7 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 17 00:26:03.834735 kernel: ACPI: MCFG 0x000000007CFE262F 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) May 17 00:26:03.834740 kernel: ACPI: WAET 0x000000007CFE266B 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 17 00:26:03.834745 kernel: ACPI: Reserving FACP table memory at [mem 0x7cfe2483-0x7cfe2576] May 17 00:26:03.834750 kernel: ACPI: Reserving DSDT table memory at [mem 0x7cfe0040-0x7cfe2482] May 17 00:26:03.834759 kernel: ACPI: Reserving FACS table memory at [mem 0x7cfe0000-0x7cfe003f] May 17 00:26:03.834764 kernel: ACPI: Reserving APIC table memory at [mem 0x7cfe2577-0x7cfe25f6] May 17 00:26:03.834856 kernel: ACPI: Reserving HPET table memory at [mem 0x7cfe25f7-0x7cfe262e] May 17 00:26:03.834862 kernel: ACPI: Reserving MCFG table memory at [mem 0x7cfe262f-0x7cfe266a] May 17 00:26:03.834867 kernel: ACPI: Reserving WAET table memory at [mem 0x7cfe266b-0x7cfe2692] May 17 00:26:03.834872 kernel: No NUMA configuration found May 17 00:26:03.834880 kernel: Faking a node at [mem 0x0000000000000000-0x000000007cfdbfff] May 17 00:26:03.834885 kernel: NODE_DATA(0) allocated [mem 0x7cfd6000-0x7cfdbfff] May 17 00:26:03.834891 kernel: Zone ranges: May 17 00:26:03.834896 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] May 17 00:26:03.834902 kernel: DMA32 [mem 0x0000000001000000-0x000000007cfdbfff] May 17 00:26:03.834907 kernel: Normal empty May 17 00:26:03.834912 kernel: Movable zone start for each node May 17 00:26:03.834917 kernel: Early memory node ranges May 17 00:26:03.834933 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] May 17 00:26:03.834939 kernel: node 0: [mem 0x0000000000100000-0x000000007cfdbfff] May 17 00:26:03.834945 kernel: Initmem setup node 0 [mem 
0x0000000000001000-0x000000007cfdbfff] May 17 00:26:03.834951 kernel: On node 0, zone DMA: 1 pages in unavailable ranges May 17 00:26:03.834956 kernel: On node 0, zone DMA: 97 pages in unavailable ranges May 17 00:26:03.834961 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges May 17 00:26:03.834966 kernel: ACPI: PM-Timer IO Port: 0x608 May 17 00:26:03.834972 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) May 17 00:26:03.834977 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 May 17 00:26:03.834982 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) May 17 00:26:03.834988 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) May 17 00:26:03.834994 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) May 17 00:26:03.834999 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) May 17 00:26:03.835005 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) May 17 00:26:03.835010 kernel: ACPI: Using ACPI (MADT) for SMP configuration information May 17 00:26:03.835015 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 May 17 00:26:03.835021 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs May 17 00:26:03.835026 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() May 17 00:26:03.835031 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices May 17 00:26:03.835036 kernel: Booting paravirtualized kernel on KVM May 17 00:26:03.835043 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns May 17 00:26:03.835049 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 May 17 00:26:03.835054 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576 May 17 00:26:03.835059 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 May 17 00:26:03.835065 kernel: pcpu-alloc: [0] 0 1 May 17 00:26:03.835070 kernel: kvm-guest: PV spinlocks disabled, no host support May 17 00:26:03.835076 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=6b60288baeea1613a76a6f06a8f0e8edc178eae4857ce00eac42d48e92ed015e May 17 00:26:03.835082 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 17 00:26:03.835088 kernel: random: crng init done May 17 00:26:03.835094 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 17 00:26:03.835099 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) May 17 00:26:03.835104 kernel: Fallback order for Node 0: 0 May 17 00:26:03.835110 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 503708 May 17 00:26:03.835115 kernel: Policy zone: DMA32 May 17 00:26:03.835120 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 17 00:26:03.835126 kernel: Memory: 1922052K/2047464K available (12288K kernel code, 2295K rwdata, 22740K rodata, 42872K init, 2320K bss, 125152K reserved, 0K cma-reserved) May 17 00:26:03.835131 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 May 17 00:26:03.835137 kernel: ftrace: allocating 37948 entries in 149 pages May 17 00:26:03.835143 kernel: ftrace: allocated 149 pages with 4 groups May 17 00:26:03.835148 kernel: Dynamic Preempt: voluntary May 17 00:26:03.835153 kernel: rcu: Preemptible hierarchical RCU implementation. May 17 00:26:03.835159 kernel: rcu: RCU event tracing is enabled. May 17 00:26:03.835165 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. May 17 00:26:03.835170 kernel: Trampoline variant of Tasks RCU enabled. May 17 00:26:03.835670 kernel: Rude variant of Tasks RCU enabled. May 17 00:26:03.835680 kernel: Tracing variant of Tasks RCU enabled. May 17 00:26:03.835688 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. May 17 00:26:03.835694 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 May 17 00:26:03.835699 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 May 17 00:26:03.835704 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. May 17 00:26:03.835710 kernel: Console: colour VGA+ 80x25 May 17 00:26:03.835715 kernel: printk: console [tty0] enabled May 17 00:26:03.835721 kernel: printk: console [ttyS0] enabled May 17 00:26:03.835726 kernel: ACPI: Core revision 20230628 May 17 00:26:03.835732 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns May 17 00:26:03.835739 kernel: APIC: Switch to symmetric I/O mode setup May 17 00:26:03.835744 kernel: x2apic enabled May 17 00:26:03.835750 kernel: APIC: Switched APIC routing to: physical x2apic May 17 00:26:03.835755 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 May 17 00:26:03.835760 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized May 17 00:26:03.835780 kernel: Calibrating delay loop (skipped) preset value.. 
4890.81 BogoMIPS (lpj=2445406) May 17 00:26:03.835786 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated May 17 00:26:03.835791 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 May 17 00:26:03.835797 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 May 17 00:26:03.835808 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization May 17 00:26:03.835814 kernel: Spectre V2 : Mitigation: Retpolines May 17 00:26:03.835820 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT May 17 00:26:03.835827 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls May 17 00:26:03.835832 kernel: RETBleed: Mitigation: untrained return thunk May 17 00:26:03.835838 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier May 17 00:26:03.835844 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl May 17 00:26:03.835849 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' May 17 00:26:03.835855 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' May 17 00:26:03.835862 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' May 17 00:26:03.835867 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 May 17 00:26:03.835873 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. May 17 00:26:03.835879 kernel: Freeing SMP alternatives memory: 32K May 17 00:26:03.835885 kernel: pid_max: default: 32768 minimum: 301 May 17 00:26:03.835890 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity May 17 00:26:03.835896 kernel: landlock: Up and running. May 17 00:26:03.835902 kernel: SELinux: Initializing. May 17 00:26:03.835908 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) May 17 00:26:03.835914 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) May 17 00:26:03.835919 kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0) May 17 00:26:03.835935 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. May 17 00:26:03.835941 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. May 17 00:26:03.835946 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. May 17 00:26:03.835952 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. May 17 00:26:03.835958 kernel: ... version: 0 May 17 00:26:03.835965 kernel: ... bit width: 48 May 17 00:26:03.835970 kernel: ... generic registers: 6 May 17 00:26:03.835976 kernel: ... value mask: 0000ffffffffffff May 17 00:26:03.835981 kernel: ... max period: 00007fffffffffff May 17 00:26:03.835987 kernel: ... fixed-purpose events: 0 May 17 00:26:03.835992 kernel: ... event mask: 000000000000003f May 17 00:26:03.835998 kernel: signal: max sigframe size: 1776 May 17 00:26:03.836003 kernel: rcu: Hierarchical SRCU implementation. May 17 00:26:03.836009 kernel: rcu: Max phase no-delay instances is 400. May 17 00:26:03.836016 kernel: smp: Bringing up secondary CPUs ... May 17 00:26:03.836021 kernel: smpboot: x86: Booting SMP configuration: May 17 00:26:03.836027 kernel: .... 
node #0, CPUs: #1 May 17 00:26:03.836032 kernel: smp: Brought up 1 node, 2 CPUs May 17 00:26:03.836038 kernel: smpboot: Max logical packages: 1 May 17 00:26:03.836043 kernel: smpboot: Total of 2 processors activated (9781.62 BogoMIPS) May 17 00:26:03.836049 kernel: devtmpfs: initialized May 17 00:26:03.836054 kernel: x86/mm: Memory block size: 128MB May 17 00:26:03.836060 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 17 00:26:03.836066 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) May 17 00:26:03.836073 kernel: pinctrl core: initialized pinctrl subsystem May 17 00:26:03.836078 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 17 00:26:03.836084 kernel: audit: initializing netlink subsys (disabled) May 17 00:26:03.836089 kernel: audit: type=2000 audit(1747441563.306:1): state=initialized audit_enabled=0 res=1 May 17 00:26:03.836095 kernel: thermal_sys: Registered thermal governor 'step_wise' May 17 00:26:03.836100 kernel: thermal_sys: Registered thermal governor 'user_space' May 17 00:26:03.836106 kernel: cpuidle: using governor menu May 17 00:26:03.836111 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 17 00:26:03.836117 kernel: dca service started, version 1.12.1 May 17 00:26:03.836124 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) May 17 00:26:03.836129 kernel: PCI: Using configuration type 1 for base access May 17 00:26:03.836135 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. May 17 00:26:03.836141 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages May 17 00:26:03.836146 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page May 17 00:26:03.836152 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages May 17 00:26:03.836157 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page May 17 00:26:03.836163 kernel: ACPI: Added _OSI(Module Device) May 17 00:26:03.836168 kernel: ACPI: Added _OSI(Processor Device) May 17 00:26:03.836175 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 17 00:26:03.836181 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 17 00:26:03.836186 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 17 00:26:03.836192 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC May 17 00:26:03.836197 kernel: ACPI: Interpreter enabled May 17 00:26:03.836203 kernel: ACPI: PM: (supports S0 S5) May 17 00:26:03.836208 kernel: ACPI: Using IOAPIC for interrupt routing May 17 00:26:03.836214 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug May 17 00:26:03.836220 kernel: PCI: Using E820 reservations for host bridge windows May 17 00:26:03.836227 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F May 17 00:26:03.836232 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) May 17 00:26:03.836345 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 17 00:26:03.836416 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] May 17 00:26:03.836479 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] May 17 00:26:03.836487 kernel: PCI host bridge to bus 0000:00 May 17 00:26:03.836585 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] May 17 00:26:03.836667 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] May 17 
00:26:03.836727 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] May 17 00:26:03.838849 kernel: pci_bus 0000:00: root bus resource [mem 0x7d000000-0xafffffff window] May 17 00:26:03.838917 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] May 17 00:26:03.839003 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] May 17 00:26:03.839060 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] May 17 00:26:03.839184 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 May 17 00:26:03.839266 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x030000 May 17 00:26:03.839332 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfb800000-0xfbffffff pref] May 17 00:26:03.839393 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfd200000-0xfd203fff 64bit pref] May 17 00:26:03.839453 kernel: pci 0000:00:01.0: reg 0x20: [mem 0xfea10000-0xfea10fff] May 17 00:26:03.839515 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea00000-0xfea0ffff pref] May 17 00:26:03.839576 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] May 17 00:26:03.839650 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 May 17 00:26:03.839712 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea11000-0xfea11fff] May 17 00:26:03.839796 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 May 17 00:26:03.839863 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea12000-0xfea12fff] May 17 00:26:03.839941 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 May 17 00:26:03.840006 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea13000-0xfea13fff] May 17 00:26:03.840078 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 May 17 00:26:03.840148 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea14000-0xfea14fff] May 17 00:26:03.840215 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 May 17 00:26:03.840276 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea15000-0xfea15fff] May 17 00:26:03.840344 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 May 17 00:26:03.840404 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea16000-0xfea16fff] May 17 00:26:03.841013 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 May 17 00:26:03.841085 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea17000-0xfea17fff] May 17 00:26:03.841170 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 May 17 00:26:03.841234 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea18000-0xfea18fff] May 17 00:26:03.841301 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 May 17 00:26:03.841362 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfea19000-0xfea19fff] May 17 00:26:03.841432 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 May 17 00:26:03.841493 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO May 17 00:26:03.841559 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 May 17 00:26:03.841620 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc040-0xc05f] May 17 00:26:03.841680 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea1a000-0xfea1afff] May 17 00:26:03.841746 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 May 17 00:26:03.841828 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] May 17 00:26:03.841907 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000 May 17 00:26:03.841985 kernel: pci 0000:01:00.0: reg 0x14: [mem 0xfe880000-0xfe880fff] May 17 00:26:03.842049 kernel: pci 0000:01:00.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref] May 17 
00:26:03.842112 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfe800000-0xfe87ffff pref] May 17 00:26:03.842498 kernel: pci 0000:00:02.0: PCI bridge to [bus 01] May 17 00:26:03.842588 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff] May 17 00:26:03.842652 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref] May 17 00:26:03.842728 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330 May 17 00:26:03.844845 kernel: pci 0000:02:00.0: reg 0x10: [mem 0xfe600000-0xfe603fff 64bit] May 17 00:26:03.844920 kernel: pci 0000:00:02.1: PCI bridge to [bus 02] May 17 00:26:03.845000 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff] May 17 00:26:03.845062 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] May 17 00:26:03.845133 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000 May 17 00:26:03.845203 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfe400000-0xfe400fff] May 17 00:26:03.845267 kernel: pci 0000:03:00.0: reg 0x20: [mem 0xfcc00000-0xfcc03fff 64bit pref] May 17 00:26:03.845328 kernel: pci 0000:00:02.2: PCI bridge to [bus 03] May 17 00:26:03.845388 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff] May 17 00:26:03.845497 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] May 17 00:26:03.845570 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00 May 17 00:26:03.845633 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref] May 17 00:26:03.845697 kernel: pci 0000:00:02.3: PCI bridge to [bus 04] May 17 00:26:03.845757 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff] May 17 00:26:03.845848 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] May 17 00:26:03.845919 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00 May 17 00:26:03.845997 kernel: pci 0000:05:00.0: reg 0x14: [mem 0xfe000000-0xfe000fff] May 17 00:26:03.846059 kernel: pci 0000:05:00.0: reg 0x20: [mem 0xfc800000-0xfc803fff 64bit pref] May 17 00:26:03.846119 kernel: pci 0000:00:02.4: PCI bridge to [bus 05] May 17 00:26:03.846178 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff] May 17 00:26:03.846242 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] May 17 00:26:03.846309 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000 May 17 00:26:03.846502 kernel: pci 0000:06:00.0: reg 0x14: [mem 0xfde00000-0xfde00fff] May 17 00:26:03.846571 kernel: pci 0000:06:00.0: reg 0x20: [mem 0xfc600000-0xfc603fff 64bit pref] May 17 00:26:03.846654 kernel: pci 0000:00:02.5: PCI bridge to [bus 06] May 17 00:26:03.846760 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff] May 17 00:26:03.847663 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] May 17 00:26:03.847677 kernel: acpiphp: Slot [0] registered May 17 00:26:03.847751 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000 May 17 00:26:03.847864 kernel: pci 0000:07:00.0: reg 0x14: [mem 0xfdc80000-0xfdc80fff] May 17 00:26:03.847942 kernel: pci 0000:07:00.0: reg 0x20: [mem 0xfc400000-0xfc403fff 64bit pref] May 17 00:26:03.848007 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfdc00000-0xfdc7ffff pref] May 17 00:26:03.848067 kernel: pci 0000:00:02.6: PCI bridge to [bus 07] May 17 00:26:03.848127 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff] May 17 00:26:03.848186 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] May 17 00:26:03.848197 
kernel: acpiphp: Slot [0-2] registered May 17 00:26:03.848258 kernel: pci 0000:00:02.7: PCI bridge to [bus 08] May 17 00:26:03.848317 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff] May 17 00:26:03.848378 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] May 17 00:26:03.848386 kernel: acpiphp: Slot [0-3] registered May 17 00:26:03.848451 kernel: pci 0000:00:03.0: PCI bridge to [bus 09] May 17 00:26:03.848510 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff] May 17 00:26:03.848570 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] May 17 00:26:03.848581 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 May 17 00:26:03.848587 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 May 17 00:26:03.848593 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 May 17 00:26:03.848598 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 May 17 00:26:03.848604 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 May 17 00:26:03.848609 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 May 17 00:26:03.848615 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 May 17 00:26:03.848621 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 May 17 00:26:03.848626 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 May 17 00:26:03.848634 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 May 17 00:26:03.848639 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 May 17 00:26:03.848645 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 May 17 00:26:03.848650 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 May 17 00:26:03.848656 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 May 17 00:26:03.848662 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 May 17 00:26:03.848667 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 May 17 00:26:03.848673 kernel: iommu: Default domain type: Translated May 17 00:26:03.848678 kernel: iommu: DMA domain TLB invalidation policy: lazy mode May 17 00:26:03.848685 kernel: PCI: Using ACPI for IRQ routing May 17 00:26:03.848691 kernel: PCI: pci_cache_line_size set to 64 bytes May 17 00:26:03.848696 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] May 17 00:26:03.848702 kernel: e820: reserve RAM buffer [mem 0x7cfdc000-0x7fffffff] May 17 00:26:03.848762 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device May 17 00:26:03.849883 kernel: pci 0000:00:01.0: vgaarb: bridge control possible May 17 00:26:03.849960 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none May 17 00:26:03.849970 kernel: vgaarb: loaded May 17 00:26:03.849976 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 May 17 00:26:03.849986 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter May 17 00:26:03.849992 kernel: clocksource: Switched to clocksource kvm-clock May 17 00:26:03.849998 kernel: VFS: Disk quotas dquot_6.6.0 May 17 00:26:03.850004 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 17 00:26:03.850009 kernel: pnp: PnP ACPI init May 17 00:26:03.850077 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved May 17 00:26:03.850087 kernel: pnp: PnP ACPI: found 5 devices May 17 00:26:03.850093 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns May 17 00:26:03.850101 kernel: NET: Registered PF_INET protocol family May 17 
00:26:03.850107 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) May 17 00:26:03.850112 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) May 17 00:26:03.850118 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 17 00:26:03.850123 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) May 17 00:26:03.850129 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) May 17 00:26:03.850135 kernel: TCP: Hash tables configured (established 16384 bind 16384) May 17 00:26:03.850140 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) May 17 00:26:03.850146 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) May 17 00:26:03.850154 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 17 00:26:03.850159 kernel: NET: Registered PF_XDP protocol family May 17 00:26:03.850226 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 May 17 00:26:03.850288 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 May 17 00:26:03.850348 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 May 17 00:26:03.850407 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x1000-0x1fff] May 17 00:26:03.850468 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x2000-0x2fff] May 17 00:26:03.850533 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x3000-0x3fff] May 17 00:26:03.850592 kernel: pci 0000:00:02.0: PCI bridge to [bus 01] May 17 00:26:03.850652 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff] May 17 00:26:03.850720 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref] May 17 00:26:03.853821 kernel: pci 0000:00:02.1: PCI bridge to [bus 02] May 17 00:26:03.853901 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff] May 17 00:26:03.853981 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] May 17 00:26:03.854045 kernel: pci 0000:00:02.2: PCI bridge to [bus 03] May 17 00:26:03.854112 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff] May 17 00:26:03.854172 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] May 17 00:26:03.854235 kernel: pci 0000:00:02.3: PCI bridge to [bus 04] May 17 00:26:03.854297 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff] May 17 00:26:03.854357 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] May 17 00:26:03.854419 kernel: pci 0000:00:02.4: PCI bridge to [bus 05] May 17 00:26:03.854480 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff] May 17 00:26:03.854545 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] May 17 00:26:03.854618 kernel: pci 0000:00:02.5: PCI bridge to [bus 06] May 17 00:26:03.854679 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff] May 17 00:26:03.854740 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] May 17 00:26:03.855844 kernel: pci 0000:00:02.6: PCI bridge to [bus 07] May 17 00:26:03.855916 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x1fff] May 17 00:26:03.855992 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff] May 17 00:26:03.856053 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] May 17 00:26:03.856113 kernel: pci 0000:00:02.7: PCI bridge to [bus 08] May 17 00:26:03.856173 
kernel: pci 0000:00:02.7: bridge window [io 0x2000-0x2fff] May 17 00:26:03.856238 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff] May 17 00:26:03.856298 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] May 17 00:26:03.856357 kernel: pci 0000:00:03.0: PCI bridge to [bus 09] May 17 00:26:03.856417 kernel: pci 0000:00:03.0: bridge window [io 0x3000-0x3fff] May 17 00:26:03.856477 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff] May 17 00:26:03.856543 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] May 17 00:26:03.856600 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] May 17 00:26:03.856653 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] May 17 00:26:03.856705 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] May 17 00:26:03.856757 kernel: pci_bus 0000:00: resource 7 [mem 0x7d000000-0xafffffff window] May 17 00:26:03.857902 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] May 17 00:26:03.857974 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] May 17 00:26:03.858041 kernel: pci_bus 0000:01: resource 1 [mem 0xfe800000-0xfe9fffff] May 17 00:26:03.858099 kernel: pci_bus 0000:01: resource 2 [mem 0xfd000000-0xfd1fffff 64bit pref] May 17 00:26:03.858161 kernel: pci_bus 0000:02: resource 1 [mem 0xfe600000-0xfe7fffff] May 17 00:26:03.858217 kernel: pci_bus 0000:02: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref] May 17 00:26:03.858282 kernel: pci_bus 0000:03: resource 1 [mem 0xfe400000-0xfe5fffff] May 17 00:26:03.858343 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref] May 17 00:26:03.858404 kernel: pci_bus 0000:04: resource 1 [mem 0xfe200000-0xfe3fffff] May 17 00:26:03.858460 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref] May 17 00:26:03.858521 kernel: pci_bus 0000:05: resource 1 [mem 0xfe000000-0xfe1fffff] May 17 00:26:03.858578 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref] May 17 00:26:03.858640 kernel: pci_bus 0000:06: resource 1 [mem 0xfde00000-0xfdffffff] May 17 00:26:03.858701 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref] May 17 00:26:03.858761 kernel: pci_bus 0000:07: resource 0 [io 0x1000-0x1fff] May 17 00:26:03.858853 kernel: pci_bus 0000:07: resource 1 [mem 0xfdc00000-0xfddfffff] May 17 00:26:03.858911 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref] May 17 00:26:03.858987 kernel: pci_bus 0000:08: resource 0 [io 0x2000-0x2fff] May 17 00:26:03.859045 kernel: pci_bus 0000:08: resource 1 [mem 0xfda00000-0xfdbfffff] May 17 00:26:03.859106 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref] May 17 00:26:03.859167 kernel: pci_bus 0000:09: resource 0 [io 0x3000-0x3fff] May 17 00:26:03.859224 kernel: pci_bus 0000:09: resource 1 [mem 0xfd800000-0xfd9fffff] May 17 00:26:03.859280 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref] May 17 00:26:03.859289 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 May 17 00:26:03.859296 kernel: PCI: CLS 0 bytes, default 64 May 17 00:26:03.859303 kernel: Initialise system trusted keyrings May 17 00:26:03.859309 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 May 17 00:26:03.859317 kernel: Key type asymmetric registered May 17 00:26:03.859324 kernel: Asymmetric key parser 'x509' registered May 17 00:26:03.859330 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded 
(major 251) May 17 00:26:03.859336 kernel: io scheduler mq-deadline registered May 17 00:26:03.859343 kernel: io scheduler kyber registered May 17 00:26:03.859349 kernel: io scheduler bfq registered May 17 00:26:03.859413 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 May 17 00:26:03.859543 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 May 17 00:26:03.859628 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 May 17 00:26:03.859697 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 May 17 00:26:03.861802 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 May 17 00:26:03.861902 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 May 17 00:26:03.861993 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 May 17 00:26:03.862070 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 May 17 00:26:03.862132 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 May 17 00:26:03.862192 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 May 17 00:26:03.862281 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 May 17 00:26:03.862354 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 May 17 00:26:03.862417 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 May 17 00:26:03.862479 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 May 17 00:26:03.862538 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 May 17 00:26:03.862597 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 May 17 00:26:03.862607 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 May 17 00:26:03.862665 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 32 May 17 00:26:03.862724 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 32 May 17 00:26:03.862732 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 May 17 00:26:03.862741 kernel: ACPI: \_SB_.GSIF: Enabled at IRQ 21 May 17 00:26:03.862747 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 17 00:26:03.862753 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A May 17 00:26:03.862759 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 May 17 00:26:03.862780 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 May 17 00:26:03.862787 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 May 17 00:26:03.862856 kernel: rtc_cmos 00:03: RTC can wake from S4 May 17 00:26:03.862916 kernel: rtc_cmos 00:03: registered as rtc0 May 17 00:26:03.862989 kernel: rtc_cmos 00:03: setting system clock to 2025-05-17T00:26:03 UTC (1747441563) May 17 00:26:03.863046 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs May 17 00:26:03.863055 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled May 17 00:26:03.863061 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 May 17 00:26:03.863067 kernel: NET: Registered PF_INET6 protocol family May 17 00:26:03.863073 kernel: Segment Routing with IPv6 May 17 00:26:03.863080 kernel: In-situ OAM (IOAM) with IPv6 May 17 00:26:03.863086 kernel: NET: Registered PF_PACKET protocol family May 17 00:26:03.863094 kernel: Key type dns_resolver registered May 17 00:26:03.863100 kernel: IPI shorthand broadcast: enabled May 17 00:26:03.863106 kernel: sched_clock: Marking stable (1031006111, 134869590)->(1171352646, -5476945) May 17 00:26:03.863112 kernel: registered taskstats version 1 May 17 00:26:03.863118 kernel: Loading compiled-in X.509 certificates May 17 00:26:03.863124 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module 
signing key for 6.6.90-flatcar: 85b8d1234ceca483cb3defc2030d93f7792663c9' May 17 00:26:03.863130 kernel: Key type .fscrypt registered May 17 00:26:03.863136 kernel: Key type fscrypt-provisioning registered May 17 00:26:03.863142 kernel: ima: No TPM chip found, activating TPM-bypass! May 17 00:26:03.863150 kernel: ima: Allocated hash algorithm: sha1 May 17 00:26:03.863156 kernel: ima: No architecture policies found May 17 00:26:03.863162 kernel: clk: Disabling unused clocks May 17 00:26:03.863168 kernel: Freeing unused kernel image (initmem) memory: 42872K May 17 00:26:03.863174 kernel: Write protecting the kernel read-only data: 36864k May 17 00:26:03.863180 kernel: Freeing unused kernel image (rodata/data gap) memory: 1836K May 17 00:26:03.863186 kernel: Run /init as init process May 17 00:26:03.863192 kernel: with arguments: May 17 00:26:03.863198 kernel: /init May 17 00:26:03.863205 kernel: with environment: May 17 00:26:03.863211 kernel: HOME=/ May 17 00:26:03.863217 kernel: TERM=linux May 17 00:26:03.863223 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 17 00:26:03.863231 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) May 17 00:26:03.863239 systemd[1]: Detected virtualization kvm. May 17 00:26:03.863245 systemd[1]: Detected architecture x86-64. May 17 00:26:03.863252 systemd[1]: Running in initrd. May 17 00:26:03.863259 systemd[1]: No hostname configured, using default hostname. May 17 00:26:03.863265 systemd[1]: Hostname set to . May 17 00:26:03.863272 systemd[1]: Initializing machine ID from VM UUID. May 17 00:26:03.863278 systemd[1]: Queued start job for default target initrd.target. May 17 00:26:03.863284 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 17 00:26:03.863291 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 17 00:26:03.863298 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 17 00:26:03.863304 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 17 00:26:03.863312 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 17 00:26:03.863319 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 17 00:26:03.863326 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 17 00:26:03.863332 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 17 00:26:03.863339 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 17 00:26:03.863345 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 17 00:26:03.863353 systemd[1]: Reached target paths.target - Path Units. May 17 00:26:03.863359 systemd[1]: Reached target slices.target - Slice Units. May 17 00:26:03.863365 systemd[1]: Reached target swap.target - Swaps. May 17 00:26:03.863371 systemd[1]: Reached target timers.target - Timer Units. May 17 00:26:03.863378 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. 
May 17 00:26:03.863384 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 17 00:26:03.863391 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 17 00:26:03.863397 systemd[1]: Listening on systemd-journald.socket - Journal Socket. May 17 00:26:03.863403 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 17 00:26:03.863411 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 17 00:26:03.863417 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 17 00:26:03.863423 systemd[1]: Reached target sockets.target - Socket Units. May 17 00:26:03.863430 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 17 00:26:03.863436 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 17 00:26:03.863442 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 17 00:26:03.863449 systemd[1]: Starting systemd-fsck-usr.service... May 17 00:26:03.863455 systemd[1]: Starting systemd-journald.service - Journal Service... May 17 00:26:03.863461 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 17 00:26:03.863469 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 17 00:26:03.863488 systemd-journald[187]: Collecting audit messages is disabled. May 17 00:26:03.863505 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 17 00:26:03.863512 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 17 00:26:03.863520 systemd[1]: Finished systemd-fsck-usr.service. May 17 00:26:03.863527 systemd-journald[187]: Journal started May 17 00:26:03.863544 systemd-journald[187]: Runtime Journal (/run/log/journal/bc534e52f2d94b429a7d2f053204bf01) is 4.8M, max 38.4M, 33.6M free. May 17 00:26:03.867739 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 17 00:26:03.867807 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 17 00:26:03.867828 kernel: Bridge firewalling registered May 17 00:26:03.846630 systemd-modules-load[188]: Inserted module 'overlay' May 17 00:26:03.867244 systemd-modules-load[188]: Inserted module 'br_netfilter' May 17 00:26:03.910785 systemd[1]: Started systemd-journald.service - Journal Service. May 17 00:26:03.910828 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 17 00:26:03.912113 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 17 00:26:03.913321 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 17 00:26:03.918910 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 17 00:26:03.922871 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 17 00:26:03.924307 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 17 00:26:03.926886 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 17 00:26:03.934398 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 17 00:26:03.937087 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
May 17 00:26:03.944910 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 17 00:26:03.946255 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 17 00:26:03.946822 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 17 00:26:03.952017 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 17 00:26:03.955493 dracut-cmdline[218]: dracut-dracut-053 May 17 00:26:03.957890 dracut-cmdline[218]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=6b60288baeea1613a76a6f06a8f0e8edc178eae4857ce00eac42d48e92ed015e May 17 00:26:03.989091 systemd-resolved[225]: Positive Trust Anchors: May 17 00:26:03.989107 systemd-resolved[225]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 17 00:26:03.989142 systemd-resolved[225]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 17 00:26:03.997779 systemd-resolved[225]: Defaulting to hostname 'linux'. May 17 00:26:03.998551 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 17 00:26:03.999265 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 17 00:26:04.007898 kernel: SCSI subsystem initialized May 17 00:26:04.015796 kernel: Loading iSCSI transport class v2.0-870. May 17 00:26:04.024805 kernel: iscsi: registered transport (tcp) May 17 00:26:04.040017 kernel: iscsi: registered transport (qla4xxx) May 17 00:26:04.040045 kernel: QLogic iSCSI HBA Driver May 17 00:26:04.070649 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 17 00:26:04.074899 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 17 00:26:04.094685 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 17 00:26:04.094724 kernel: device-mapper: uevent: version 1.0.3 May 17 00:26:04.096315 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com May 17 00:26:04.133818 kernel: raid6: avx2x4 gen() 31297 MB/s May 17 00:26:04.148799 kernel: raid6: avx2x2 gen() 29087 MB/s May 17 00:26:04.165933 kernel: raid6: avx2x1 gen() 24385 MB/s May 17 00:26:04.165972 kernel: raid6: using algorithm avx2x4 gen() 31297 MB/s May 17 00:26:04.184037 kernel: raid6: .... xor() 4718 MB/s, rmw enabled May 17 00:26:04.184080 kernel: raid6: using avx2x2 recovery algorithm May 17 00:26:04.200796 kernel: xor: automatically using best checksumming function avx May 17 00:26:04.332815 kernel: Btrfs loaded, zoned=no, fsverity=no May 17 00:26:04.341696 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. 
May 17 00:26:04.350902 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 17 00:26:04.362090 systemd-udevd[405]: Using default interface naming scheme 'v255'. May 17 00:26:04.365298 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 17 00:26:04.372882 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 17 00:26:04.382600 dracut-pre-trigger[410]: rd.md=0: removing MD RAID activation May 17 00:26:04.402549 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 17 00:26:04.412912 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 17 00:26:04.446293 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 17 00:26:04.452900 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 17 00:26:04.465708 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 17 00:26:04.467220 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 17 00:26:04.468513 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 17 00:26:04.469743 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 17 00:26:04.476043 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 17 00:26:04.485805 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 17 00:26:04.513801 kernel: scsi host0: Virtio SCSI HBA May 17 00:26:04.516816 kernel: cryptd: max_cpu_qlen set to 1000 May 17 00:26:04.523484 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 May 17 00:26:04.533102 kernel: AVX2 version of gcm_enc/dec engaged. May 17 00:26:04.533147 kernel: AES CTR mode by8 optimization enabled May 17 00:26:04.533724 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 17 00:26:04.535304 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 17 00:26:04.538089 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 17 00:26:04.557485 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 17 00:26:04.557912 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 17 00:26:04.561609 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 17 00:26:04.570098 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 17 00:26:04.572044 kernel: ACPI: bus type USB registered May 17 00:26:04.572063 kernel: usbcore: registered new interface driver usbfs May 17 00:26:04.572071 kernel: usbcore: registered new interface driver hub May 17 00:26:04.572079 kernel: usbcore: registered new device driver usb May 17 00:26:04.592862 kernel: libata version 3.00 loaded. 
May 17 00:26:04.611806 kernel: ahci 0000:00:1f.2: version 3.0 May 17 00:26:04.612009 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 May 17 00:26:04.615816 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode May 17 00:26:04.615943 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only May 17 00:26:04.617785 kernel: scsi host1: ahci May 17 00:26:04.624819 kernel: scsi host2: ahci May 17 00:26:04.627801 kernel: scsi host3: ahci May 17 00:26:04.627911 kernel: scsi host4: ahci May 17 00:26:04.628010 kernel: scsi host5: ahci May 17 00:26:04.628088 kernel: scsi host6: ahci May 17 00:26:04.628160 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a100 irq 48 May 17 00:26:04.628168 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a180 irq 48 May 17 00:26:04.628176 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a200 irq 48 May 17 00:26:04.628183 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a280 irq 48 May 17 00:26:04.628190 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a300 irq 48 May 17 00:26:04.628196 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a380 irq 48 May 17 00:26:04.635813 kernel: sd 0:0:0:0: Power-on or device reset occurred May 17 00:26:04.635995 kernel: sd 0:0:0:0: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) May 17 00:26:04.636090 kernel: sd 0:0:0:0: [sda] Write Protect is off May 17 00:26:04.636170 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08 May 17 00:26:04.636246 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA May 17 00:26:04.637793 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 17 00:26:04.637814 kernel: GPT:17805311 != 80003071 May 17 00:26:04.637823 kernel: GPT:Alternate GPT header not at the end of the disk. May 17 00:26:04.637837 kernel: GPT:17805311 != 80003071 May 17 00:26:04.637844 kernel: GPT: Use GNU Parted to correct GPT errors. May 17 00:26:04.637851 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 17 00:26:04.637859 kernel: sd 0:0:0:0: [sda] Attached SCSI disk May 17 00:26:04.672836 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 17 00:26:04.678019 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 17 00:26:04.691857 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
May 17 00:26:04.940178 kernel: ata2: SATA link down (SStatus 0 SControl 300) May 17 00:26:04.940256 kernel: ata3: SATA link down (SStatus 0 SControl 300) May 17 00:26:04.940265 kernel: ata4: SATA link down (SStatus 0 SControl 300) May 17 00:26:04.940273 kernel: ata6: SATA link down (SStatus 0 SControl 300) May 17 00:26:04.940281 kernel: ata5: SATA link down (SStatus 0 SControl 300) May 17 00:26:04.940300 kernel: ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300) May 17 00:26:04.944251 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 May 17 00:26:04.944311 kernel: ata1.00: applying bridge limits May 17 00:26:04.944322 kernel: ata1.00: configured for UDMA/100 May 17 00:26:04.946011 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 May 17 00:26:04.966832 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller May 17 00:26:04.967002 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 May 17 00:26:04.969131 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 May 17 00:26:04.972291 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller May 17 00:26:04.972411 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 May 17 00:26:04.973722 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed May 17 00:26:04.975807 kernel: hub 1-0:1.0: USB hub found May 17 00:26:04.975940 kernel: hub 1-0:1.0: 4 ports detected May 17 00:26:04.977823 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. May 17 00:26:04.981130 kernel: hub 2-0:1.0: USB hub found May 17 00:26:04.981251 kernel: hub 2-0:1.0: 4 ports detected May 17 00:26:04.996061 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray May 17 00:26:04.996210 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 May 17 00:26:05.000621 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. May 17 00:26:05.004796 kernel: BTRFS: device fsid 7f88d479-6686-439c-8052-b96f0a9d77bc devid 1 transid 38 /dev/sda3 scanned by (udev-worker) (451) May 17 00:26:05.009781 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (462) May 17 00:26:05.013362 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0 May 17 00:26:05.010348 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. May 17 00:26:05.017409 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. May 17 00:26:05.018565 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. May 17 00:26:05.023290 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. May 17 00:26:05.031956 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 17 00:26:05.035654 disk-uuid[572]: Primary Header is updated. May 17 00:26:05.035654 disk-uuid[572]: Secondary Entries is updated. May 17 00:26:05.035654 disk-uuid[572]: Secondary Header is updated. 
May 17 00:26:05.040836 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 17 00:26:05.046795 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 17 00:26:05.051799 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 17 00:26:05.219790 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd May 17 00:26:05.361824 kernel: hid: raw HID events driver (C) Jiri Kosina May 17 00:26:05.370844 kernel: usbcore: registered new interface driver usbhid May 17 00:26:05.370906 kernel: usbhid: USB HID core driver May 17 00:26:05.380887 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input3 May 17 00:26:05.380951 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 May 17 00:26:06.055875 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 17 00:26:06.057649 disk-uuid[573]: The operation has completed successfully. May 17 00:26:06.120573 systemd[1]: disk-uuid.service: Deactivated successfully. May 17 00:26:06.120689 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 17 00:26:06.132861 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 17 00:26:06.136159 sh[595]: Success May 17 00:26:06.146837 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" May 17 00:26:06.196231 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 17 00:26:06.199228 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 17 00:26:06.200792 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. May 17 00:26:06.220459 kernel: BTRFS info (device dm-0): first mount of filesystem 7f88d479-6686-439c-8052-b96f0a9d77bc May 17 00:26:06.220498 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm May 17 00:26:06.222875 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead May 17 00:26:06.226813 kernel: BTRFS info (device dm-0): disabling log replay at mount time May 17 00:26:06.226846 kernel: BTRFS info (device dm-0): using free space tree May 17 00:26:06.238814 kernel: BTRFS info (device dm-0): enabling ssd optimizations May 17 00:26:06.241675 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 17 00:26:06.244406 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 17 00:26:06.253947 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 17 00:26:06.256300 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... May 17 00:26:06.275281 kernel: BTRFS info (device sda6): first mount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac May 17 00:26:06.275328 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm May 17 00:26:06.275342 kernel: BTRFS info (device sda6): using free space tree May 17 00:26:06.282961 kernel: BTRFS info (device sda6): enabling ssd optimizations May 17 00:26:06.283008 kernel: BTRFS info (device sda6): auto enabling async discard May 17 00:26:06.292599 systemd[1]: mnt-oem.mount: Deactivated successfully. May 17 00:26:06.295788 kernel: BTRFS info (device sda6): last unmount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac May 17 00:26:06.298927 systemd[1]: Finished ignition-setup.service - Ignition (setup). 
May 17 00:26:06.304280 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 17 00:26:06.340656 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 17 00:26:06.351045 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 17 00:26:06.369105 ignition[714]: Ignition 2.19.0 May 17 00:26:06.369655 ignition[714]: Stage: fetch-offline May 17 00:26:06.369154 systemd-networkd[776]: lo: Link UP May 17 00:26:06.369690 ignition[714]: no configs at "/usr/lib/ignition/base.d" May 17 00:26:06.369158 systemd-networkd[776]: lo: Gained carrier May 17 00:26:06.369698 ignition[714]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" May 17 00:26:06.370656 systemd-networkd[776]: Enumeration completed May 17 00:26:06.370440 ignition[714]: parsed url from cmdline: "" May 17 00:26:06.370728 systemd[1]: Started systemd-networkd.service - Network Configuration. May 17 00:26:06.370841 ignition[714]: no config URL provided May 17 00:26:06.371605 systemd-networkd[776]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 17 00:26:06.370848 ignition[714]: reading system config file "/usr/lib/ignition/user.ign" May 17 00:26:06.371608 systemd-networkd[776]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 17 00:26:06.370856 ignition[714]: no config at "/usr/lib/ignition/user.ign" May 17 00:26:06.372460 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 17 00:26:06.370860 ignition[714]: failed to fetch config: resource requires networking May 17 00:26:06.372806 systemd-networkd[776]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 17 00:26:06.371013 ignition[714]: Ignition finished successfully May 17 00:26:06.372808 systemd-networkd[776]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. May 17 00:26:06.373788 systemd[1]: Reached target network.target - Network. May 17 00:26:06.373838 systemd-networkd[776]: eth0: Link UP May 17 00:26:06.373840 systemd-networkd[776]: eth0: Gained carrier May 17 00:26:06.374031 systemd-networkd[776]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 17 00:26:06.378476 systemd-networkd[776]: eth1: Link UP May 17 00:26:06.378479 systemd-networkd[776]: eth1: Gained carrier May 17 00:26:06.378484 systemd-networkd[776]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 17 00:26:06.388141 ignition[784]: Ignition 2.19.0 May 17 00:26:06.378884 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
May 17 00:26:06.388146 ignition[784]: Stage: fetch May 17 00:26:06.388294 ignition[784]: no configs at "/usr/lib/ignition/base.d" May 17 00:26:06.388302 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" May 17 00:26:06.388369 ignition[784]: parsed url from cmdline: "" May 17 00:26:06.388372 ignition[784]: no config URL provided May 17 00:26:06.388377 ignition[784]: reading system config file "/usr/lib/ignition/user.ign" May 17 00:26:06.388383 ignition[784]: no config at "/usr/lib/ignition/user.ign" May 17 00:26:06.388397 ignition[784]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 May 17 00:26:06.388629 ignition[784]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable May 17 00:26:06.407813 systemd-networkd[776]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 May 17 00:26:06.438804 systemd-networkd[776]: eth0: DHCPv4 address 135.181.90.190/32, gateway 172.31.1.1 acquired from 172.31.1.1 May 17 00:26:06.589272 ignition[784]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2 May 17 00:26:06.595250 ignition[784]: GET result: OK May 17 00:26:06.595347 ignition[784]: parsing config with SHA512: 5da5df7cd8431fe84bbdc0ed31f019ece3862b51bc70762c69bd02aed09af40a763d0462c50a0fddf62d834bc28e8ff86a76b59134d89a375247094631a19817 May 17 00:26:06.599614 unknown[784]: fetched base config from "system" May 17 00:26:06.599626 unknown[784]: fetched base config from "system" May 17 00:26:06.600096 ignition[784]: fetch: fetch complete May 17 00:26:06.599632 unknown[784]: fetched user config from "hetzner" May 17 00:26:06.600102 ignition[784]: fetch: fetch passed May 17 00:26:06.600148 ignition[784]: Ignition finished successfully May 17 00:26:06.603141 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). May 17 00:26:06.607880 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... May 17 00:26:06.619428 ignition[791]: Ignition 2.19.0 May 17 00:26:06.619440 ignition[791]: Stage: kargs May 17 00:26:06.619589 ignition[791]: no configs at "/usr/lib/ignition/base.d" May 17 00:26:06.619599 ignition[791]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" May 17 00:26:06.622348 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 17 00:26:06.620490 ignition[791]: kargs: kargs passed May 17 00:26:06.620526 ignition[791]: Ignition finished successfully May 17 00:26:06.628005 systemd[1]: Starting ignition-disks.service - Ignition (disks)... May 17 00:26:06.641275 ignition[798]: Ignition 2.19.0 May 17 00:26:06.641285 ignition[798]: Stage: disks May 17 00:26:06.641445 ignition[798]: no configs at "/usr/lib/ignition/base.d" May 17 00:26:06.641453 ignition[798]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" May 17 00:26:06.643735 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 17 00:26:06.642623 ignition[798]: disks: disks passed May 17 00:26:06.644911 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 17 00:26:06.642654 ignition[798]: Ignition finished successfully May 17 00:26:06.645803 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 17 00:26:06.646746 systemd[1]: Reached target local-fs.target - Local File Systems. May 17 00:26:06.647918 systemd[1]: Reached target sysinit.target - System Initialization. May 17 00:26:06.648920 systemd[1]: Reached target basic.target - Basic System. 
May 17 00:26:06.658894 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 17 00:26:06.670889 systemd-fsck[806]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks May 17 00:26:06.672211 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 17 00:26:06.677859 systemd[1]: Mounting sysroot.mount - /sysroot... May 17 00:26:06.744651 systemd[1]: Mounted sysroot.mount - /sysroot. May 17 00:26:06.745709 kernel: EXT4-fs (sda9): mounted filesystem 278698a4-82b6-49b4-b6df-f7999ed4e35e r/w with ordered data mode. Quota mode: none. May 17 00:26:06.745460 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 17 00:26:06.755825 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 17 00:26:06.757942 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 17 00:26:06.760985 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... May 17 00:26:06.762522 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 17 00:26:06.762548 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 17 00:26:06.764790 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 17 00:26:06.773683 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (814) May 17 00:26:06.773701 kernel: BTRFS info (device sda6): first mount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac May 17 00:26:06.767889 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 17 00:26:06.777618 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm May 17 00:26:06.777641 kernel: BTRFS info (device sda6): using free space tree May 17 00:26:06.783329 kernel: BTRFS info (device sda6): enabling ssd optimizations May 17 00:26:06.783351 kernel: BTRFS info (device sda6): auto enabling async discard May 17 00:26:06.787283 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 17 00:26:06.809993 coreos-metadata[816]: May 17 00:26:06.809 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 May 17 00:26:06.810923 coreos-metadata[816]: May 17 00:26:06.810 INFO Fetch successful May 17 00:26:06.812316 coreos-metadata[816]: May 17 00:26:06.811 INFO wrote hostname ci-4081-3-3-n-556bea0d1e to /sysroot/etc/hostname May 17 00:26:06.812598 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. May 17 00:26:06.814306 initrd-setup-root[842]: cut: /sysroot/etc/passwd: No such file or directory May 17 00:26:06.817616 initrd-setup-root[850]: cut: /sysroot/etc/group: No such file or directory May 17 00:26:06.821250 initrd-setup-root[857]: cut: /sysroot/etc/shadow: No such file or directory May 17 00:26:06.824131 initrd-setup-root[864]: cut: /sysroot/etc/gshadow: No such file or directory May 17 00:26:06.879909 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 17 00:26:06.888845 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 17 00:26:06.891441 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 17 00:26:06.896799 kernel: BTRFS info (device sda6): last unmount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac May 17 00:26:06.910560 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
May 17 00:26:06.911907 ignition[931]: INFO : Ignition 2.19.0 May 17 00:26:06.911907 ignition[931]: INFO : Stage: mount May 17 00:26:06.912849 ignition[931]: INFO : no configs at "/usr/lib/ignition/base.d" May 17 00:26:06.912849 ignition[931]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" May 17 00:26:06.914048 ignition[931]: INFO : mount: mount passed May 17 00:26:06.914048 ignition[931]: INFO : Ignition finished successfully May 17 00:26:06.913497 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 17 00:26:06.919845 systemd[1]: Starting ignition-files.service - Ignition (files)... May 17 00:26:07.217631 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 17 00:26:07.225965 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 17 00:26:07.238813 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (944) May 17 00:26:07.245567 kernel: BTRFS info (device sda6): first mount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac May 17 00:26:07.245613 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm May 17 00:26:07.247818 kernel: BTRFS info (device sda6): using free space tree May 17 00:26:07.257325 kernel: BTRFS info (device sda6): enabling ssd optimizations May 17 00:26:07.257372 kernel: BTRFS info (device sda6): auto enabling async discard May 17 00:26:07.263121 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 17 00:26:07.294228 ignition[960]: INFO : Ignition 2.19.0 May 17 00:26:07.295562 ignition[960]: INFO : Stage: files May 17 00:26:07.295562 ignition[960]: INFO : no configs at "/usr/lib/ignition/base.d" May 17 00:26:07.295562 ignition[960]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" May 17 00:26:07.300208 ignition[960]: DEBUG : files: compiled without relabeling support, skipping May 17 00:26:07.300208 ignition[960]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 17 00:26:07.300208 ignition[960]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 17 00:26:07.305374 ignition[960]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 17 00:26:07.305374 ignition[960]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 17 00:26:07.305374 ignition[960]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 17 00:26:07.305374 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" May 17 00:26:07.305374 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" May 17 00:26:07.305374 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 17 00:26:07.305374 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 May 17 00:26:07.303276 unknown[960]: wrote ssh authorized keys file for user: core May 17 00:26:07.646730 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK May 17 00:26:07.983961 systemd-networkd[776]: eth1: Gained IPv6LL May 17 00:26:08.112086 systemd-networkd[776]: eth0: Gained IPv6LL May 17 00:26:09.520268 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file 
"/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 17 00:26:09.522740 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" May 17 00:26:09.522740 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" May 17 00:26:09.522740 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" May 17 00:26:09.522740 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" May 17 00:26:09.522740 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 17 00:26:09.522740 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 17 00:26:09.522740 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 17 00:26:09.522740 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 17 00:26:09.522740 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" May 17 00:26:09.522740 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 17 00:26:09.522740 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" May 17 00:26:09.547994 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" May 17 00:26:09.547994 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" May 17 00:26:09.547994 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 May 17 00:26:10.356530 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK May 17 00:26:10.493917 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" May 17 00:26:10.493917 ignition[960]: INFO : files: op(c): [started] processing unit "containerd.service" May 17 00:26:10.496957 ignition[960]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" May 17 00:26:10.496957 ignition[960]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" May 17 00:26:10.496957 ignition[960]: INFO : files: op(c): [finished] processing unit "containerd.service" May 17 00:26:10.496957 ignition[960]: INFO : files: op(e): [started] processing unit "prepare-helm.service" May 17 00:26:10.496957 ignition[960]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at 
"/sysroot/etc/systemd/system/prepare-helm.service" May 17 00:26:10.496957 ignition[960]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 17 00:26:10.496957 ignition[960]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" May 17 00:26:10.496957 ignition[960]: INFO : files: op(10): [started] processing unit "coreos-metadata.service" May 17 00:26:10.496957 ignition[960]: INFO : files: op(10): op(11): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" May 17 00:26:10.496957 ignition[960]: INFO : files: op(10): op(11): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" May 17 00:26:10.496957 ignition[960]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service" May 17 00:26:10.496957 ignition[960]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" May 17 00:26:10.496957 ignition[960]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" May 17 00:26:10.496957 ignition[960]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" May 17 00:26:10.496957 ignition[960]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" May 17 00:26:10.496957 ignition[960]: INFO : files: files passed May 17 00:26:10.496957 ignition[960]: INFO : Ignition finished successfully May 17 00:26:10.497298 systemd[1]: Finished ignition-files.service - Ignition (files). May 17 00:26:10.509925 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 17 00:26:10.512894 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 17 00:26:10.515088 systemd[1]: ignition-quench.service: Deactivated successfully. May 17 00:26:10.515147 systemd[1]: Finished ignition-quench.service - Ignition (record completion). May 17 00:26:10.521334 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 17 00:26:10.521334 initrd-setup-root-after-ignition[990]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 17 00:26:10.523578 initrd-setup-root-after-ignition[994]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 17 00:26:10.524514 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 17 00:26:10.525212 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 17 00:26:10.530875 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 17 00:26:10.544155 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 17 00:26:10.544225 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 17 00:26:10.545377 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 17 00:26:10.546246 systemd[1]: Reached target initrd.target - Initrd Default Target. May 17 00:26:10.547398 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 17 00:26:10.555919 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... 
May 17 00:26:10.563264 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 17 00:26:10.565241 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 17 00:26:10.581919 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 17 00:26:10.582471 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 17 00:26:10.583557 systemd[1]: Stopped target timers.target - Timer Units. May 17 00:26:10.584510 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 17 00:26:10.584590 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 17 00:26:10.585689 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 17 00:26:10.586351 systemd[1]: Stopped target basic.target - Basic System. May 17 00:26:10.587425 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 17 00:26:10.588377 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 17 00:26:10.589266 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 17 00:26:10.590270 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 17 00:26:10.591290 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 17 00:26:10.592330 systemd[1]: Stopped target sysinit.target - System Initialization. May 17 00:26:10.593302 systemd[1]: Stopped target local-fs.target - Local File Systems. May 17 00:26:10.594320 systemd[1]: Stopped target swap.target - Swaps. May 17 00:26:10.595243 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 17 00:26:10.595322 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 17 00:26:10.596401 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 17 00:26:10.597047 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 17 00:26:10.597969 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 17 00:26:10.599836 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 17 00:26:10.600579 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 17 00:26:10.600663 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 17 00:26:10.601958 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 17 00:26:10.602087 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 17 00:26:10.602729 systemd[1]: ignition-files.service: Deactivated successfully. May 17 00:26:10.602859 systemd[1]: Stopped ignition-files.service - Ignition (files). May 17 00:26:10.603717 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. May 17 00:26:10.603860 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. May 17 00:26:10.615942 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 17 00:26:10.616374 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 17 00:26:10.616496 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 17 00:26:10.619880 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 17 00:26:10.620306 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 17 00:26:10.620430 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. 
May 17 00:26:10.620996 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 17 00:26:10.621075 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 17 00:26:10.625845 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 17 00:26:10.625916 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 17 00:26:10.634794 ignition[1014]: INFO : Ignition 2.19.0 May 17 00:26:10.634794 ignition[1014]: INFO : Stage: umount May 17 00:26:10.634794 ignition[1014]: INFO : no configs at "/usr/lib/ignition/base.d" May 17 00:26:10.634794 ignition[1014]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" May 17 00:26:10.638036 ignition[1014]: INFO : umount: umount passed May 17 00:26:10.638036 ignition[1014]: INFO : Ignition finished successfully May 17 00:26:10.636320 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 17 00:26:10.638794 systemd[1]: ignition-mount.service: Deactivated successfully. May 17 00:26:10.638864 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 17 00:26:10.639502 systemd[1]: sysroot-boot.service: Deactivated successfully. May 17 00:26:10.639561 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 17 00:26:10.640617 systemd[1]: ignition-disks.service: Deactivated successfully. May 17 00:26:10.640668 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 17 00:26:10.641606 systemd[1]: ignition-kargs.service: Deactivated successfully. May 17 00:26:10.641638 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 17 00:26:10.642479 systemd[1]: ignition-fetch.service: Deactivated successfully. May 17 00:26:10.642510 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). May 17 00:26:10.643351 systemd[1]: Stopped target network.target - Network. May 17 00:26:10.644185 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 17 00:26:10.644220 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 17 00:26:10.645129 systemd[1]: Stopped target paths.target - Path Units. May 17 00:26:10.645980 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 17 00:26:10.650828 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 17 00:26:10.651336 systemd[1]: Stopped target slices.target - Slice Units. May 17 00:26:10.652392 systemd[1]: Stopped target sockets.target - Socket Units. May 17 00:26:10.653262 systemd[1]: iscsid.socket: Deactivated successfully. May 17 00:26:10.653289 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 17 00:26:10.654121 systemd[1]: iscsiuio.socket: Deactivated successfully. May 17 00:26:10.654147 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 17 00:26:10.654974 systemd[1]: ignition-setup.service: Deactivated successfully. May 17 00:26:10.655007 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 17 00:26:10.655854 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 17 00:26:10.655897 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 17 00:26:10.656727 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 17 00:26:10.656756 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 17 00:26:10.657709 systemd[1]: Stopping systemd-networkd.service - Network Configuration... 
May 17 00:26:10.658631 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 17 00:26:10.660801 systemd-networkd[776]: eth0: DHCPv6 lease lost May 17 00:26:10.664839 systemd-networkd[776]: eth1: DHCPv6 lease lost May 17 00:26:10.665093 systemd[1]: systemd-resolved.service: Deactivated successfully. May 17 00:26:10.665174 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 17 00:26:10.667235 systemd[1]: systemd-networkd.service: Deactivated successfully. May 17 00:26:10.667426 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 17 00:26:10.668620 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 17 00:26:10.668653 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 17 00:26:10.673896 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 17 00:26:10.674320 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 17 00:26:10.674354 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 17 00:26:10.674914 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 17 00:26:10.674945 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 17 00:26:10.675826 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 17 00:26:10.675855 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 17 00:26:10.676820 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 17 00:26:10.676849 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 17 00:26:10.678012 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 17 00:26:10.684200 systemd[1]: network-cleanup.service: Deactivated successfully. May 17 00:26:10.684286 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 17 00:26:10.692214 systemd[1]: systemd-udevd.service: Deactivated successfully. May 17 00:26:10.692327 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 17 00:26:10.693383 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 17 00:26:10.693412 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 17 00:26:10.694271 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 17 00:26:10.694294 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 17 00:26:10.695230 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 17 00:26:10.695263 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 17 00:26:10.696720 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 17 00:26:10.696752 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 17 00:26:10.697778 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 17 00:26:10.697810 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 17 00:26:10.707896 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 17 00:26:10.708345 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 17 00:26:10.708382 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 17 00:26:10.708884 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
May 17 00:26:10.708915 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 17 00:26:10.711173 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 17 00:26:10.711237 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 17 00:26:10.712119 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 17 00:26:10.714900 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 17 00:26:10.721903 systemd[1]: Switching root. May 17 00:26:10.754832 systemd-journald[187]: Journal stopped May 17 00:26:11.552328 systemd-journald[187]: Received SIGTERM from PID 1 (systemd). May 17 00:26:11.552386 kernel: SELinux: policy capability network_peer_controls=1 May 17 00:26:11.552398 kernel: SELinux: policy capability open_perms=1 May 17 00:26:11.552405 kernel: SELinux: policy capability extended_socket_class=1 May 17 00:26:11.552414 kernel: SELinux: policy capability always_check_network=0 May 17 00:26:11.552422 kernel: SELinux: policy capability cgroup_seclabel=1 May 17 00:26:11.552435 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 17 00:26:11.552443 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 17 00:26:11.552450 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 17 00:26:11.552458 kernel: audit: type=1403 audit(1747441570.940:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 17 00:26:11.552466 systemd[1]: Successfully loaded SELinux policy in 39.769ms. May 17 00:26:11.552476 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 7.778ms. May 17 00:26:11.552487 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) May 17 00:26:11.552495 systemd[1]: Detected virtualization kvm. May 17 00:26:11.552503 systemd[1]: Detected architecture x86-64. May 17 00:26:11.552511 systemd[1]: Detected first boot. May 17 00:26:11.552520 systemd[1]: Hostname set to <ci-4081-3-3-n-556bea0d1e>. May 17 00:26:11.552528 systemd[1]: Initializing machine ID from VM UUID. May 17 00:26:11.552536 zram_generator::config[1077]: No configuration found. May 17 00:26:11.552547 systemd[1]: Populated /etc with preset unit settings. May 17 00:26:11.552557 systemd[1]: Queued start job for default target multi-user.target. May 17 00:26:11.552565 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. May 17 00:26:11.552576 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 17 00:26:11.552584 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 17 00:26:11.552591 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 17 00:26:11.552599 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 17 00:26:11.552607 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 17 00:26:11.552615 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 17 00:26:11.552623 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 17 00:26:11.552632 systemd[1]: Created slice user.slice - User and Session Slice. 
May 17 00:26:11.552641 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 17 00:26:11.552649 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 17 00:26:11.552657 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 17 00:26:11.552665 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 17 00:26:11.552674 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 17 00:26:11.552682 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 17 00:26:11.552691 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... May 17 00:26:11.552701 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 17 00:26:11.552709 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 17 00:26:11.552717 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 17 00:26:11.552726 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 17 00:26:11.552733 systemd[1]: Reached target slices.target - Slice Units. May 17 00:26:11.552741 systemd[1]: Reached target swap.target - Swaps. May 17 00:26:11.552749 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 17 00:26:11.552759 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 17 00:26:11.552787 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 17 00:26:11.552797 systemd[1]: Listening on systemd-journald.socket - Journal Socket. May 17 00:26:11.552805 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 17 00:26:11.552814 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 17 00:26:11.552822 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 17 00:26:11.552830 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 17 00:26:11.552838 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 17 00:26:11.552846 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 17 00:26:11.552858 systemd[1]: Mounting media.mount - External Media Directory... May 17 00:26:11.552879 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:26:11.552888 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 17 00:26:11.552896 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 17 00:26:11.552904 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 17 00:26:11.552912 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 17 00:26:11.552922 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 17 00:26:11.552930 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 17 00:26:11.552939 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 17 00:26:11.552947 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 17 00:26:11.552955 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
May 17 00:26:11.552963 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 17 00:26:11.552970 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 17 00:26:11.552978 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 17 00:26:11.552988 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 17 00:26:11.552997 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. May 17 00:26:11.553010 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) May 17 00:26:11.553018 systemd[1]: Starting systemd-journald.service - Journal Service... May 17 00:26:11.553026 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 17 00:26:11.553034 kernel: fuse: init (API version 7.39) May 17 00:26:11.553042 kernel: loop: module loaded May 17 00:26:11.553050 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 17 00:26:11.553071 systemd-journald[1178]: Collecting audit messages is disabled. May 17 00:26:11.553094 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 17 00:26:11.553103 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 17 00:26:11.553112 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:26:11.553120 systemd-journald[1178]: Journal started May 17 00:26:11.553138 systemd-journald[1178]: Runtime Journal (/run/log/journal/bc534e52f2d94b429a7d2f053204bf01) is 4.8M, max 38.4M, 33.6M free. May 17 00:26:11.569825 systemd[1]: Started systemd-journald.service - Journal Service. May 17 00:26:11.565000 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 17 00:26:11.565558 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 17 00:26:11.566082 systemd[1]: Mounted media.mount - External Media Directory. May 17 00:26:11.576539 kernel: ACPI: bus type drm_connector registered May 17 00:26:11.572472 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 17 00:26:11.572993 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 17 00:26:11.573720 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 17 00:26:11.574407 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 17 00:26:11.575321 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 17 00:26:11.575995 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 17 00:26:11.576106 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 17 00:26:11.577159 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 00:26:11.577269 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 17 00:26:11.578075 systemd[1]: modprobe@drm.service: Deactivated successfully. May 17 00:26:11.578234 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 17 00:26:11.578951 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 00:26:11.579059 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
May 17 00:26:11.579754 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 17 00:26:11.579958 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 17 00:26:11.580597 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 00:26:11.580878 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 17 00:26:11.581565 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 17 00:26:11.582396 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 17 00:26:11.583164 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 17 00:26:11.590451 systemd[1]: Reached target network-pre.target - Preparation for Network. May 17 00:26:11.594873 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 17 00:26:11.596917 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 17 00:26:11.598436 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 17 00:26:11.604899 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 17 00:26:11.607014 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 17 00:26:11.612842 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 17 00:26:11.613641 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 17 00:26:11.615015 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 17 00:26:11.621855 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 17 00:26:11.633442 systemd-journald[1178]: Time spent on flushing to /var/log/journal/bc534e52f2d94b429a7d2f053204bf01 is 22.807ms for 1117 entries. May 17 00:26:11.633442 systemd-journald[1178]: System Journal (/var/log/journal/bc534e52f2d94b429a7d2f053204bf01) is 8.0M, max 584.8M, 576.8M free. May 17 00:26:11.677428 systemd-journald[1178]: Received client request to flush runtime journal. May 17 00:26:11.626887 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 17 00:26:11.631985 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 17 00:26:11.634870 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 17 00:26:11.642657 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 17 00:26:11.650890 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... May 17 00:26:11.651853 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 17 00:26:11.653011 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 17 00:26:11.665377 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 17 00:26:11.665874 systemd-tmpfiles[1216]: ACLs are not supported, ignoring. May 17 00:26:11.665884 systemd-tmpfiles[1216]: ACLs are not supported, ignoring. May 17 00:26:11.668601 udevadm[1223]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. 
May 17 00:26:11.671704 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 17 00:26:11.679814 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 17 00:26:11.682654 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 17 00:26:11.700050 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 17 00:26:11.708906 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 17 00:26:11.718104 systemd-tmpfiles[1239]: ACLs are not supported, ignoring. May 17 00:26:11.718117 systemd-tmpfiles[1239]: ACLs are not supported, ignoring. May 17 00:26:11.721193 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 17 00:26:12.020058 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 17 00:26:12.026982 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 17 00:26:12.041719 systemd-udevd[1245]: Using default interface naming scheme 'v255'. May 17 00:26:12.061010 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 17 00:26:12.070932 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 17 00:26:12.081917 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 17 00:26:12.111019 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. May 17 00:26:12.114334 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 17 00:26:12.165353 systemd-networkd[1251]: lo: Link UP May 17 00:26:12.165597 systemd-networkd[1251]: lo: Gained carrier May 17 00:26:12.167157 systemd-networkd[1251]: Enumeration completed May 17 00:26:12.167406 systemd[1]: Started systemd-networkd.service - Network Configuration. May 17 00:26:12.167630 systemd-networkd[1251]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 17 00:26:12.167633 systemd-networkd[1251]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 17 00:26:12.168308 systemd-networkd[1251]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 17 00:26:12.168313 systemd-networkd[1251]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. May 17 00:26:12.168895 systemd-networkd[1251]: eth0: Link UP May 17 00:26:12.168898 systemd-networkd[1251]: eth0: Gained carrier May 17 00:26:12.168907 systemd-networkd[1251]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 17 00:26:12.173864 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 17 00:26:12.175997 systemd-networkd[1251]: eth1: Link UP May 17 00:26:12.176000 systemd-networkd[1251]: eth1: Gained carrier May 17 00:26:12.176011 systemd-networkd[1251]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
May 17 00:26:12.187830 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 May 17 00:26:12.194800 kernel: ACPI: button: Power Button [PWRF] May 17 00:26:12.198118 systemd-networkd[1251]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 May 17 00:26:12.198620 systemd-networkd[1251]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 17 00:26:12.200285 systemd-networkd[1251]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 17 00:26:12.212880 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. May 17 00:26:12.212898 systemd[1]: Condition check resulted in dev-vport2p1.device - /dev/vport2p1 being skipped. May 17 00:26:12.212937 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:26:12.213033 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 17 00:26:12.213784 kernel: mousedev: PS/2 mouse device common for all mice May 17 00:26:12.223581 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1261) May 17 00:26:12.222900 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 17 00:26:12.224507 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 17 00:26:12.229898 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 17 00:26:12.232461 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 17 00:26:12.232504 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 17 00:26:12.232554 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:26:12.232945 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 00:26:12.233053 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 17 00:26:12.233836 systemd-networkd[1251]: eth0: DHCPv4 address 135.181.90.190/32, gateway 172.31.1.1 acquired from 172.31.1.1 May 17 00:26:12.241026 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 00:26:12.241138 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 17 00:26:12.255318 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 00:26:12.255519 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 17 00:26:12.274800 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input5 May 17 00:26:12.282408 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 17 00:26:12.282518 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
May 17 00:26:12.296246 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt May 17 00:26:12.296416 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) May 17 00:26:12.296516 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD May 17 00:26:12.305001 kernel: EDAC MC: Ver: 3.0.0 May 17 00:26:12.307000 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 17 00:26:12.314784 kernel: [drm] pci: virtio-vga detected at 0000:00:01.0 May 17 00:26:12.316056 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. May 17 00:26:12.318829 kernel: virtio-pci 0000:00:01.0: vgaarb: deactivate vga console May 17 00:26:12.319799 kernel: Console: switching to colour dummy device 80x25 May 17 00:26:12.322249 kernel: [drm] features: -virgl +edid -resource_blob -host_visible May 17 00:26:12.322273 kernel: [drm] features: -context_init May 17 00:26:12.322299 kernel: [drm] number of scanouts: 1 May 17 00:26:12.322308 kernel: [drm] number of cap sets: 0 May 17 00:26:12.324795 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0 May 17 00:26:12.330890 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device May 17 00:26:12.330920 kernel: Console: switching to colour frame buffer device 160x50 May 17 00:26:12.340654 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device May 17 00:26:12.345311 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 17 00:26:12.345508 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 17 00:26:12.355870 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 17 00:26:12.396981 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 17 00:26:12.463939 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. May 17 00:26:12.469967 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... May 17 00:26:12.490978 lvm[1313]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 17 00:26:12.524664 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. May 17 00:26:12.525047 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 17 00:26:12.532980 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... May 17 00:26:12.539869 lvm[1316]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 17 00:26:12.574441 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. May 17 00:26:12.574651 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 17 00:26:12.574734 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 17 00:26:12.574753 systemd[1]: Reached target local-fs.target - Local File Systems. May 17 00:26:12.574848 systemd[1]: Reached target machines.target - Containers. May 17 00:26:12.576446 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). May 17 00:26:12.585917 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 17 00:26:12.586986 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... 
May 17 00:26:12.589902 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 17 00:26:12.593045 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 17 00:26:12.596895 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... May 17 00:26:12.601028 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 17 00:26:12.605118 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 17 00:26:12.615254 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 17 00:26:12.621802 kernel: loop0: detected capacity change from 0 to 142488 May 17 00:26:12.628204 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 17 00:26:12.633390 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. May 17 00:26:12.655803 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 17 00:26:12.677900 kernel: loop1: detected capacity change from 0 to 140768 May 17 00:26:12.711814 kernel: loop2: detected capacity change from 0 to 8 May 17 00:26:12.733171 kernel: loop3: detected capacity change from 0 to 221472 May 17 00:26:12.764803 kernel: loop4: detected capacity change from 0 to 142488 May 17 00:26:12.783142 kernel: loop5: detected capacity change from 0 to 140768 May 17 00:26:12.799793 kernel: loop6: detected capacity change from 0 to 8 May 17 00:26:12.802790 kernel: loop7: detected capacity change from 0 to 221472 May 17 00:26:12.816961 (sd-merge)[1337]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. May 17 00:26:12.817285 (sd-merge)[1337]: Merged extensions into '/usr'. May 17 00:26:12.819964 systemd[1]: Reloading requested from client PID 1323 ('systemd-sysext') (unit systemd-sysext.service)... May 17 00:26:12.820050 systemd[1]: Reloading... May 17 00:26:12.866819 zram_generator::config[1368]: No configuration found. May 17 00:26:12.964394 ldconfig[1320]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 17 00:26:12.968179 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:26:13.015798 systemd[1]: Reloading finished in 195 ms. May 17 00:26:13.029921 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 17 00:26:13.033831 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 17 00:26:13.040928 systemd[1]: Starting ensure-sysext.service... May 17 00:26:13.042915 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 17 00:26:13.048295 systemd[1]: Reloading requested from client PID 1415 ('systemctl') (unit ensure-sysext.service)... May 17 00:26:13.048369 systemd[1]: Reloading... May 17 00:26:13.063258 systemd-tmpfiles[1416]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 17 00:26:13.063491 systemd-tmpfiles[1416]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. 
May 17 00:26:13.064078 systemd-tmpfiles[1416]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 17 00:26:13.064271 systemd-tmpfiles[1416]: ACLs are not supported, ignoring. May 17 00:26:13.064322 systemd-tmpfiles[1416]: ACLs are not supported, ignoring. May 17 00:26:13.067489 systemd-tmpfiles[1416]: Detected autofs mount point /boot during canonicalization of boot. May 17 00:26:13.067500 systemd-tmpfiles[1416]: Skipping /boot May 17 00:26:13.073711 systemd-tmpfiles[1416]: Detected autofs mount point /boot during canonicalization of boot. May 17 00:26:13.073723 systemd-tmpfiles[1416]: Skipping /boot May 17 00:26:13.099900 zram_generator::config[1445]: No configuration found. May 17 00:26:13.178184 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:26:13.231298 systemd[1]: Reloading finished in 182 ms. May 17 00:26:13.250113 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 17 00:26:13.258036 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... May 17 00:26:13.276982 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 17 00:26:13.282988 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 17 00:26:13.287073 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 17 00:26:13.289979 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 17 00:26:13.296174 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:26:13.296301 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 17 00:26:13.301465 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 17 00:26:13.307050 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 17 00:26:13.315047 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 17 00:26:13.315584 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 17 00:26:13.315707 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:26:13.317733 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 00:26:13.318011 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 17 00:26:13.319830 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 00:26:13.319951 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 17 00:26:13.320712 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 00:26:13.320919 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 17 00:26:13.329232 augenrules[1519]: No rules May 17 00:26:13.330306 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. May 17 00:26:13.331001 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. 
May 17 00:26:13.335067 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:26:13.336646 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 17 00:26:13.343001 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 17 00:26:13.349033 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 17 00:26:13.354806 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 17 00:26:13.363244 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 17 00:26:13.363629 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 17 00:26:13.363729 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:26:13.367066 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 17 00:26:13.370151 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 00:26:13.370257 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 17 00:26:13.372414 systemd[1]: modprobe@drm.service: Deactivated successfully. May 17 00:26:13.375174 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 17 00:26:13.376213 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 00:26:13.376368 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 17 00:26:13.379482 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 00:26:13.379646 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 17 00:26:13.388489 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 17 00:26:13.388615 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 17 00:26:13.394921 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 17 00:26:13.398116 systemd[1]: Finished ensure-sysext.service. May 17 00:26:13.399696 systemd-resolved[1509]: Positive Trust Anchors: May 17 00:26:13.399951 systemd-resolved[1509]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 17 00:26:13.400017 systemd-resolved[1509]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 17 00:26:13.405986 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... May 17 00:26:13.409264 systemd-resolved[1509]: Using system hostname 'ci-4081-3-3-n-556bea0d1e'. May 17 00:26:13.411152 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 17 00:26:13.411733 systemd[1]: Reached target network.target - Network. 
May 17 00:26:13.413966 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 17 00:26:13.414702 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 17 00:26:13.424667 systemd-networkd[1251]: eth1: Gained IPv6LL May 17 00:26:13.427660 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 17 00:26:13.431417 systemd[1]: Reached target network-online.target - Network is Online. May 17 00:26:13.439026 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 17 00:26:13.439674 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 17 00:26:13.464635 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 17 00:26:13.466494 systemd[1]: Reached target sysinit.target - System Initialization. May 17 00:26:13.467121 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 17 00:26:13.467622 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 17 00:26:13.468158 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 17 00:26:13.468647 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 17 00:26:13.468671 systemd[1]: Reached target paths.target - Path Units. May 17 00:26:13.469258 systemd[1]: Reached target time-set.target - System Time Set. May 17 00:26:13.469952 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 17 00:26:13.470520 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 17 00:26:13.471046 systemd[1]: Reached target timers.target - Timer Units. May 17 00:26:13.472165 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 17 00:26:13.474493 systemd[1]: Starting docker.socket - Docker Socket for the API... May 17 00:26:13.476024 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 17 00:26:13.480398 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 17 00:26:13.482479 systemd[1]: Reached target sockets.target - Socket Units. May 17 00:26:13.483001 systemd[1]: Reached target basic.target - Basic System. May 17 00:26:13.483634 systemd[1]: System is tainted: cgroupsv1 May 17 00:26:13.483677 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 17 00:26:13.483701 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 17 00:26:13.485260 systemd[1]: Starting containerd.service - containerd container runtime... May 17 00:26:13.488451 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... May 17 00:26:13.495912 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 17 00:26:13.499265 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 17 00:26:13.502873 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
May 17 00:26:13.504590 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 17 00:26:13.512273 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:26:13.516254 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 17 00:26:13.518235 jq[1570]: false May 17 00:26:13.527038 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 17 00:26:13.531627 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 17 00:26:13.534925 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. May 17 00:26:13.537737 coreos-metadata[1565]: May 17 00:26:13.537 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 May 17 00:26:13.539033 coreos-metadata[1565]: May 17 00:26:13.538 INFO Fetch successful May 17 00:26:13.539547 coreos-metadata[1565]: May 17 00:26:13.539 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 May 17 00:26:13.540241 coreos-metadata[1565]: May 17 00:26:13.540 INFO Fetch successful May 17 00:26:13.541988 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 17 00:26:13.551924 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 17 00:26:13.553349 extend-filesystems[1571]: Found loop4 May 17 00:26:13.554987 extend-filesystems[1571]: Found loop5 May 17 00:26:13.554987 extend-filesystems[1571]: Found loop6 May 17 00:26:13.554987 extend-filesystems[1571]: Found loop7 May 17 00:26:13.554987 extend-filesystems[1571]: Found sda May 17 00:26:13.554987 extend-filesystems[1571]: Found sda1 May 17 00:26:13.554987 extend-filesystems[1571]: Found sda2 May 17 00:26:13.554987 extend-filesystems[1571]: Found sda3 May 17 00:26:13.554987 extend-filesystems[1571]: Found usr May 17 00:26:13.554987 extend-filesystems[1571]: Found sda4 May 17 00:26:13.554987 extend-filesystems[1571]: Found sda6 May 17 00:26:13.554987 extend-filesystems[1571]: Found sda7 May 17 00:26:13.554987 extend-filesystems[1571]: Found sda9 May 17 00:26:13.554987 extend-filesystems[1571]: Checking size of /dev/sda9 May 17 00:26:13.559839 systemd[1]: Starting systemd-logind.service - User Login Management... May 17 00:26:13.566302 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 17 00:26:13.581102 dbus-daemon[1567]: [system] SELinux support is enabled May 17 00:26:13.569595 systemd[1]: Starting update-engine.service - Update Engine... May 17 00:26:13.586205 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 17 00:26:13.587591 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 17 00:26:13.602163 extend-filesystems[1571]: Resized partition /dev/sda9 May 17 00:26:13.604362 update_engine[1595]: I20250517 00:26:13.604241 1595 main.cc:92] Flatcar Update Engine starting May 17 00:26:13.616956 update_engine[1595]: I20250517 00:26:13.614251 1595 update_check_scheduler.cc:74] Next update check in 9m43s May 17 00:26:13.616986 extend-filesystems[1610]: resize2fs 1.47.1 (20-May-2024) May 17 00:26:13.608266 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 17 00:26:13.608446 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. 
May 17 00:26:13.617759 jq[1601]: true May 17 00:26:13.627976 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks May 17 00:26:13.621939 systemd[1]: motdgen.service: Deactivated successfully. May 17 00:26:13.622120 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 17 00:26:13.622675 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 17 00:26:13.633207 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 17 00:26:13.633375 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 17 00:26:13.661186 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 17 00:26:13.661215 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 17 00:26:13.663202 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 17 00:26:13.663223 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 17 00:26:13.663815 (ntainerd)[1620]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 17 00:26:13.666511 systemd[1]: Started update-engine.service - Update Engine. May 17 00:26:13.667212 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 17 00:26:13.668619 systemd-logind[1593]: New seat seat0. May 17 00:26:13.669093 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 17 00:26:14.423749 systemd-resolved[1509]: Clock change detected. Flushing caches. May 17 00:26:14.423959 systemd-timesyncd[1554]: Contacted time server 162.159.200.123:123 (0.flatcar.pool.ntp.org). May 17 00:26:14.423996 systemd-timesyncd[1554]: Initial clock synchronization to Sat 2025-05-17 00:26:14.423708 UTC. May 17 00:26:14.424254 tar[1616]: linux-amd64/helm May 17 00:26:14.434224 systemd-logind[1593]: Watching system buttons on /dev/input/event2 (Power Button) May 17 00:26:14.434241 systemd-logind[1593]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 17 00:26:14.437031 jq[1619]: true May 17 00:26:14.438727 systemd[1]: Started systemd-logind.service - User Login Management. May 17 00:26:14.455052 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1250) May 17 00:26:14.504607 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. May 17 00:26:14.509223 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 17 00:26:14.591603 kernel: EXT4-fs (sda9): resized filesystem to 9393147 May 17 00:26:14.634409 bash[1660]: Updated "/home/core/.ssh/authorized_keys" May 17 00:26:14.595482 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 17 00:26:14.607308 locksmithd[1634]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 17 00:26:14.609429 systemd[1]: Starting sshkeys.service... May 17 00:26:14.625966 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. 
May 17 00:26:14.634444 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... May 17 00:26:14.640304 extend-filesystems[1610]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required May 17 00:26:14.640304 extend-filesystems[1610]: old_desc_blocks = 1, new_desc_blocks = 5 May 17 00:26:14.640304 extend-filesystems[1610]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. May 17 00:26:14.643220 extend-filesystems[1571]: Resized filesystem in /dev/sda9 May 17 00:26:14.643220 extend-filesystems[1571]: Found sr0 May 17 00:26:14.641409 systemd[1]: extend-filesystems.service: Deactivated successfully. May 17 00:26:14.641589 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 17 00:26:14.677612 coreos-metadata[1673]: May 17 00:26:14.676 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 May 17 00:26:14.678290 coreos-metadata[1673]: May 17 00:26:14.678 INFO Fetch successful May 17 00:26:14.680549 unknown[1673]: wrote ssh authorized keys file for user: core May 17 00:26:14.701646 update-ssh-keys[1682]: Updated "/home/core/.ssh/authorized_keys" May 17 00:26:14.705173 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). May 17 00:26:14.710596 systemd[1]: Finished sshkeys.service. May 17 00:26:14.714596 containerd[1620]: time="2025-05-17T00:26:14.714337651Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 May 17 00:26:14.726294 sshd_keygen[1614]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 17 00:26:14.749203 containerd[1620]: time="2025-05-17T00:26:14.747862484Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 17 00:26:14.749203 containerd[1620]: time="2025-05-17T00:26:14.749072894Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.90-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 17 00:26:14.749203 containerd[1620]: time="2025-05-17T00:26:14.749093092Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 17 00:26:14.749203 containerd[1620]: time="2025-05-17T00:26:14.749129690Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 17 00:26:14.749290 containerd[1620]: time="2025-05-17T00:26:14.749245518Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 May 17 00:26:14.749290 containerd[1620]: time="2025-05-17T00:26:14.749259704Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 May 17 00:26:14.749569 containerd[1620]: time="2025-05-17T00:26:14.749326048Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 May 17 00:26:14.749569 containerd[1620]: time="2025-05-17T00:26:14.749341297Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 17 00:26:14.749569 containerd[1620]: time="2025-05-17T00:26:14.749497831Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 17 00:26:14.749569 containerd[1620]: time="2025-05-17T00:26:14.749509993Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 17 00:26:14.749569 containerd[1620]: time="2025-05-17T00:26:14.749520243Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 May 17 00:26:14.749569 containerd[1620]: time="2025-05-17T00:26:14.749527647Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 17 00:26:14.749658 containerd[1620]: time="2025-05-17T00:26:14.749582419Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 17 00:26:14.750327 containerd[1620]: time="2025-05-17T00:26:14.749731950Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 17 00:26:14.750327 containerd[1620]: time="2025-05-17T00:26:14.749829382Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 17 00:26:14.750327 containerd[1620]: time="2025-05-17T00:26:14.749840393Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 17 00:26:14.750327 containerd[1620]: time="2025-05-17T00:26:14.749900676Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 17 00:26:14.750327 containerd[1620]: time="2025-05-17T00:26:14.749935261Z" level=info msg="metadata content store policy set" policy=shared May 17 00:26:14.756131 containerd[1620]: time="2025-05-17T00:26:14.756081466Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 17 00:26:14.756165 containerd[1620]: time="2025-05-17T00:26:14.756154703Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 17 00:26:14.756377 containerd[1620]: time="2025-05-17T00:26:14.756170973Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 May 17 00:26:14.756377 containerd[1620]: time="2025-05-17T00:26:14.756184098Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 May 17 00:26:14.756377 containerd[1620]: time="2025-05-17T00:26:14.756195029Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 17 00:26:14.756377 containerd[1620]: time="2025-05-17T00:26:14.756313050Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 17 00:26:14.758467 containerd[1620]: time="2025-05-17T00:26:14.758240674Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 17 00:26:14.758467 containerd[1620]: time="2025-05-17T00:26:14.758334921Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." 
type=io.containerd.runtime.v2 May 17 00:26:14.758467 containerd[1620]: time="2025-05-17T00:26:14.758348767Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 May 17 00:26:14.758467 containerd[1620]: time="2025-05-17T00:26:14.758359397Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 May 17 00:26:14.758467 containerd[1620]: time="2025-05-17T00:26:14.758376188Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 17 00:26:14.758467 containerd[1620]: time="2025-05-17T00:26:14.758386197Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 17 00:26:14.758467 containerd[1620]: time="2025-05-17T00:26:14.758394964Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 17 00:26:14.758467 containerd[1620]: time="2025-05-17T00:26:14.758404802Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 17 00:26:14.758467 containerd[1620]: time="2025-05-17T00:26:14.758415633Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 17 00:26:14.758467 containerd[1620]: time="2025-05-17T00:26:14.758425671Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 17 00:26:14.758467 containerd[1620]: time="2025-05-17T00:26:14.758435700Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 17 00:26:14.758467 containerd[1620]: time="2025-05-17T00:26:14.758443785Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 17 00:26:14.758467 containerd[1620]: time="2025-05-17T00:26:14.758459174Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 17 00:26:14.758467 containerd[1620]: time="2025-05-17T00:26:14.758468973Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 17 00:26:14.758667 containerd[1620]: time="2025-05-17T00:26:14.758478591Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 17 00:26:14.758667 containerd[1620]: time="2025-05-17T00:26:14.758492436Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 17 00:26:14.758667 containerd[1620]: time="2025-05-17T00:26:14.758501915Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 17 00:26:14.758667 containerd[1620]: time="2025-05-17T00:26:14.758511111Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 17 00:26:14.758667 containerd[1620]: time="2025-05-17T00:26:14.758519407Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 17 00:26:14.758667 containerd[1620]: time="2025-05-17T00:26:14.758528674Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 17 00:26:14.758667 containerd[1620]: time="2025-05-17T00:26:14.758538653Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." 
type=io.containerd.grpc.v1 May 17 00:26:14.758667 containerd[1620]: time="2025-05-17T00:26:14.758549884Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 May 17 00:26:14.758667 containerd[1620]: time="2025-05-17T00:26:14.758559011Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 17 00:26:14.758667 containerd[1620]: time="2025-05-17T00:26:14.758568088Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 May 17 00:26:14.758667 containerd[1620]: time="2025-05-17T00:26:14.758576815Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 17 00:26:14.758667 containerd[1620]: time="2025-05-17T00:26:14.758587595Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 May 17 00:26:14.758667 containerd[1620]: time="2025-05-17T00:26:14.758602733Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 May 17 00:26:14.758667 containerd[1620]: time="2025-05-17T00:26:14.758610979Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 17 00:26:14.758667 containerd[1620]: time="2025-05-17T00:26:14.758619996Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 17 00:26:14.758865 containerd[1620]: time="2025-05-17T00:26:14.758665271Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 17 00:26:14.758865 containerd[1620]: time="2025-05-17T00:26:14.758677524Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 May 17 00:26:14.758865 containerd[1620]: time="2025-05-17T00:26:14.758686250Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 17 00:26:14.758865 containerd[1620]: time="2025-05-17T00:26:14.758695026Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 May 17 00:26:14.758865 containerd[1620]: time="2025-05-17T00:26:14.758701909Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 17 00:26:14.758865 containerd[1620]: time="2025-05-17T00:26:14.758713721Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 May 17 00:26:14.758865 containerd[1620]: time="2025-05-17T00:26:14.758721406Z" level=info msg="NRI interface is disabled by configuration." May 17 00:26:14.758865 containerd[1620]: time="2025-05-17T00:26:14.758729181Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 May 17 00:26:14.758973 containerd[1620]: time="2025-05-17T00:26:14.758925458Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 17 00:26:14.758973 containerd[1620]: time="2025-05-17T00:26:14.758970072Z" level=info msg="Connect containerd service" May 17 00:26:14.759765 containerd[1620]: time="2025-05-17T00:26:14.758997704Z" level=info msg="using legacy CRI server" May 17 00:26:14.759765 containerd[1620]: time="2025-05-17T00:26:14.759003465Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 17 00:26:14.759765 containerd[1620]: time="2025-05-17T00:26:14.759088364Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 17 00:26:14.759765 containerd[1620]: time="2025-05-17T00:26:14.759442167Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 17 
00:26:14.760637 containerd[1620]: time="2025-05-17T00:26:14.760462050Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 17 00:26:14.760637 containerd[1620]: time="2025-05-17T00:26:14.760502335Z" level=info msg=serving... address=/run/containerd/containerd.sock May 17 00:26:14.760637 containerd[1620]: time="2025-05-17T00:26:14.760572197Z" level=info msg="Start subscribing containerd event" May 17 00:26:14.760637 containerd[1620]: time="2025-05-17T00:26:14.760599488Z" level=info msg="Start recovering state" May 17 00:26:14.760705 containerd[1620]: time="2025-05-17T00:26:14.760650273Z" level=info msg="Start event monitor" May 17 00:26:14.760705 containerd[1620]: time="2025-05-17T00:26:14.760661965Z" level=info msg="Start snapshots syncer" May 17 00:26:14.760705 containerd[1620]: time="2025-05-17T00:26:14.760669528Z" level=info msg="Start cni network conf syncer for default" May 17 00:26:14.760705 containerd[1620]: time="2025-05-17T00:26:14.760675079Z" level=info msg="Start streaming server" May 17 00:26:14.760796 systemd[1]: Started containerd.service - containerd container runtime. May 17 00:26:14.765568 containerd[1620]: time="2025-05-17T00:26:14.765489957Z" level=info msg="containerd successfully booted in 0.061941s" May 17 00:26:14.773148 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 17 00:26:14.787318 systemd[1]: Starting issuegen.service - Generate /run/issue... May 17 00:26:14.796676 systemd[1]: issuegen.service: Deactivated successfully. May 17 00:26:14.796860 systemd[1]: Finished issuegen.service - Generate /run/issue. May 17 00:26:14.809314 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 17 00:26:14.819282 systemd-networkd[1251]: eth0: Gained IPv6LL May 17 00:26:14.826599 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 17 00:26:14.836304 systemd[1]: Started getty@tty1.service - Getty on tty1. May 17 00:26:14.841728 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. May 17 00:26:14.842186 systemd[1]: Reached target getty.target - Login Prompts. May 17 00:26:15.022591 tar[1616]: linux-amd64/LICENSE May 17 00:26:15.022900 tar[1616]: linux-amd64/README.md May 17 00:26:15.032469 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 17 00:26:15.514876 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:26:15.519550 (kubelet)[1724]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 00:26:15.519842 systemd[1]: Reached target multi-user.target - Multi-User System. May 17 00:26:15.524251 systemd[1]: Startup finished in 8.413s (kernel) + 3.871s (userspace) = 12.285s. May 17 00:26:16.024560 kubelet[1724]: E0517 00:26:16.024509 1724 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:26:16.026687 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:26:16.026867 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:26:26.277687 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
May 17 00:26:26.284583 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:26:26.390512 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:26:26.392773 (kubelet)[1748]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 00:26:26.422832 kubelet[1748]: E0517 00:26:26.422781 1748 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:26:26.426543 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:26:26.426718 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:26:36.677583 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 17 00:26:36.684280 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:26:36.790322 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:26:36.793503 (kubelet)[1768]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 00:26:36.825472 kubelet[1768]: E0517 00:26:36.825380 1768 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:26:36.827648 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:26:36.827854 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:26:47.036305 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. May 17 00:26:47.042157 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:26:47.135689 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:26:47.138227 (kubelet)[1788]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 00:26:47.178312 kubelet[1788]: E0517 00:26:47.178264 1788 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:26:47.180015 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:26:47.180220 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:26:57.286649 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. May 17 00:26:57.299270 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:26:57.406140 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 17 00:26:57.408333 (kubelet)[1808]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 00:26:57.439371 kubelet[1808]: E0517 00:26:57.439313 1808 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:26:57.441176 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:26:57.441325 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:26:59.283991 update_engine[1595]: I20250517 00:26:59.283857 1595 update_attempter.cc:509] Updating boot flags... May 17 00:26:59.343143 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1825) May 17 00:26:59.379068 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1828) May 17 00:27:07.536609 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. May 17 00:27:07.543603 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:27:07.651163 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:27:07.654133 (kubelet)[1846]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 00:27:07.684406 kubelet[1846]: E0517 00:27:07.684317 1846 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:27:07.685835 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:27:07.685981 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:27:17.786262 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. May 17 00:27:17.791369 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:27:17.868143 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:27:17.870858 (kubelet)[1866]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 00:27:17.900258 kubelet[1866]: E0517 00:27:17.900165 1866 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:27:17.901978 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:27:17.902145 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:27:28.036281 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. May 17 00:27:28.041182 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:27:28.124969 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 17 00:27:28.127292 (kubelet)[1886]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 00:27:28.154865 kubelet[1886]: E0517 00:27:28.154820 1886 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:27:28.156382 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:27:28.156524 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:27:38.286286 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. May 17 00:27:38.291161 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:27:38.393263 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:27:38.396389 (kubelet)[1906]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 00:27:38.424711 kubelet[1906]: E0517 00:27:38.424648 1906 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:27:38.426402 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:27:38.426543 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:27:48.536281 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. May 17 00:27:48.541188 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:27:48.629176 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:27:48.632064 (kubelet)[1927]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 00:27:48.661833 kubelet[1927]: E0517 00:27:48.661792 1927 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:27:48.663571 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:27:48.663731 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:27:58.386681 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 17 00:27:58.398276 systemd[1]: Started sshd@0-135.181.90.190:22-139.178.89.65:36486.service - OpenSSH per-connection server daemon (139.178.89.65:36486). May 17 00:27:58.786520 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10. May 17 00:27:58.799397 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:27:58.907846 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 17 00:27:58.910375 (kubelet)[1950]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 00:27:58.941224 kubelet[1950]: E0517 00:27:58.941179 1950 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:27:58.943627 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:27:58.943766 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:27:59.372173 sshd[1936]: Accepted publickey for core from 139.178.89.65 port 36486 ssh2: RSA SHA256:kFcxshSye1IppED0G84lz4/lbUrZJ1wq7wf6p1uuNAE May 17 00:27:59.374094 sshd[1936]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:27:59.381650 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 17 00:27:59.388212 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 17 00:27:59.390561 systemd-logind[1593]: New session 1 of user core. May 17 00:27:59.400737 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 17 00:27:59.405411 systemd[1]: Starting user@500.service - User Manager for UID 500... May 17 00:27:59.409063 (systemd)[1963]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 17 00:27:59.497904 systemd[1963]: Queued start job for default target default.target. May 17 00:27:59.498269 systemd[1963]: Created slice app.slice - User Application Slice. May 17 00:27:59.498290 systemd[1963]: Reached target paths.target - Paths. May 17 00:27:59.498300 systemd[1963]: Reached target timers.target - Timers. May 17 00:27:59.508101 systemd[1963]: Starting dbus.socket - D-Bus User Message Bus Socket... May 17 00:27:59.514036 systemd[1963]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 17 00:27:59.514082 systemd[1963]: Reached target sockets.target - Sockets. May 17 00:27:59.514094 systemd[1963]: Reached target basic.target - Basic System. May 17 00:27:59.514126 systemd[1963]: Reached target default.target - Main User Target. May 17 00:27:59.514147 systemd[1963]: Startup finished in 100ms. May 17 00:27:59.514444 systemd[1]: Started user@500.service - User Manager for UID 500. May 17 00:27:59.521253 systemd[1]: Started session-1.scope - Session 1 of User core. May 17 00:28:00.208285 systemd[1]: Started sshd@1-135.181.90.190:22-139.178.89.65:36494.service - OpenSSH per-connection server daemon (139.178.89.65:36494). May 17 00:28:01.170316 sshd[1975]: Accepted publickey for core from 139.178.89.65 port 36494 ssh2: RSA SHA256:kFcxshSye1IppED0G84lz4/lbUrZJ1wq7wf6p1uuNAE May 17 00:28:01.171580 sshd[1975]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:28:01.176394 systemd-logind[1593]: New session 2 of user core. May 17 00:28:01.182225 systemd[1]: Started session-2.scope - Session 2 of User core. May 17 00:28:01.844161 sshd[1975]: pam_unix(sshd:session): session closed for user core May 17 00:28:01.846698 systemd[1]: sshd@1-135.181.90.190:22-139.178.89.65:36494.service: Deactivated successfully. May 17 00:28:01.849524 systemd-logind[1593]: Session 2 logged out. Waiting for processes to exit. 
May 17 00:28:01.849906 systemd[1]: session-2.scope: Deactivated successfully. May 17 00:28:01.851452 systemd-logind[1593]: Removed session 2. May 17 00:28:02.006202 systemd[1]: Started sshd@2-135.181.90.190:22-139.178.89.65:36500.service - OpenSSH per-connection server daemon (139.178.89.65:36500). May 17 00:28:02.969291 sshd[1983]: Accepted publickey for core from 139.178.89.65 port 36500 ssh2: RSA SHA256:kFcxshSye1IppED0G84lz4/lbUrZJ1wq7wf6p1uuNAE May 17 00:28:02.970809 sshd[1983]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:28:02.975867 systemd-logind[1593]: New session 3 of user core. May 17 00:28:02.979295 systemd[1]: Started session-3.scope - Session 3 of User core. May 17 00:28:03.637581 sshd[1983]: pam_unix(sshd:session): session closed for user core May 17 00:28:03.640223 systemd[1]: sshd@2-135.181.90.190:22-139.178.89.65:36500.service: Deactivated successfully. May 17 00:28:03.643281 systemd[1]: session-3.scope: Deactivated successfully. May 17 00:28:03.643488 systemd-logind[1593]: Session 3 logged out. Waiting for processes to exit. May 17 00:28:03.644769 systemd-logind[1593]: Removed session 3. May 17 00:28:03.798388 systemd[1]: Started sshd@3-135.181.90.190:22-139.178.89.65:36504.service - OpenSSH per-connection server daemon (139.178.89.65:36504). May 17 00:28:04.762938 sshd[1991]: Accepted publickey for core from 139.178.89.65 port 36504 ssh2: RSA SHA256:kFcxshSye1IppED0G84lz4/lbUrZJ1wq7wf6p1uuNAE May 17 00:28:04.764205 sshd[1991]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:28:04.768632 systemd-logind[1593]: New session 4 of user core. May 17 00:28:04.772255 systemd[1]: Started session-4.scope - Session 4 of User core. May 17 00:28:05.432140 sshd[1991]: pam_unix(sshd:session): session closed for user core May 17 00:28:05.434847 systemd[1]: sshd@3-135.181.90.190:22-139.178.89.65:36504.service: Deactivated successfully. May 17 00:28:05.437317 systemd[1]: session-4.scope: Deactivated successfully. May 17 00:28:05.437917 systemd-logind[1593]: Session 4 logged out. Waiting for processes to exit. May 17 00:28:05.438743 systemd-logind[1593]: Removed session 4. May 17 00:28:05.594457 systemd[1]: Started sshd@4-135.181.90.190:22-139.178.89.65:36514.service - OpenSSH per-connection server daemon (139.178.89.65:36514). May 17 00:28:06.556257 sshd[1999]: Accepted publickey for core from 139.178.89.65 port 36514 ssh2: RSA SHA256:kFcxshSye1IppED0G84lz4/lbUrZJ1wq7wf6p1uuNAE May 17 00:28:06.557469 sshd[1999]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:28:06.561648 systemd-logind[1593]: New session 5 of user core. May 17 00:28:06.570449 systemd[1]: Started session-5.scope - Session 5 of User core. May 17 00:28:07.079830 sudo[2003]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 17 00:28:07.080129 sudo[2003]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 17 00:28:07.094602 sudo[2003]: pam_unix(sudo:session): session closed for user root May 17 00:28:07.251792 sshd[1999]: pam_unix(sshd:session): session closed for user core May 17 00:28:07.254467 systemd[1]: sshd@4-135.181.90.190:22-139.178.89.65:36514.service: Deactivated successfully. May 17 00:28:07.257456 systemd[1]: session-5.scope: Deactivated successfully. May 17 00:28:07.257485 systemd-logind[1593]: Session 5 logged out. Waiting for processes to exit. May 17 00:28:07.259406 systemd-logind[1593]: Removed session 5. 
May 17 00:28:07.413241 systemd[1]: Started sshd@5-135.181.90.190:22-139.178.89.65:42940.service - OpenSSH per-connection server daemon (139.178.89.65:42940). May 17 00:28:08.377946 sshd[2008]: Accepted publickey for core from 139.178.89.65 port 42940 ssh2: RSA SHA256:kFcxshSye1IppED0G84lz4/lbUrZJ1wq7wf6p1uuNAE May 17 00:28:08.379557 sshd[2008]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:28:08.383513 systemd-logind[1593]: New session 6 of user core. May 17 00:28:08.392244 systemd[1]: Started session-6.scope - Session 6 of User core. May 17 00:28:08.894691 sudo[2013]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 17 00:28:08.895119 sudo[2013]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 17 00:28:08.898356 sudo[2013]: pam_unix(sudo:session): session closed for user root May 17 00:28:08.902573 sudo[2012]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules May 17 00:28:08.902932 sudo[2012]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 17 00:28:08.919252 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... May 17 00:28:08.920325 auditctl[2016]: No rules May 17 00:28:08.920576 systemd[1]: audit-rules.service: Deactivated successfully. May 17 00:28:08.920748 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. May 17 00:28:08.924264 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... May 17 00:28:08.943110 augenrules[2035]: No rules May 17 00:28:08.944005 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. May 17 00:28:08.945553 sudo[2012]: pam_unix(sudo:session): session closed for user root May 17 00:28:08.945836 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11. May 17 00:28:08.952152 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:28:09.050174 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:28:09.052498 (kubelet)[2053]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 00:28:09.085245 kubelet[2053]: E0517 00:28:09.085161 2053 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:28:09.086673 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:28:09.086817 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:28:09.103418 sshd[2008]: pam_unix(sshd:session): session closed for user core May 17 00:28:09.105498 systemd[1]: sshd@5-135.181.90.190:22-139.178.89.65:42940.service: Deactivated successfully. May 17 00:28:09.108099 systemd[1]: session-6.scope: Deactivated successfully. May 17 00:28:09.108752 systemd-logind[1593]: Session 6 logged out. Waiting for processes to exit. May 17 00:28:09.109678 systemd-logind[1593]: Removed session 6. May 17 00:28:09.265231 systemd[1]: Started sshd@6-135.181.90.190:22-139.178.89.65:42952.service - OpenSSH per-connection server daemon (139.178.89.65:42952). 
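kubelet is still crash-looping at this point: systemd restarts the unit (the counter is at 11) and it exits immediately because /var/lib/kubelet/config.yaml does not exist. On a kubeadm-managed node that file is only written during kubeadm init/join, so this loop is expected until the node is actually bootstrapped. As a small illustration (not part of the journal), the sketch below counts those restart notices in a saved copy of this log; journal.log and a one-entry-per-line layout are assumptions.

    import re

    # Sketch: count "Scheduled restart job" notices for kubelet.service and
    # report the highest restart counter seen. Assumes the journal was saved
    # to "journal.log" with one entry per line.
    pat = re.compile(r"kubelet\.service: Scheduled restart job, restart counter is at (\d+)")

    counters = []
    with open("journal.log", encoding="utf-8") as fh:
        for line in fh:
            m = pat.search(line)
            if m:
                counters.append(int(m.group(1)))

    if counters:
        print(f"{len(counters)} restart notices, highest counter: {max(counters)}")
    else:
        print("no kubelet restart notices found")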
May 17 00:28:10.232123 sshd[2064]: Accepted publickey for core from 139.178.89.65 port 42952 ssh2: RSA SHA256:kFcxshSye1IppED0G84lz4/lbUrZJ1wq7wf6p1uuNAE May 17 00:28:10.233500 sshd[2064]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:28:10.237830 systemd-logind[1593]: New session 7 of user core. May 17 00:28:10.248308 systemd[1]: Started session-7.scope - Session 7 of User core. May 17 00:28:10.748079 sudo[2068]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 17 00:28:10.748374 sudo[2068]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 17 00:28:10.975184 systemd[1]: Starting docker.service - Docker Application Container Engine... May 17 00:28:10.975567 (dockerd)[2083]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 17 00:28:11.213321 dockerd[2083]: time="2025-05-17T00:28:11.213266115Z" level=info msg="Starting up" May 17 00:28:11.309631 dockerd[2083]: time="2025-05-17T00:28:11.309597608Z" level=info msg="Loading containers: start." May 17 00:28:11.404060 kernel: Initializing XFRM netlink socket May 17 00:28:11.470765 systemd-networkd[1251]: docker0: Link UP May 17 00:28:11.481972 dockerd[2083]: time="2025-05-17T00:28:11.481914234Z" level=info msg="Loading containers: done." May 17 00:28:11.495714 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2662391508-merged.mount: Deactivated successfully. May 17 00:28:11.496470 dockerd[2083]: time="2025-05-17T00:28:11.495946955Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 17 00:28:11.496470 dockerd[2083]: time="2025-05-17T00:28:11.496095184Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 May 17 00:28:11.496470 dockerd[2083]: time="2025-05-17T00:28:11.496252050Z" level=info msg="Daemon has completed initialization" May 17 00:28:11.520284 dockerd[2083]: time="2025-05-17T00:28:11.520241263Z" level=info msg="API listen on /run/docker.sock" May 17 00:28:11.520487 systemd[1]: Started docker.service - Docker Application Container Engine. May 17 00:28:12.540357 containerd[1620]: time="2025-05-17T00:28:12.540299152Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.9\"" May 17 00:28:13.063780 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3506784173.mount: Deactivated successfully. 
May 17 00:28:14.095847 containerd[1620]: time="2025-05-17T00:28:14.095798964Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:28:14.096690 containerd[1620]: time="2025-05-17T00:28:14.096662971Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.9: active requests=0, bytes read=28078939" May 17 00:28:14.097449 containerd[1620]: time="2025-05-17T00:28:14.097411953Z" level=info msg="ImageCreate event name:\"sha256:0c19e0eafbdfffa1317cf99a16478265a4cd746ef677de27b0be6a8b515f36b1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:28:14.100079 containerd[1620]: time="2025-05-17T00:28:14.100047060Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5b68f0df22013422dc8fb9ddfcff513eb6fc92f9dbf8aae41555c895efef5a20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:28:14.100876 containerd[1620]: time="2025-05-17T00:28:14.100757939Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.9\" with image id \"sha256:0c19e0eafbdfffa1317cf99a16478265a4cd746ef677de27b0be6a8b515f36b1\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:5b68f0df22013422dc8fb9ddfcff513eb6fc92f9dbf8aae41555c895efef5a20\", size \"28075645\" in 1.56042268s" May 17 00:28:14.100876 containerd[1620]: time="2025-05-17T00:28:14.100784469Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.9\" returns image reference \"sha256:0c19e0eafbdfffa1317cf99a16478265a4cd746ef677de27b0be6a8b515f36b1\"" May 17 00:28:14.101281 containerd[1620]: time="2025-05-17T00:28:14.101267430Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.9\"" May 17 00:28:15.355152 containerd[1620]: time="2025-05-17T00:28:15.355093212Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:28:15.356259 containerd[1620]: time="2025-05-17T00:28:15.356213793Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.9: active requests=0, bytes read=24713544" May 17 00:28:15.356677 containerd[1620]: time="2025-05-17T00:28:15.356635739Z" level=info msg="ImageCreate event name:\"sha256:6aa3d581404ae6ae5dc355cb750aaedec843d2c99263d28fce50277e8e2a6ec2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:28:15.359630 containerd[1620]: time="2025-05-17T00:28:15.359595526Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:be9e7987d323b38a12e28436cff6d6ec6fc31ffdd3ea11eaa9d74852e9d31248\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:28:15.360772 containerd[1620]: time="2025-05-17T00:28:15.360668128Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.9\" with image id \"sha256:6aa3d581404ae6ae5dc355cb750aaedec843d2c99263d28fce50277e8e2a6ec2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:be9e7987d323b38a12e28436cff6d6ec6fc31ffdd3ea11eaa9d74852e9d31248\", size \"26315362\" in 1.259332419s" May 17 00:28:15.360772 containerd[1620]: time="2025-05-17T00:28:15.360692774Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.9\" returns image reference \"sha256:6aa3d581404ae6ae5dc355cb750aaedec843d2c99263d28fce50277e8e2a6ec2\"" May 17 00:28:15.361207 
containerd[1620]: time="2025-05-17T00:28:15.361193137Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.9\"" May 17 00:28:16.314554 containerd[1620]: time="2025-05-17T00:28:16.314494348Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:28:16.315741 containerd[1620]: time="2025-05-17T00:28:16.315694569Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.9: active requests=0, bytes read=18784333" May 17 00:28:16.316003 containerd[1620]: time="2025-05-17T00:28:16.315951695Z" level=info msg="ImageCreate event name:\"sha256:737ed3eafaf27a28ea9e13b736011bfed5bd349785ac6bc220b34eaf4adc51e3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:28:16.318109 containerd[1620]: time="2025-05-17T00:28:16.318069826Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:eb358c7346bb17ab2c639c3ff8ab76a147dec7ae609f5c0c2800233e42253ed1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:28:16.318846 containerd[1620]: time="2025-05-17T00:28:16.318820470Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.9\" with image id \"sha256:737ed3eafaf27a28ea9e13b736011bfed5bd349785ac6bc220b34eaf4adc51e3\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:eb358c7346bb17ab2c639c3ff8ab76a147dec7ae609f5c0c2800233e42253ed1\", size \"20386169\" in 957.542533ms" May 17 00:28:16.318895 containerd[1620]: time="2025-05-17T00:28:16.318848343Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.9\" returns image reference \"sha256:737ed3eafaf27a28ea9e13b736011bfed5bd349785ac6bc220b34eaf4adc51e3\"" May 17 00:28:16.320184 containerd[1620]: time="2025-05-17T00:28:16.320154144Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.9\"" May 17 00:28:17.237887 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1974038170.mount: Deactivated successfully. 
May 17 00:28:17.521392 containerd[1620]: time="2025-05-17T00:28:17.520664383Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:28:17.545171 containerd[1620]: time="2025-05-17T00:28:17.545141351Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.9: active requests=0, bytes read=30355651" May 17 00:28:17.548169 containerd[1620]: time="2025-05-17T00:28:17.548143346Z" level=info msg="ImageCreate event name:\"sha256:11a47a71ed3ecf643e15a11990daed3b656279449ba9344db0b54652c4723578\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:28:17.549633 containerd[1620]: time="2025-05-17T00:28:17.549609210Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:fdf026cf2434537e499e9c739d189ca8fc57101d929ac5ccd8e24f979a9738c1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:28:17.550221 containerd[1620]: time="2025-05-17T00:28:17.550128687Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.9\" with image id \"sha256:11a47a71ed3ecf643e15a11990daed3b656279449ba9344db0b54652c4723578\", repo tag \"registry.k8s.io/kube-proxy:v1.31.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:fdf026cf2434537e499e9c739d189ca8fc57101d929ac5ccd8e24f979a9738c1\", size \"30354642\" in 1.229944446s" May 17 00:28:17.550221 containerd[1620]: time="2025-05-17T00:28:17.550153764Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.9\" returns image reference \"sha256:11a47a71ed3ecf643e15a11990daed3b656279449ba9344db0b54652c4723578\"" May 17 00:28:17.550773 containerd[1620]: time="2025-05-17T00:28:17.550605796Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" May 17 00:28:18.036625 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2801229525.mount: Deactivated successfully. 
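The containerd entries in this stretch record each control-plane image pull together with its size in bytes and how long it took (kube-apiserver v1.31.9 at 28075645 bytes in about 1.56s, and so on for the controller-manager, scheduler and kube-proxy images). A hedged sketch of pulling those figures back out of a saved copy of this journal is below; journal.log and a one-entry-per-line layout are assumptions.

    import re

    # Sketch: extract image name, size and pull duration from containerd's
    # 'Pulled image ... size "..." in ...' entries. Assumes the journal was
    # saved to "journal.log" with one entry per line.
    pat = re.compile(r'Pulled image \\"([^"\\]+)\\" with image id .*?size \\"(\d+)\\" in ([^"\\]+)')

    with open("journal.log", encoding="utf-8") as fh:
        for line in fh:
            m = pat.search(line)
            if m:
                image, size, duration = m.groups()
                print(f"{image}: {int(size) / 1_000_000:.1f} MB in {duration}")

The same pattern also picks up the coredns, pause and etcd pulls that follow, since containerd logs them in the identical format.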
May 17 00:28:18.705904 containerd[1620]: time="2025-05-17T00:28:18.705858827Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:28:18.706801 containerd[1620]: time="2025-05-17T00:28:18.706763753Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565335" May 17 00:28:18.707410 containerd[1620]: time="2025-05-17T00:28:18.707369705Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:28:18.710070 containerd[1620]: time="2025-05-17T00:28:18.710035526Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:28:18.710917 containerd[1620]: time="2025-05-17T00:28:18.710816498Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.160151771s" May 17 00:28:18.710917 containerd[1620]: time="2025-05-17T00:28:18.710843268Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" May 17 00:28:18.711460 containerd[1620]: time="2025-05-17T00:28:18.711326077Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 17 00:28:19.157780 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12. May 17 00:28:19.165453 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:28:19.167019 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2790545418.mount: Deactivated successfully. 
May 17 00:28:19.176615 containerd[1620]: time="2025-05-17T00:28:19.175910358Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:28:19.177364 containerd[1620]: time="2025-05-17T00:28:19.177305707Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321160" May 17 00:28:19.178140 containerd[1620]: time="2025-05-17T00:28:19.178062432Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:28:19.180104 containerd[1620]: time="2025-05-17T00:28:19.180049656Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:28:19.180858 containerd[1620]: time="2025-05-17T00:28:19.180733614Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 469.385234ms" May 17 00:28:19.180858 containerd[1620]: time="2025-05-17T00:28:19.180763771Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" May 17 00:28:19.181663 containerd[1620]: time="2025-05-17T00:28:19.181484669Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" May 17 00:28:19.274978 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:28:19.285350 (kubelet)[2360]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 00:28:19.321555 kubelet[2360]: E0517 00:28:19.321519 2360 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:28:19.323734 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:28:19.323882 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:28:19.689302 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount129924506.mount: Deactivated successfully. 
May 17 00:28:21.221969 containerd[1620]: time="2025-05-17T00:28:21.221915925Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:28:21.222899 containerd[1620]: time="2025-05-17T00:28:21.222854763Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780083" May 17 00:28:21.223632 containerd[1620]: time="2025-05-17T00:28:21.223579408Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:28:21.225997 containerd[1620]: time="2025-05-17T00:28:21.225919735Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:28:21.227050 containerd[1620]: time="2025-05-17T00:28:21.226922023Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 2.045413549s" May 17 00:28:21.227050 containerd[1620]: time="2025-05-17T00:28:21.226950777Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" May 17 00:28:24.078246 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:28:24.088184 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:28:24.108906 systemd[1]: Reloading requested from client PID 2451 ('systemctl') (unit session-7.scope)... May 17 00:28:24.108916 systemd[1]: Reloading... May 17 00:28:24.170067 zram_generator::config[2489]: No configuration found. May 17 00:28:24.254237 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:28:24.309932 systemd[1]: Reloading finished in 200 ms. May 17 00:28:24.341337 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 17 00:28:24.341855 systemd[1]: kubelet.service: Failed with result 'signal'. May 17 00:28:24.342247 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:28:24.343753 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:28:24.442143 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:28:24.445927 (kubelet)[2554]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 17 00:28:24.476432 kubelet[2554]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 00:28:24.476432 kubelet[2554]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
May 17 00:28:24.476432 kubelet[2554]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 00:28:24.477877 kubelet[2554]: I0517 00:28:24.477840 2554 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 17 00:28:24.990503 kubelet[2554]: I0517 00:28:24.990466 2554 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" May 17 00:28:24.990503 kubelet[2554]: I0517 00:28:24.990489 2554 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 17 00:28:24.990697 kubelet[2554]: I0517 00:28:24.990679 2554 server.go:934] "Client rotation is on, will bootstrap in background" May 17 00:28:25.012854 kubelet[2554]: I0517 00:28:25.012821 2554 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 17 00:28:25.016421 kubelet[2554]: E0517 00:28:25.016389 2554 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://135.181.90.190:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 135.181.90.190:6443: connect: connection refused" logger="UnhandledError" May 17 00:28:25.023041 kubelet[2554]: E0517 00:28:25.023003 2554 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 17 00:28:25.023086 kubelet[2554]: I0517 00:28:25.023042 2554 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 17 00:28:25.030876 kubelet[2554]: I0517 00:28:25.030769 2554 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 17 00:28:25.035136 kubelet[2554]: I0517 00:28:25.035105 2554 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 17 00:28:25.035289 kubelet[2554]: I0517 00:28:25.035254 2554 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 17 00:28:25.035461 kubelet[2554]: I0517 00:28:25.035283 2554 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-3-n-556bea0d1e","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} May 17 00:28:25.035461 kubelet[2554]: I0517 00:28:25.035460 2554 topology_manager.go:138] "Creating topology manager with none policy" May 17 00:28:25.035548 kubelet[2554]: I0517 00:28:25.035470 2554 container_manager_linux.go:300] "Creating device plugin manager" May 17 00:28:25.035548 kubelet[2554]: I0517 00:28:25.035543 2554 state_mem.go:36] "Initialized new in-memory state store" May 17 00:28:25.038544 kubelet[2554]: I0517 00:28:25.038353 2554 kubelet.go:408] "Attempting to sync node with API server" May 17 00:28:25.038544 kubelet[2554]: I0517 00:28:25.038374 2554 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 17 00:28:25.038544 kubelet[2554]: I0517 00:28:25.038402 2554 kubelet.go:314] "Adding apiserver pod source" May 17 00:28:25.038544 kubelet[2554]: I0517 00:28:25.038416 2554 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 17 00:28:25.043651 kubelet[2554]: W0517 00:28:25.043616 2554 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://135.181.90.190:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-3-n-556bea0d1e&limit=500&resourceVersion=0": dial tcp 135.181.90.190:6443: connect: connection refused May 17 00:28:25.043695 kubelet[2554]: E0517 00:28:25.043660 2554 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://135.181.90.190:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-3-n-556bea0d1e&limit=500&resourceVersion=0\": dial tcp 135.181.90.190:6443: connect: connection refused" logger="UnhandledError" May 17 00:28:25.045131 kubelet[2554]: W0517 00:28:25.045012 2554 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://135.181.90.190:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 135.181.90.190:6443: connect: connection refused May 17 00:28:25.045131 kubelet[2554]: E0517 00:28:25.045057 2554 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://135.181.90.190:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 135.181.90.190:6443: connect: connection refused" logger="UnhandledError" May 17 00:28:25.045131 kubelet[2554]: I0517 00:28:25.045066 2554 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 17 00:28:25.047471 kubelet[2554]: I0517 00:28:25.047451 2554 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 17 00:28:25.048001 kubelet[2554]: W0517 00:28:25.047978 2554 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 17 00:28:25.048448 kubelet[2554]: I0517 00:28:25.048429 2554 server.go:1274] "Started kubelet" May 17 00:28:25.048965 kubelet[2554]: I0517 00:28:25.048942 2554 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 17 00:28:25.049582 kubelet[2554]: I0517 00:28:25.049560 2554 server.go:449] "Adding debug handlers to kubelet server" May 17 00:28:25.052340 kubelet[2554]: I0517 00:28:25.052315 2554 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 17 00:28:25.054218 kubelet[2554]: I0517 00:28:25.053838 2554 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 17 00:28:25.054218 kubelet[2554]: I0517 00:28:25.053984 2554 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 17 00:28:25.057370 kubelet[2554]: I0517 00:28:25.056656 2554 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 17 00:28:25.057370 kubelet[2554]: E0517 00:28:25.054163 2554 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://135.181.90.190:6443/api/v1/namespaces/default/events\": dial tcp 135.181.90.190:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-3-n-556bea0d1e.184028f8ef1d1c0a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-3-n-556bea0d1e,UID:ci-4081-3-3-n-556bea0d1e,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-3-n-556bea0d1e,},FirstTimestamp:2025-05-17 00:28:25.048415242 +0000 UTC m=+0.599821125,LastTimestamp:2025-05-17 00:28:25.048415242 +0000 UTC m=+0.599821125,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-3-n-556bea0d1e,}" May 17 00:28:25.057849 kubelet[2554]: I0517 
00:28:25.057825 2554 volume_manager.go:289] "Starting Kubelet Volume Manager" May 17 00:28:25.057967 kubelet[2554]: E0517 00:28:25.057948 2554 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-3-n-556bea0d1e\" not found" May 17 00:28:25.059442 kubelet[2554]: E0517 00:28:25.059195 2554 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://135.181.90.190:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-3-n-556bea0d1e?timeout=10s\": dial tcp 135.181.90.190:6443: connect: connection refused" interval="200ms" May 17 00:28:25.060669 kubelet[2554]: I0517 00:28:25.060658 2554 desired_state_of_world_populator.go:147] "Desired state populator starts to run" May 17 00:28:25.061472 kubelet[2554]: I0517 00:28:25.060812 2554 factory.go:221] Registration of the containerd container factory successfully May 17 00:28:25.061472 kubelet[2554]: I0517 00:28:25.060823 2554 factory.go:221] Registration of the systemd container factory successfully May 17 00:28:25.061472 kubelet[2554]: I0517 00:28:25.060874 2554 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 17 00:28:25.066637 kubelet[2554]: I0517 00:28:25.060688 2554 reconciler.go:26] "Reconciler: start to sync state" May 17 00:28:25.070180 kubelet[2554]: I0517 00:28:25.070146 2554 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 17 00:28:25.070873 kubelet[2554]: I0517 00:28:25.070847 2554 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 17 00:28:25.070873 kubelet[2554]: I0517 00:28:25.070867 2554 status_manager.go:217] "Starting to sync pod status with apiserver" May 17 00:28:25.070925 kubelet[2554]: I0517 00:28:25.070879 2554 kubelet.go:2321] "Starting kubelet main sync loop" May 17 00:28:25.070925 kubelet[2554]: E0517 00:28:25.070906 2554 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 17 00:28:25.075674 kubelet[2554]: W0517 00:28:25.075640 2554 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://135.181.90.190:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 135.181.90.190:6443: connect: connection refused May 17 00:28:25.075720 kubelet[2554]: E0517 00:28:25.075679 2554 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://135.181.90.190:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 135.181.90.190:6443: connect: connection refused" logger="UnhandledError" May 17 00:28:25.075778 kubelet[2554]: E0517 00:28:25.075758 2554 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 17 00:28:25.075845 kubelet[2554]: W0517 00:28:25.075820 2554 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://135.181.90.190:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 135.181.90.190:6443: connect: connection refused May 17 00:28:25.075867 kubelet[2554]: E0517 00:28:25.075847 2554 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://135.181.90.190:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 135.181.90.190:6443: connect: connection refused" logger="UnhandledError" May 17 00:28:25.090416 kubelet[2554]: I0517 00:28:25.090387 2554 cpu_manager.go:214] "Starting CPU manager" policy="none" May 17 00:28:25.090416 kubelet[2554]: I0517 00:28:25.090399 2554 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 17 00:28:25.090682 kubelet[2554]: I0517 00:28:25.090541 2554 state_mem.go:36] "Initialized new in-memory state store" May 17 00:28:25.092928 kubelet[2554]: I0517 00:28:25.092883 2554 policy_none.go:49] "None policy: Start" May 17 00:28:25.093298 kubelet[2554]: I0517 00:28:25.093284 2554 memory_manager.go:170] "Starting memorymanager" policy="None" May 17 00:28:25.093298 kubelet[2554]: I0517 00:28:25.093299 2554 state_mem.go:35] "Initializing new in-memory state store" May 17 00:28:25.096360 kubelet[2554]: I0517 00:28:25.096306 2554 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 17 00:28:25.096482 kubelet[2554]: I0517 00:28:25.096423 2554 eviction_manager.go:189] "Eviction manager: starting control loop" May 17 00:28:25.096482 kubelet[2554]: I0517 00:28:25.096433 2554 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 17 00:28:25.097394 kubelet[2554]: I0517 00:28:25.097308 2554 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 17 00:28:25.098167 kubelet[2554]: E0517 00:28:25.098156 2554 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-3-n-556bea0d1e\" not found" May 17 00:28:25.197919 kubelet[2554]: I0517 00:28:25.197879 2554 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-3-n-556bea0d1e" May 17 00:28:25.198249 kubelet[2554]: E0517 00:28:25.198210 2554 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://135.181.90.190:6443/api/v1/nodes\": dial tcp 135.181.90.190:6443: connect: connection refused" node="ci-4081-3-3-n-556bea0d1e" May 17 00:28:25.259651 kubelet[2554]: E0517 00:28:25.259565 2554 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://135.181.90.190:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-3-n-556bea0d1e?timeout=10s\": dial tcp 135.181.90.190:6443: connect: connection refused" interval="400ms" May 17 00:28:25.268930 kubelet[2554]: I0517 00:28:25.268880 2554 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bfe7a7960296c153785a2e1f59dc14fe-k8s-certs\") pod \"kube-apiserver-ci-4081-3-3-n-556bea0d1e\" (UID: \"bfe7a7960296c153785a2e1f59dc14fe\") " pod="kube-system/kube-apiserver-ci-4081-3-3-n-556bea0d1e" May 17 
00:28:25.268930 kubelet[2554]: I0517 00:28:25.268907 2554 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/766a9442c2c8e9b92d231aa58b7beac6-ca-certs\") pod \"kube-controller-manager-ci-4081-3-3-n-556bea0d1e\" (UID: \"766a9442c2c8e9b92d231aa58b7beac6\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-556bea0d1e" May 17 00:28:25.268930 kubelet[2554]: I0517 00:28:25.268926 2554 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/766a9442c2c8e9b92d231aa58b7beac6-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-3-n-556bea0d1e\" (UID: \"766a9442c2c8e9b92d231aa58b7beac6\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-556bea0d1e" May 17 00:28:25.269120 kubelet[2554]: I0517 00:28:25.268960 2554 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/766a9442c2c8e9b92d231aa58b7beac6-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-3-n-556bea0d1e\" (UID: \"766a9442c2c8e9b92d231aa58b7beac6\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-556bea0d1e" May 17 00:28:25.269120 kubelet[2554]: I0517 00:28:25.268976 2554 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bfe7a7960296c153785a2e1f59dc14fe-ca-certs\") pod \"kube-apiserver-ci-4081-3-3-n-556bea0d1e\" (UID: \"bfe7a7960296c153785a2e1f59dc14fe\") " pod="kube-system/kube-apiserver-ci-4081-3-3-n-556bea0d1e" May 17 00:28:25.269120 kubelet[2554]: I0517 00:28:25.269002 2554 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/766a9442c2c8e9b92d231aa58b7beac6-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-3-n-556bea0d1e\" (UID: \"766a9442c2c8e9b92d231aa58b7beac6\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-556bea0d1e" May 17 00:28:25.269120 kubelet[2554]: I0517 00:28:25.269018 2554 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/766a9442c2c8e9b92d231aa58b7beac6-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-3-n-556bea0d1e\" (UID: \"766a9442c2c8e9b92d231aa58b7beac6\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-556bea0d1e" May 17 00:28:25.269120 kubelet[2554]: I0517 00:28:25.269050 2554 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/de2281cec16c6cb5cc9477d37758fbb1-kubeconfig\") pod \"kube-scheduler-ci-4081-3-3-n-556bea0d1e\" (UID: \"de2281cec16c6cb5cc9477d37758fbb1\") " pod="kube-system/kube-scheduler-ci-4081-3-3-n-556bea0d1e" May 17 00:28:25.269229 kubelet[2554]: I0517 00:28:25.269065 2554 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bfe7a7960296c153785a2e1f59dc14fe-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-3-n-556bea0d1e\" (UID: \"bfe7a7960296c153785a2e1f59dc14fe\") " pod="kube-system/kube-apiserver-ci-4081-3-3-n-556bea0d1e" May 17 00:28:25.400190 kubelet[2554]: I0517 00:28:25.400168 2554 kubelet_node_status.go:72] "Attempting to 
register node" node="ci-4081-3-3-n-556bea0d1e" May 17 00:28:25.400496 kubelet[2554]: E0517 00:28:25.400463 2554 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://135.181.90.190:6443/api/v1/nodes\": dial tcp 135.181.90.190:6443: connect: connection refused" node="ci-4081-3-3-n-556bea0d1e" May 17 00:28:25.478082 containerd[1620]: time="2025-05-17T00:28:25.478046129Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-3-n-556bea0d1e,Uid:bfe7a7960296c153785a2e1f59dc14fe,Namespace:kube-system,Attempt:0,}" May 17 00:28:25.482536 containerd[1620]: time="2025-05-17T00:28:25.482452945Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-3-n-556bea0d1e,Uid:766a9442c2c8e9b92d231aa58b7beac6,Namespace:kube-system,Attempt:0,}" May 17 00:28:25.482743 containerd[1620]: time="2025-05-17T00:28:25.482457544Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-3-n-556bea0d1e,Uid:de2281cec16c6cb5cc9477d37758fbb1,Namespace:kube-system,Attempt:0,}" May 17 00:28:25.660995 kubelet[2554]: E0517 00:28:25.660937 2554 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://135.181.90.190:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-3-n-556bea0d1e?timeout=10s\": dial tcp 135.181.90.190:6443: connect: connection refused" interval="800ms" May 17 00:28:25.802383 kubelet[2554]: I0517 00:28:25.802312 2554 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-3-n-556bea0d1e" May 17 00:28:25.802743 kubelet[2554]: E0517 00:28:25.802699 2554 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://135.181.90.190:6443/api/v1/nodes\": dial tcp 135.181.90.190:6443: connect: connection refused" node="ci-4081-3-3-n-556bea0d1e" May 17 00:28:25.931542 kubelet[2554]: W0517 00:28:25.930074 2554 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://135.181.90.190:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 135.181.90.190:6443: connect: connection refused May 17 00:28:25.931542 kubelet[2554]: E0517 00:28:25.931497 2554 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://135.181.90.190:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 135.181.90.190:6443: connect: connection refused" logger="UnhandledError" May 17 00:28:25.931476 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1096794745.mount: Deactivated successfully. 
May 17 00:28:25.939835 containerd[1620]: time="2025-05-17T00:28:25.939798466Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 00:28:25.940644 containerd[1620]: time="2025-05-17T00:28:25.940598052Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 00:28:25.941365 containerd[1620]: time="2025-05-17T00:28:25.941320853Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 17 00:28:25.941874 containerd[1620]: time="2025-05-17T00:28:25.941836172Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312078" May 17 00:28:25.942761 containerd[1620]: time="2025-05-17T00:28:25.942737289Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 00:28:25.943893 containerd[1620]: time="2025-05-17T00:28:25.943648564Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 00:28:25.943893 containerd[1620]: time="2025-05-17T00:28:25.943825747Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 17 00:28:25.946213 containerd[1620]: time="2025-05-17T00:28:25.946190340Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 00:28:25.948049 containerd[1620]: time="2025-05-17T00:28:25.948013701Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 469.893542ms" May 17 00:28:25.949446 containerd[1620]: time="2025-05-17T00:28:25.949374294Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 466.866857ms" May 17 00:28:25.950886 containerd[1620]: time="2025-05-17T00:28:25.950851134Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 468.209964ms" May 17 00:28:26.040046 kubelet[2554]: W0517 00:28:26.038625 2554 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://135.181.90.190:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 135.181.90.190:6443: connect: connection refused May 17 00:28:26.040046 
kubelet[2554]: E0517 00:28:26.038682 2554 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://135.181.90.190:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 135.181.90.190:6443: connect: connection refused" logger="UnhandledError" May 17 00:28:26.050375 containerd[1620]: time="2025-05-17T00:28:26.050190485Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:28:26.050375 containerd[1620]: time="2025-05-17T00:28:26.050235480Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:28:26.050375 containerd[1620]: time="2025-05-17T00:28:26.050248394Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:28:26.050375 containerd[1620]: time="2025-05-17T00:28:26.050307204Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:28:26.055340 containerd[1620]: time="2025-05-17T00:28:26.055151935Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:28:26.055340 containerd[1620]: time="2025-05-17T00:28:26.055190106Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:28:26.055340 containerd[1620]: time="2025-05-17T00:28:26.055199975Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:28:26.055340 containerd[1620]: time="2025-05-17T00:28:26.055257833Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:28:26.055340 containerd[1620]: time="2025-05-17T00:28:26.054417111Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:28:26.055340 containerd[1620]: time="2025-05-17T00:28:26.054452668Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:28:26.055340 containerd[1620]: time="2025-05-17T00:28:26.054474289Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:28:26.055340 containerd[1620]: time="2025-05-17T00:28:26.054544561Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:28:26.116957 containerd[1620]: time="2025-05-17T00:28:26.116928520Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-3-n-556bea0d1e,Uid:de2281cec16c6cb5cc9477d37758fbb1,Namespace:kube-system,Attempt:0,} returns sandbox id \"712b7800a5cb02bd1be3c75fa1291d840fe1488493de5be70579a21ffd39a57f\"" May 17 00:28:26.118457 containerd[1620]: time="2025-05-17T00:28:26.118432031Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-3-n-556bea0d1e,Uid:bfe7a7960296c153785a2e1f59dc14fe,Namespace:kube-system,Attempt:0,} returns sandbox id \"66a7df510bad28f0d809a9f1a894ec3afc8454bbce0e4c00c4244a6db3fa06e7\"" May 17 00:28:26.124230 containerd[1620]: time="2025-05-17T00:28:26.124210918Z" level=info msg="CreateContainer within sandbox \"66a7df510bad28f0d809a9f1a894ec3afc8454bbce0e4c00c4244a6db3fa06e7\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 17 00:28:26.124442 containerd[1620]: time="2025-05-17T00:28:26.124291130Z" level=info msg="CreateContainer within sandbox \"712b7800a5cb02bd1be3c75fa1291d840fe1488493de5be70579a21ffd39a57f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 17 00:28:26.125571 containerd[1620]: time="2025-05-17T00:28:26.125545050Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-3-n-556bea0d1e,Uid:766a9442c2c8e9b92d231aa58b7beac6,Namespace:kube-system,Attempt:0,} returns sandbox id \"107dee571c785fc1425fe945b6d67f4c0c90325285886ef7bd47e1fd96dcfa13\"" May 17 00:28:26.128195 containerd[1620]: time="2025-05-17T00:28:26.128178657Z" level=info msg="CreateContainer within sandbox \"107dee571c785fc1425fe945b6d67f4c0c90325285886ef7bd47e1fd96dcfa13\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 17 00:28:26.141595 containerd[1620]: time="2025-05-17T00:28:26.141543596Z" level=info msg="CreateContainer within sandbox \"712b7800a5cb02bd1be3c75fa1291d840fe1488493de5be70579a21ffd39a57f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"7988a39059897c1d9b0ae14562848c2f6a7dde6f96d2fc3e0c78731cf122275a\"" May 17 00:28:26.142205 containerd[1620]: time="2025-05-17T00:28:26.142113578Z" level=info msg="StartContainer for \"7988a39059897c1d9b0ae14562848c2f6a7dde6f96d2fc3e0c78731cf122275a\"" May 17 00:28:26.143412 containerd[1620]: time="2025-05-17T00:28:26.143392837Z" level=info msg="CreateContainer within sandbox \"107dee571c785fc1425fe945b6d67f4c0c90325285886ef7bd47e1fd96dcfa13\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"15f521a7362867e160d8ff0e849f40e7dfbe980ac76ead613ed8ea6e39a97f31\"" May 17 00:28:26.144047 containerd[1620]: time="2025-05-17T00:28:26.144009417Z" level=info msg="StartContainer for \"15f521a7362867e160d8ff0e849f40e7dfbe980ac76ead613ed8ea6e39a97f31\"" May 17 00:28:26.144655 containerd[1620]: time="2025-05-17T00:28:26.144574802Z" level=info msg="CreateContainer within sandbox \"66a7df510bad28f0d809a9f1a894ec3afc8454bbce0e4c00c4244a6db3fa06e7\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"82f6de6da9aed9d8a97f49aceec7c6277172a9c268b4b76380f2758cd90a5d8b\"" May 17 00:28:26.144883 containerd[1620]: time="2025-05-17T00:28:26.144868925Z" level=info msg="StartContainer for \"82f6de6da9aed9d8a97f49aceec7c6277172a9c268b4b76380f2758cd90a5d8b\"" May 17 00:28:26.220591 containerd[1620]: time="2025-05-17T00:28:26.220474012Z" level=info 
msg="StartContainer for \"82f6de6da9aed9d8a97f49aceec7c6277172a9c268b4b76380f2758cd90a5d8b\" returns successfully" May 17 00:28:26.220591 containerd[1620]: time="2025-05-17T00:28:26.220482397Z" level=info msg="StartContainer for \"7988a39059897c1d9b0ae14562848c2f6a7dde6f96d2fc3e0c78731cf122275a\" returns successfully" May 17 00:28:26.241490 containerd[1620]: time="2025-05-17T00:28:26.241019466Z" level=info msg="StartContainer for \"15f521a7362867e160d8ff0e849f40e7dfbe980ac76ead613ed8ea6e39a97f31\" returns successfully" May 17 00:28:26.300533 kubelet[2554]: W0517 00:28:26.300436 2554 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://135.181.90.190:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 135.181.90.190:6443: connect: connection refused May 17 00:28:26.300533 kubelet[2554]: E0517 00:28:26.300505 2554 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://135.181.90.190:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 135.181.90.190:6443: connect: connection refused" logger="UnhandledError" May 17 00:28:26.462813 kubelet[2554]: E0517 00:28:26.462763 2554 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://135.181.90.190:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-3-n-556bea0d1e?timeout=10s\": dial tcp 135.181.90.190:6443: connect: connection refused" interval="1.6s" May 17 00:28:26.482405 kubelet[2554]: W0517 00:28:26.482095 2554 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://135.181.90.190:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-3-n-556bea0d1e&limit=500&resourceVersion=0": dial tcp 135.181.90.190:6443: connect: connection refused May 17 00:28:26.482405 kubelet[2554]: E0517 00:28:26.482148 2554 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://135.181.90.190:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-3-n-556bea0d1e&limit=500&resourceVersion=0\": dial tcp 135.181.90.190:6443: connect: connection refused" logger="UnhandledError" May 17 00:28:26.605066 kubelet[2554]: I0517 00:28:26.604120 2554 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-3-n-556bea0d1e" May 17 00:28:27.691911 kubelet[2554]: I0517 00:28:27.689619 2554 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081-3-3-n-556bea0d1e" May 17 00:28:27.691911 kubelet[2554]: E0517 00:28:27.689649 2554 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-4081-3-3-n-556bea0d1e\": node \"ci-4081-3-3-n-556bea0d1e\" not found" May 17 00:28:27.702416 kubelet[2554]: E0517 00:28:27.702244 2554 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-3-n-556bea0d1e\" not found" May 17 00:28:27.802931 kubelet[2554]: E0517 00:28:27.802855 2554 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-3-n-556bea0d1e\" not found" May 17 00:28:27.904096 kubelet[2554]: E0517 00:28:27.904018 2554 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-3-n-556bea0d1e\" not found" May 17 00:28:28.004630 kubelet[2554]: E0517 00:28:28.004436 2554 
kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-3-n-556bea0d1e\" not found" May 17 00:28:28.105164 kubelet[2554]: E0517 00:28:28.105105 2554 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-3-n-556bea0d1e\" not found" May 17 00:28:28.205713 kubelet[2554]: E0517 00:28:28.205648 2554 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-3-n-556bea0d1e\" not found" May 17 00:28:28.306488 kubelet[2554]: E0517 00:28:28.306419 2554 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-3-n-556bea0d1e\" not found" May 17 00:28:28.407130 kubelet[2554]: E0517 00:28:28.407082 2554 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-3-n-556bea0d1e\" not found" May 17 00:28:28.507803 kubelet[2554]: E0517 00:28:28.507761 2554 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-3-n-556bea0d1e\" not found" May 17 00:28:28.608168 kubelet[2554]: E0517 00:28:28.608061 2554 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-3-n-556bea0d1e\" not found" May 17 00:28:28.708301 kubelet[2554]: E0517 00:28:28.708263 2554 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-3-n-556bea0d1e\" not found" May 17 00:28:28.808961 kubelet[2554]: E0517 00:28:28.808894 2554 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-3-n-556bea0d1e\" not found" May 17 00:28:28.910635 kubelet[2554]: E0517 00:28:28.910445 2554 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-3-n-556bea0d1e\" not found" May 17 00:28:29.011321 kubelet[2554]: E0517 00:28:29.011267 2554 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-3-n-556bea0d1e\" not found" May 17 00:28:29.111629 kubelet[2554]: E0517 00:28:29.111580 2554 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-3-n-556bea0d1e\" not found" May 17 00:28:29.212013 kubelet[2554]: E0517 00:28:29.211850 2554 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-3-n-556bea0d1e\" not found" May 17 00:28:29.312508 kubelet[2554]: E0517 00:28:29.312458 2554 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-3-n-556bea0d1e\" not found" May 17 00:28:29.413156 kubelet[2554]: E0517 00:28:29.413098 2554 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-3-n-556bea0d1e\" not found" May 17 00:28:29.897093 systemd[1]: Reloading requested from client PID 2828 ('systemctl') (unit session-7.scope)... May 17 00:28:29.897117 systemd[1]: Reloading... May 17 00:28:29.981045 zram_generator::config[2869]: No configuration found. May 17 00:28:30.048572 kubelet[2554]: I0517 00:28:30.047564 2554 apiserver.go:52] "Watching apiserver" May 17 00:28:30.058087 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
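The repeated "connection refused" and "node not found" errors above are the kubelet retrying against https://135.181.90.190:6443 while the kube-apiserver static pod started earlier in the log is not yet serving; once that container binds port 6443 the retries succeed and the node registers. Below is a minimal sketch of the same readiness check, using only the Go standard library and the endpoint address taken from these log lines. It is illustrative only and is not the kubelet's own retry logic.

    // readiness_probe.go: a minimal sketch (not kubelet code) that polls the
    // kube-apiserver endpoint seen in the reflector errors above until it
    // accepts TCP connections.
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        const apiServer = "135.181.90.190:6443" // address copied from the kubelet errors above

        for {
            conn, err := net.DialTimeout("tcp", apiServer, 2*time.Second)
            if err != nil {
                // While nothing is listening on 6443 this prints the same
                // "connect: connection refused" string the reflectors report.
                fmt.Printf("not ready yet: %v\n", err)
                time.Sleep(time.Second)
                continue
            }
            conn.Close()
            fmt.Println("kube-apiserver is accepting connections")
            return
        }
    }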
May 17 00:28:30.061079 kubelet[2554]: I0517 00:28:30.061042 2554 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" May 17 00:28:30.118909 systemd[1]: Reloading finished in 221 ms. May 17 00:28:30.146273 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:28:30.169724 systemd[1]: kubelet.service: Deactivated successfully. May 17 00:28:30.169953 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:28:30.176272 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:28:30.269187 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:28:30.273205 (kubelet)[2929]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 17 00:28:30.315603 kubelet[2929]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 00:28:30.315603 kubelet[2929]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 17 00:28:30.315603 kubelet[2929]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 00:28:30.315603 kubelet[2929]: I0517 00:28:30.315373 2929 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 17 00:28:30.320892 kubelet[2929]: I0517 00:28:30.320877 2929 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" May 17 00:28:30.322046 kubelet[2929]: I0517 00:28:30.320959 2929 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 17 00:28:30.322046 kubelet[2929]: I0517 00:28:30.321131 2929 server.go:934] "Client rotation is on, will bootstrap in background" May 17 00:28:30.322124 kubelet[2929]: I0517 00:28:30.322105 2929 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 17 00:28:30.325196 kubelet[2929]: I0517 00:28:30.325108 2929 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 17 00:28:30.329234 kubelet[2929]: E0517 00:28:30.329210 2929 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 17 00:28:30.329234 kubelet[2929]: I0517 00:28:30.329234 2929 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 17 00:28:30.331234 kubelet[2929]: I0517 00:28:30.331218 2929 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 17 00:28:30.331469 kubelet[2929]: I0517 00:28:30.331454 2929 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 17 00:28:30.331546 kubelet[2929]: I0517 00:28:30.331524 2929 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 17 00:28:30.331658 kubelet[2929]: I0517 00:28:30.331542 2929 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-3-n-556bea0d1e","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} May 17 00:28:30.331728 kubelet[2929]: I0517 00:28:30.331660 2929 topology_manager.go:138] "Creating topology manager with none policy" May 17 00:28:30.331728 kubelet[2929]: I0517 00:28:30.331666 2929 container_manager_linux.go:300] "Creating device plugin manager" May 17 00:28:30.331728 kubelet[2929]: I0517 00:28:30.331686 2929 state_mem.go:36] "Initialized new in-memory state store" May 17 00:28:30.331777 kubelet[2929]: I0517 00:28:30.331750 2929 kubelet.go:408] "Attempting to sync node with API server" May 17 00:28:30.331777 kubelet[2929]: I0517 00:28:30.331759 2929 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 17 00:28:30.331777 kubelet[2929]: I0517 00:28:30.331776 2929 kubelet.go:314] "Adding apiserver pod source" May 17 00:28:30.332473 kubelet[2929]: I0517 00:28:30.331787 2929 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 17 00:28:30.335301 kubelet[2929]: I0517 00:28:30.334842 2929 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 17 00:28:30.335301 kubelet[2929]: I0517 00:28:30.335158 2929 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 17 00:28:30.337254 kubelet[2929]: I0517 00:28:30.337243 2929 server.go:1274] "Started kubelet" May 17 00:28:30.344046 kubelet[2929]: I0517 00:28:30.341978 2929 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 17 00:28:30.344046 
kubelet[2929]: I0517 00:28:30.343683 2929 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 17 00:28:30.344046 kubelet[2929]: I0517 00:28:30.343738 2929 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 17 00:28:30.344475 kubelet[2929]: I0517 00:28:30.344464 2929 server.go:449] "Adding debug handlers to kubelet server" May 17 00:28:30.345276 kubelet[2929]: I0517 00:28:30.345257 2929 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 17 00:28:30.345463 kubelet[2929]: I0517 00:28:30.345452 2929 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 17 00:28:30.347652 kubelet[2929]: I0517 00:28:30.347641 2929 volume_manager.go:289] "Starting Kubelet Volume Manager" May 17 00:28:30.348988 kubelet[2929]: I0517 00:28:30.348971 2929 desired_state_of_world_populator.go:147] "Desired state populator starts to run" May 17 00:28:30.349168 kubelet[2929]: I0517 00:28:30.349160 2929 reconciler.go:26] "Reconciler: start to sync state" May 17 00:28:30.353893 kubelet[2929]: E0517 00:28:30.353670 2929 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 17 00:28:30.354539 kubelet[2929]: I0517 00:28:30.354521 2929 factory.go:221] Registration of the systemd container factory successfully May 17 00:28:30.354610 kubelet[2929]: I0517 00:28:30.354591 2929 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 17 00:28:30.356150 kubelet[2929]: I0517 00:28:30.356133 2929 factory.go:221] Registration of the containerd container factory successfully May 17 00:28:30.359432 kubelet[2929]: I0517 00:28:30.359415 2929 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 17 00:28:30.360146 kubelet[2929]: I0517 00:28:30.360135 2929 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 17 00:28:30.360212 kubelet[2929]: I0517 00:28:30.360204 2929 status_manager.go:217] "Starting to sync pod status with apiserver" May 17 00:28:30.360263 kubelet[2929]: I0517 00:28:30.360256 2929 kubelet.go:2321] "Starting kubelet main sync loop" May 17 00:28:30.360696 kubelet[2929]: E0517 00:28:30.360681 2929 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 17 00:28:30.404645 kubelet[2929]: I0517 00:28:30.404626 2929 cpu_manager.go:214] "Starting CPU manager" policy="none" May 17 00:28:30.404787 kubelet[2929]: I0517 00:28:30.404779 2929 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 17 00:28:30.404876 kubelet[2929]: I0517 00:28:30.404868 2929 state_mem.go:36] "Initialized new in-memory state store" May 17 00:28:30.405085 kubelet[2929]: I0517 00:28:30.405074 2929 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 17 00:28:30.405167 kubelet[2929]: I0517 00:28:30.405148 2929 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 17 00:28:30.405221 kubelet[2929]: I0517 00:28:30.405205 2929 policy_none.go:49] "None policy: Start" May 17 00:28:30.405732 kubelet[2929]: I0517 00:28:30.405723 2929 memory_manager.go:170] "Starting memorymanager" policy="None" May 17 00:28:30.405858 kubelet[2929]: I0517 00:28:30.405851 2929 state_mem.go:35] "Initializing new in-memory state store" May 17 00:28:30.406049 kubelet[2929]: I0517 00:28:30.406040 2929 state_mem.go:75] "Updated machine memory state" May 17 00:28:30.406866 kubelet[2929]: I0517 00:28:30.406854 2929 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 17 00:28:30.407922 kubelet[2929]: I0517 00:28:30.407912 2929 eviction_manager.go:189] "Eviction manager: starting control loop" May 17 00:28:30.408118 kubelet[2929]: I0517 00:28:30.408011 2929 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 17 00:28:30.408960 kubelet[2929]: I0517 00:28:30.408951 2929 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 17 00:28:30.515431 kubelet[2929]: I0517 00:28:30.515228 2929 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-3-n-556bea0d1e" May 17 00:28:30.526076 kubelet[2929]: I0517 00:28:30.525522 2929 kubelet_node_status.go:111] "Node was previously registered" node="ci-4081-3-3-n-556bea0d1e" May 17 00:28:30.526076 kubelet[2929]: I0517 00:28:30.525599 2929 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081-3-3-n-556bea0d1e" May 17 00:28:30.550968 kubelet[2929]: I0517 00:28:30.550907 2929 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bfe7a7960296c153785a2e1f59dc14fe-ca-certs\") pod \"kube-apiserver-ci-4081-3-3-n-556bea0d1e\" (UID: \"bfe7a7960296c153785a2e1f59dc14fe\") " pod="kube-system/kube-apiserver-ci-4081-3-3-n-556bea0d1e" May 17 00:28:30.651795 kubelet[2929]: I0517 00:28:30.651765 2929 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bfe7a7960296c153785a2e1f59dc14fe-k8s-certs\") pod \"kube-apiserver-ci-4081-3-3-n-556bea0d1e\" (UID: \"bfe7a7960296c153785a2e1f59dc14fe\") " pod="kube-system/kube-apiserver-ci-4081-3-3-n-556bea0d1e" May 17 00:28:30.651795 kubelet[2929]: I0517 00:28:30.651797 2929 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/766a9442c2c8e9b92d231aa58b7beac6-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-3-n-556bea0d1e\" (UID: \"766a9442c2c8e9b92d231aa58b7beac6\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-556bea0d1e" May 17 00:28:30.651927 kubelet[2929]: I0517 00:28:30.651816 2929 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/de2281cec16c6cb5cc9477d37758fbb1-kubeconfig\") pod \"kube-scheduler-ci-4081-3-3-n-556bea0d1e\" (UID: \"de2281cec16c6cb5cc9477d37758fbb1\") " pod="kube-system/kube-scheduler-ci-4081-3-3-n-556bea0d1e" May 17 00:28:30.651927 kubelet[2929]: I0517 00:28:30.651830 2929 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/766a9442c2c8e9b92d231aa58b7beac6-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-3-n-556bea0d1e\" (UID: \"766a9442c2c8e9b92d231aa58b7beac6\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-556bea0d1e" May 17 00:28:30.651927 kubelet[2929]: I0517 00:28:30.651844 2929 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/766a9442c2c8e9b92d231aa58b7beac6-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-3-n-556bea0d1e\" (UID: \"766a9442c2c8e9b92d231aa58b7beac6\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-556bea0d1e" May 17 00:28:30.651927 kubelet[2929]: I0517 00:28:30.651860 2929 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/766a9442c2c8e9b92d231aa58b7beac6-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-3-n-556bea0d1e\" (UID: \"766a9442c2c8e9b92d231aa58b7beac6\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-556bea0d1e" May 17 00:28:30.651927 kubelet[2929]: I0517 00:28:30.651899 2929 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bfe7a7960296c153785a2e1f59dc14fe-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-3-n-556bea0d1e\" (UID: \"bfe7a7960296c153785a2e1f59dc14fe\") " pod="kube-system/kube-apiserver-ci-4081-3-3-n-556bea0d1e" May 17 00:28:30.652093 kubelet[2929]: I0517 00:28:30.651914 2929 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/766a9442c2c8e9b92d231aa58b7beac6-ca-certs\") pod \"kube-controller-manager-ci-4081-3-3-n-556bea0d1e\" (UID: \"766a9442c2c8e9b92d231aa58b7beac6\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-556bea0d1e" May 17 00:28:31.335300 kubelet[2929]: I0517 00:28:31.334345 2929 apiserver.go:52] "Watching apiserver" May 17 00:28:31.349329 kubelet[2929]: I0517 00:28:31.349283 2929 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" May 17 00:28:31.400444 kubelet[2929]: E0517 00:28:31.400194 2929 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081-3-3-n-556bea0d1e\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-3-n-556bea0d1e" May 17 00:28:31.421669 kubelet[2929]: I0517 00:28:31.421172 2929 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-3-n-556bea0d1e" podStartSLOduration=1.421157623 podStartE2EDuration="1.421157623s" podCreationTimestamp="2025-05-17 00:28:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:28:31.420073543 +0000 UTC m=+1.142503857" watchObservedRunningTime="2025-05-17 00:28:31.421157623 +0000 UTC m=+1.143587937" May 17 00:28:31.430804 kubelet[2929]: I0517 00:28:31.429999 2929 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-3-n-556bea0d1e" podStartSLOduration=1.429974513 podStartE2EDuration="1.429974513s" podCreationTimestamp="2025-05-17 00:28:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:28:31.429874184 +0000 UTC m=+1.152304498" watchObservedRunningTime="2025-05-17 00:28:31.429974513 +0000 UTC m=+1.152404828" May 17 00:28:31.439693 kubelet[2929]: I0517 00:28:31.439308 2929 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-3-n-556bea0d1e" podStartSLOduration=1.439272828 podStartE2EDuration="1.439272828s" podCreationTimestamp="2025-05-17 00:28:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:28:31.439181377 +0000 UTC m=+1.161611691" watchObservedRunningTime="2025-05-17 00:28:31.439272828 +0000 UTC m=+1.161703143" May 17 00:28:36.300773 kubelet[2929]: I0517 00:28:36.300654 2929 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 17 00:28:36.304079 containerd[1620]: time="2025-05-17T00:28:36.303368995Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
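The pod_startup_latency_tracker entries above report podStartSLOduration and podStartE2EDuration for the static control-plane pods. Because no image pull happened for them (firstStartedPulling and lastFinishedPulling are the zero time), both figures equal watchObservedRunningTime minus podCreationTimestamp. The sketch below recomputes the kube-scheduler figure from the timestamps in that entry; the values are copied from the log and the code uses only the Go standard library.

    // startup_duration.go: recompute the 1.421157623s kube-scheduler startup
    // duration from the timestamps in the tracker entry above.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Layout for the "+0000 UTC" timestamps in the kubelet log; time.Parse
        // accepts the fractional seconds even though the layout omits them.
        const layout = "2006-01-02 15:04:05 -0700 MST"

        created, err := time.Parse(layout, "2025-05-17 00:28:30 +0000 UTC") // podCreationTimestamp
        if err != nil {
            panic(err)
        }
        observed, err := time.Parse(layout, "2025-05-17 00:28:31.421157623 +0000 UTC") // watchObservedRunningTime
        if err != nil {
            panic(err)
        }

        // With no pull window to subtract, the SLO duration equals the
        // end-to-end duration.
        fmt.Println(observed.Sub(created)) // 1.421157623s
    }

This is also consistent with the tigera-operator entry further down, where the roughly 2.05s window between firstStartedPulling and lastFinishedPulling accounts for the difference between its podStartE2EDuration (3.784234059s) and podStartSLOduration (1.734072392s).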
May 17 00:28:36.305243 kubelet[2929]: I0517 00:28:36.303821 2929 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 17 00:28:36.992674 kubelet[2929]: I0517 00:28:36.992585 2929 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a91b9493-5f72-419f-9339-4f304b72899f-xtables-lock\") pod \"kube-proxy-77t4s\" (UID: \"a91b9493-5f72-419f-9339-4f304b72899f\") " pod="kube-system/kube-proxy-77t4s" May 17 00:28:36.992852 kubelet[2929]: I0517 00:28:36.992689 2929 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a91b9493-5f72-419f-9339-4f304b72899f-kube-proxy\") pod \"kube-proxy-77t4s\" (UID: \"a91b9493-5f72-419f-9339-4f304b72899f\") " pod="kube-system/kube-proxy-77t4s" May 17 00:28:36.992852 kubelet[2929]: I0517 00:28:36.992744 2929 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a91b9493-5f72-419f-9339-4f304b72899f-lib-modules\") pod \"kube-proxy-77t4s\" (UID: \"a91b9493-5f72-419f-9339-4f304b72899f\") " pod="kube-system/kube-proxy-77t4s" May 17 00:28:36.992852 kubelet[2929]: I0517 00:28:36.992798 2929 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mpjx\" (UniqueName: \"kubernetes.io/projected/a91b9493-5f72-419f-9339-4f304b72899f-kube-api-access-6mpjx\") pod \"kube-proxy-77t4s\" (UID: \"a91b9493-5f72-419f-9339-4f304b72899f\") " pod="kube-system/kube-proxy-77t4s" May 17 00:28:37.194506 kubelet[2929]: I0517 00:28:37.194460 2929 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b45fg\" (UniqueName: \"kubernetes.io/projected/025ff01f-1b39-4fcd-aa2d-e4604132e570-kube-api-access-b45fg\") pod \"tigera-operator-7c5755cdcb-jzklk\" (UID: \"025ff01f-1b39-4fcd-aa2d-e4604132e570\") " pod="tigera-operator/tigera-operator-7c5755cdcb-jzklk" May 17 00:28:37.194506 kubelet[2929]: I0517 00:28:37.194491 2929 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/025ff01f-1b39-4fcd-aa2d-e4604132e570-var-lib-calico\") pod \"tigera-operator-7c5755cdcb-jzklk\" (UID: \"025ff01f-1b39-4fcd-aa2d-e4604132e570\") " pod="tigera-operator/tigera-operator-7c5755cdcb-jzklk" May 17 00:28:37.235595 containerd[1620]: time="2025-05-17T00:28:37.235546664Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-77t4s,Uid:a91b9493-5f72-419f-9339-4f304b72899f,Namespace:kube-system,Attempt:0,}" May 17 00:28:37.269241 containerd[1620]: time="2025-05-17T00:28:37.268982921Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:28:37.269241 containerd[1620]: time="2025-05-17T00:28:37.269119188Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:28:37.269241 containerd[1620]: time="2025-05-17T00:28:37.269159152Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:28:37.269753 containerd[1620]: time="2025-05-17T00:28:37.269626572Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:28:37.293926 containerd[1620]: time="2025-05-17T00:28:37.293869369Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-77t4s,Uid:a91b9493-5f72-419f-9339-4f304b72899f,Namespace:kube-system,Attempt:0,} returns sandbox id \"bf68d24f139ef064f39af96f4892934cc3cb7f55e106ae968aec70491e5326a9\"" May 17 00:28:37.297312 containerd[1620]: time="2025-05-17T00:28:37.297233175Z" level=info msg="CreateContainer within sandbox \"bf68d24f139ef064f39af96f4892934cc3cb7f55e106ae968aec70491e5326a9\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 17 00:28:37.312918 containerd[1620]: time="2025-05-17T00:28:37.312887167Z" level=info msg="CreateContainer within sandbox \"bf68d24f139ef064f39af96f4892934cc3cb7f55e106ae968aec70491e5326a9\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"09ec6290c8953dfdd3e56a5db73edd9f91cb80cd7f82e341d009e8aca02eb35f\"" May 17 00:28:37.314130 containerd[1620]: time="2025-05-17T00:28:37.313911925Z" level=info msg="StartContainer for \"09ec6290c8953dfdd3e56a5db73edd9f91cb80cd7f82e341d009e8aca02eb35f\"" May 17 00:28:37.349575 containerd[1620]: time="2025-05-17T00:28:37.349543402Z" level=info msg="StartContainer for \"09ec6290c8953dfdd3e56a5db73edd9f91cb80cd7f82e341d009e8aca02eb35f\" returns successfully" May 17 00:28:37.442134 containerd[1620]: time="2025-05-17T00:28:37.442107217Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7c5755cdcb-jzklk,Uid:025ff01f-1b39-4fcd-aa2d-e4604132e570,Namespace:tigera-operator,Attempt:0,}" May 17 00:28:37.459492 containerd[1620]: time="2025-05-17T00:28:37.459424868Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:28:37.459597 containerd[1620]: time="2025-05-17T00:28:37.459507914Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:28:37.459597 containerd[1620]: time="2025-05-17T00:28:37.459544584Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:28:37.459675 containerd[1620]: time="2025-05-17T00:28:37.459634722Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:28:37.502450 containerd[1620]: time="2025-05-17T00:28:37.502414834Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7c5755cdcb-jzklk,Uid:025ff01f-1b39-4fcd-aa2d-e4604132e570,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"80464ddd3ee7dc3b9cde63a19b216ab7280ea5d7676dd9475dcc2068abc59f0a\"" May 17 00:28:37.503821 containerd[1620]: time="2025-05-17T00:28:37.503791102Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.0\"" May 17 00:28:37.549721 kubelet[2929]: I0517 00:28:37.549369 2929 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-77t4s" podStartSLOduration=1.549354238 podStartE2EDuration="1.549354238s" podCreationTimestamp="2025-05-17 00:28:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:28:37.415921997 +0000 UTC m=+7.138352312" watchObservedRunningTime="2025-05-17 00:28:37.549354238 +0000 UTC m=+7.271784554" May 17 00:28:39.197953 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount856180445.mount: Deactivated successfully. May 17 00:28:39.549003 containerd[1620]: time="2025-05-17T00:28:39.548940154Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:28:39.549860 containerd[1620]: time="2025-05-17T00:28:39.549823816Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.0: active requests=0, bytes read=25055451" May 17 00:28:39.550755 containerd[1620]: time="2025-05-17T00:28:39.550719240Z" level=info msg="ImageCreate event name:\"sha256:5e43c1322619406528ff596056dfeb70cb8d20c5c00439feb752a7725302e033\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:28:39.552375 containerd[1620]: time="2025-05-17T00:28:39.552358012Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:e0a34b265aebce1a2db906d8dad99190706e8bf3910cae626b9c2eb6bbb21775\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:28:39.553294 containerd[1620]: time="2025-05-17T00:28:39.552874192Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.0\" with image id \"sha256:5e43c1322619406528ff596056dfeb70cb8d20c5c00439feb752a7725302e033\", repo tag \"quay.io/tigera/operator:v1.38.0\", repo digest \"quay.io/tigera/operator@sha256:e0a34b265aebce1a2db906d8dad99190706e8bf3910cae626b9c2eb6bbb21775\", size \"25051446\" in 2.049053663s" May 17 00:28:39.553294 containerd[1620]: time="2025-05-17T00:28:39.552906684Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.0\" returns image reference \"sha256:5e43c1322619406528ff596056dfeb70cb8d20c5c00439feb752a7725302e033\"" May 17 00:28:39.554686 containerd[1620]: time="2025-05-17T00:28:39.554659189Z" level=info msg="CreateContainer within sandbox \"80464ddd3ee7dc3b9cde63a19b216ab7280ea5d7676dd9475dcc2068abc59f0a\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" May 17 00:28:39.569786 containerd[1620]: time="2025-05-17T00:28:39.569729530Z" level=info msg="CreateContainer within sandbox \"80464ddd3ee7dc3b9cde63a19b216ab7280ea5d7676dd9475dcc2068abc59f0a\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"240b133e79ca4f782053f034b72be079bfc85cc3f595a08a37d0af20fd20d72a\"" May 17 00:28:39.570186 containerd[1620]: time="2025-05-17T00:28:39.570159048Z" level=info msg="StartContainer for 
\"240b133e79ca4f782053f034b72be079bfc85cc3f595a08a37d0af20fd20d72a\"" May 17 00:28:39.608801 containerd[1620]: time="2025-05-17T00:28:39.608725565Z" level=info msg="StartContainer for \"240b133e79ca4f782053f034b72be079bfc85cc3f595a08a37d0af20fd20d72a\" returns successfully" May 17 00:28:40.784777 kubelet[2929]: I0517 00:28:40.784250 2929 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7c5755cdcb-jzklk" podStartSLOduration=1.7340723919999999 podStartE2EDuration="3.784234059s" podCreationTimestamp="2025-05-17 00:28:37 +0000 UTC" firstStartedPulling="2025-05-17 00:28:37.50336401 +0000 UTC m=+7.225794325" lastFinishedPulling="2025-05-17 00:28:39.553525677 +0000 UTC m=+9.275955992" observedRunningTime="2025-05-17 00:28:40.428462707 +0000 UTC m=+10.150893032" watchObservedRunningTime="2025-05-17 00:28:40.784234059 +0000 UTC m=+10.506664373" May 17 00:28:45.151694 sudo[2068]: pam_unix(sudo:session): session closed for user root May 17 00:28:45.313133 sshd[2064]: pam_unix(sshd:session): session closed for user core May 17 00:28:45.321295 systemd-logind[1593]: Session 7 logged out. Waiting for processes to exit. May 17 00:28:45.324563 systemd[1]: sshd@6-135.181.90.190:22-139.178.89.65:42952.service: Deactivated successfully. May 17 00:28:45.330673 systemd[1]: session-7.scope: Deactivated successfully. May 17 00:28:45.336221 systemd-logind[1593]: Removed session 7. May 17 00:28:47.566997 kubelet[2929]: I0517 00:28:47.566959 2929 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlbmw\" (UniqueName: \"kubernetes.io/projected/9ccabc3e-c858-4aa5-a30a-60ca31d44aec-kube-api-access-zlbmw\") pod \"calico-typha-69b6d58d67-8w2km\" (UID: \"9ccabc3e-c858-4aa5-a30a-60ca31d44aec\") " pod="calico-system/calico-typha-69b6d58d67-8w2km" May 17 00:28:47.566997 kubelet[2929]: I0517 00:28:47.566997 2929 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9ccabc3e-c858-4aa5-a30a-60ca31d44aec-tigera-ca-bundle\") pod \"calico-typha-69b6d58d67-8w2km\" (UID: \"9ccabc3e-c858-4aa5-a30a-60ca31d44aec\") " pod="calico-system/calico-typha-69b6d58d67-8w2km" May 17 00:28:47.567400 kubelet[2929]: I0517 00:28:47.567013 2929 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/9ccabc3e-c858-4aa5-a30a-60ca31d44aec-typha-certs\") pod \"calico-typha-69b6d58d67-8w2km\" (UID: \"9ccabc3e-c858-4aa5-a30a-60ca31d44aec\") " pod="calico-system/calico-typha-69b6d58d67-8w2km" May 17 00:28:47.768798 kubelet[2929]: I0517 00:28:47.768756 2929 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e2ac4d5e-7726-4c62-8451-832e05281ef4-lib-modules\") pod \"calico-node-mfnx9\" (UID: \"e2ac4d5e-7726-4c62-8451-832e05281ef4\") " pod="calico-system/calico-node-mfnx9" May 17 00:28:47.768798 kubelet[2929]: I0517 00:28:47.768796 2929 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/e2ac4d5e-7726-4c62-8451-832e05281ef4-cni-log-dir\") pod \"calico-node-mfnx9\" (UID: \"e2ac4d5e-7726-4c62-8451-832e05281ef4\") " pod="calico-system/calico-node-mfnx9" May 17 00:28:47.768798 kubelet[2929]: I0517 00:28:47.768810 2929 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/e2ac4d5e-7726-4c62-8451-832e05281ef4-node-certs\") pod \"calico-node-mfnx9\" (UID: \"e2ac4d5e-7726-4c62-8451-832e05281ef4\") " pod="calico-system/calico-node-mfnx9" May 17 00:28:47.768976 kubelet[2929]: I0517 00:28:47.768825 2929 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/e2ac4d5e-7726-4c62-8451-832e05281ef4-cni-bin-dir\") pod \"calico-node-mfnx9\" (UID: \"e2ac4d5e-7726-4c62-8451-832e05281ef4\") " pod="calico-system/calico-node-mfnx9" May 17 00:28:47.768976 kubelet[2929]: I0517 00:28:47.768839 2929 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e2ac4d5e-7726-4c62-8451-832e05281ef4-tigera-ca-bundle\") pod \"calico-node-mfnx9\" (UID: \"e2ac4d5e-7726-4c62-8451-832e05281ef4\") " pod="calico-system/calico-node-mfnx9" May 17 00:28:47.768976 kubelet[2929]: I0517 00:28:47.768854 2929 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/e2ac4d5e-7726-4c62-8451-832e05281ef4-var-run-calico\") pod \"calico-node-mfnx9\" (UID: \"e2ac4d5e-7726-4c62-8451-832e05281ef4\") " pod="calico-system/calico-node-mfnx9" May 17 00:28:47.768976 kubelet[2929]: I0517 00:28:47.768866 2929 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/e2ac4d5e-7726-4c62-8451-832e05281ef4-flexvol-driver-host\") pod \"calico-node-mfnx9\" (UID: \"e2ac4d5e-7726-4c62-8451-832e05281ef4\") " pod="calico-system/calico-node-mfnx9" May 17 00:28:47.768976 kubelet[2929]: I0517 00:28:47.768880 2929 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/e2ac4d5e-7726-4c62-8451-832e05281ef4-policysync\") pod \"calico-node-mfnx9\" (UID: \"e2ac4d5e-7726-4c62-8451-832e05281ef4\") " pod="calico-system/calico-node-mfnx9" May 17 00:28:47.769089 kubelet[2929]: I0517 00:28:47.768892 2929 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jkks\" (UniqueName: \"kubernetes.io/projected/e2ac4d5e-7726-4c62-8451-832e05281ef4-kube-api-access-7jkks\") pod \"calico-node-mfnx9\" (UID: \"e2ac4d5e-7726-4c62-8451-832e05281ef4\") " pod="calico-system/calico-node-mfnx9" May 17 00:28:47.769089 kubelet[2929]: I0517 00:28:47.768904 2929 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/e2ac4d5e-7726-4c62-8451-832e05281ef4-cni-net-dir\") pod \"calico-node-mfnx9\" (UID: \"e2ac4d5e-7726-4c62-8451-832e05281ef4\") " pod="calico-system/calico-node-mfnx9" May 17 00:28:47.769089 kubelet[2929]: I0517 00:28:47.768915 2929 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e2ac4d5e-7726-4c62-8451-832e05281ef4-xtables-lock\") pod \"calico-node-mfnx9\" (UID: \"e2ac4d5e-7726-4c62-8451-832e05281ef4\") " pod="calico-system/calico-node-mfnx9" May 17 00:28:47.769089 kubelet[2929]: I0517 00:28:47.768927 2929 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e2ac4d5e-7726-4c62-8451-832e05281ef4-var-lib-calico\") pod \"calico-node-mfnx9\" (UID: \"e2ac4d5e-7726-4c62-8451-832e05281ef4\") " pod="calico-system/calico-node-mfnx9" May 17 00:28:47.874427 containerd[1620]: time="2025-05-17T00:28:47.874161600Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-69b6d58d67-8w2km,Uid:9ccabc3e-c858-4aa5-a30a-60ca31d44aec,Namespace:calico-system,Attempt:0,}" May 17 00:28:47.884767 kubelet[2929]: E0517 00:28:47.884540 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:28:47.884767 kubelet[2929]: W0517 00:28:47.884569 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:28:47.884767 kubelet[2929]: E0517 00:28:47.884590 2929 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:28:47.909975 containerd[1620]: time="2025-05-17T00:28:47.909880826Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:28:47.909975 containerd[1620]: time="2025-05-17T00:28:47.909931050Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:28:47.910647 containerd[1620]: time="2025-05-17T00:28:47.910404830Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:28:47.910853 containerd[1620]: time="2025-05-17T00:28:47.910808900Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:28:47.968337 containerd[1620]: time="2025-05-17T00:28:47.968306484Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-69b6d58d67-8w2km,Uid:9ccabc3e-c858-4aa5-a30a-60ca31d44aec,Namespace:calico-system,Attempt:0,} returns sandbox id \"f52c0d303fc8bea67b4d0feecd41e86784a77453bfa1a9fabda917bb9e9b0d94\"" May 17 00:28:47.972374 containerd[1620]: time="2025-05-17T00:28:47.972323405Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.0\"" May 17 00:28:48.038428 kubelet[2929]: E0517 00:28:48.038375 2929 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jv7dj" podUID="9def7614-7d88-42d1-ba99-91a4539e16ec" May 17 00:28:48.045048 containerd[1620]: time="2025-05-17T00:28:48.043304102Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-mfnx9,Uid:e2ac4d5e-7726-4c62-8451-832e05281ef4,Namespace:calico-system,Attempt:0,}" May 17 00:28:48.062490 kubelet[2929]: E0517 00:28:48.062459 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:28:48.062619 kubelet[2929]: W0517 00:28:48.062584 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:28:48.062619 kubelet[2929]: E0517 00:28:48.062607 2929 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:28:48.063452 kubelet[2929]: E0517 00:28:48.062968 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:28:48.063452 kubelet[2929]: W0517 00:28:48.062980 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:28:48.063452 kubelet[2929]: E0517 00:28:48.062991 2929 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:28:48.063452 kubelet[2929]: E0517 00:28:48.063305 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:28:48.063452 kubelet[2929]: W0517 00:28:48.063313 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:28:48.063452 kubelet[2929]: E0517 00:28:48.063322 2929 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:28:48.064087 kubelet[2929]: E0517 00:28:48.063621 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:28:48.064087 kubelet[2929]: W0517 00:28:48.063629 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:28:48.064087 kubelet[2929]: E0517 00:28:48.063637 2929 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:28:48.064177 kubelet[2929]: E0517 00:28:48.064091 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:28:48.064177 kubelet[2929]: W0517 00:28:48.064099 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:28:48.064177 kubelet[2929]: E0517 00:28:48.064107 2929 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:28:48.066390 kubelet[2929]: E0517 00:28:48.066376 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:28:48.066390 kubelet[2929]: W0517 00:28:48.066388 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:28:48.066493 kubelet[2929]: E0517 00:28:48.066399 2929 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:28:48.066555 kubelet[2929]: E0517 00:28:48.066525 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:28:48.066555 kubelet[2929]: W0517 00:28:48.066534 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:28:48.066555 kubelet[2929]: E0517 00:28:48.066542 2929 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:28:48.066692 kubelet[2929]: E0517 00:28:48.066662 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:28:48.066692 kubelet[2929]: W0517 00:28:48.066670 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:28:48.066692 kubelet[2929]: E0517 00:28:48.066677 2929 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:28:48.068437 kubelet[2929]: E0517 00:28:48.067289 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:28:48.068437 kubelet[2929]: W0517 00:28:48.067298 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:28:48.068437 kubelet[2929]: E0517 00:28:48.067307 2929 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:28:48.068437 kubelet[2929]: E0517 00:28:48.067439 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:28:48.068437 kubelet[2929]: W0517 00:28:48.067446 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:28:48.068437 kubelet[2929]: E0517 00:28:48.067452 2929 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:28:48.068437 kubelet[2929]: E0517 00:28:48.067816 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:28:48.068437 kubelet[2929]: W0517 00:28:48.067824 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:28:48.068437 kubelet[2929]: E0517 00:28:48.067831 2929 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:28:48.068437 kubelet[2929]: E0517 00:28:48.068045 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:28:48.068671 kubelet[2929]: W0517 00:28:48.068053 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:28:48.068671 kubelet[2929]: E0517 00:28:48.068062 2929 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:28:48.068810 kubelet[2929]: E0517 00:28:48.068791 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:28:48.068810 kubelet[2929]: W0517 00:28:48.068804 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:28:48.068810 kubelet[2929]: E0517 00:28:48.068812 2929 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:28:48.068984 kubelet[2929]: E0517 00:28:48.068933 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:28:48.068984 kubelet[2929]: W0517 00:28:48.068960 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:28:48.068984 kubelet[2929]: E0517 00:28:48.068968 2929 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:28:48.069400 kubelet[2929]: E0517 00:28:48.069377 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:28:48.069400 kubelet[2929]: W0517 00:28:48.069394 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:28:48.069400 kubelet[2929]: E0517 00:28:48.069403 2929 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:28:48.070148 kubelet[2929]: E0517 00:28:48.070109 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:28:48.070148 kubelet[2929]: W0517 00:28:48.070125 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:28:48.070148 kubelet[2929]: E0517 00:28:48.070133 2929 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:28:48.071069 kubelet[2929]: E0517 00:28:48.070989 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:28:48.071327 kubelet[2929]: W0517 00:28:48.071304 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:28:48.071545 kubelet[2929]: E0517 00:28:48.071433 2929 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:28:48.072324 kubelet[2929]: E0517 00:28:48.072313 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:28:48.072525 kubelet[2929]: W0517 00:28:48.072383 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:28:48.072525 kubelet[2929]: E0517 00:28:48.072401 2929 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:28:48.072974 kubelet[2929]: E0517 00:28:48.072865 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:28:48.072974 kubelet[2929]: W0517 00:28:48.072875 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:28:48.072974 kubelet[2929]: E0517 00:28:48.072884 2929 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:28:48.074054 kubelet[2929]: E0517 00:28:48.073910 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:28:48.074054 kubelet[2929]: W0517 00:28:48.073921 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:28:48.074054 kubelet[2929]: E0517 00:28:48.073929 2929 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:28:48.075223 kubelet[2929]: E0517 00:28:48.075055 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:28:48.075223 kubelet[2929]: W0517 00:28:48.075067 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:28:48.075223 kubelet[2929]: E0517 00:28:48.075077 2929 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:28:48.075223 kubelet[2929]: I0517 00:28:48.075218 2929 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/9def7614-7d88-42d1-ba99-91a4539e16ec-registration-dir\") pod \"csi-node-driver-jv7dj\" (UID: \"9def7614-7d88-42d1-ba99-91a4539e16ec\") " pod="calico-system/csi-node-driver-jv7dj" May 17 00:28:48.077038 kubelet[2929]: E0517 00:28:48.075796 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:28:48.077038 kubelet[2929]: W0517 00:28:48.075809 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:28:48.077038 kubelet[2929]: E0517 00:28:48.075976 2929 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:28:48.077038 kubelet[2929]: I0517 00:28:48.075996 2929 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/9def7614-7d88-42d1-ba99-91a4539e16ec-socket-dir\") pod \"csi-node-driver-jv7dj\" (UID: \"9def7614-7d88-42d1-ba99-91a4539e16ec\") " pod="calico-system/csi-node-driver-jv7dj" May 17 00:28:48.077038 kubelet[2929]: E0517 00:28:48.076156 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:28:48.077038 kubelet[2929]: W0517 00:28:48.076164 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:28:48.077038 kubelet[2929]: E0517 00:28:48.076174 2929 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:28:48.077038 kubelet[2929]: E0517 00:28:48.076437 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:28:48.077038 kubelet[2929]: W0517 00:28:48.076445 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:28:48.077264 kubelet[2929]: E0517 00:28:48.076463 2929 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:28:48.077264 kubelet[2929]: E0517 00:28:48.076695 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:28:48.077264 kubelet[2929]: W0517 00:28:48.076714 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:28:48.077264 kubelet[2929]: E0517 00:28:48.076732 2929 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:28:48.077264 kubelet[2929]: I0517 00:28:48.076747 2929 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/9def7614-7d88-42d1-ba99-91a4539e16ec-varrun\") pod \"csi-node-driver-jv7dj\" (UID: \"9def7614-7d88-42d1-ba99-91a4539e16ec\") " pod="calico-system/csi-node-driver-jv7dj" May 17 00:28:48.077264 kubelet[2929]: E0517 00:28:48.076915 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:28:48.077264 kubelet[2929]: W0517 00:28:48.076923 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:28:48.077264 kubelet[2929]: E0517 00:28:48.076933 2929 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:28:48.077392 kubelet[2929]: I0517 00:28:48.076956 2929 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjb7d\" (UniqueName: \"kubernetes.io/projected/9def7614-7d88-42d1-ba99-91a4539e16ec-kube-api-access-sjb7d\") pod \"csi-node-driver-jv7dj\" (UID: \"9def7614-7d88-42d1-ba99-91a4539e16ec\") " pod="calico-system/csi-node-driver-jv7dj" May 17 00:28:48.077392 kubelet[2929]: E0517 00:28:48.077317 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:28:48.077392 kubelet[2929]: W0517 00:28:48.077326 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:28:48.077392 kubelet[2929]: E0517 00:28:48.077337 2929 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:28:48.077706 kubelet[2929]: E0517 00:28:48.077684 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:28:48.077706 kubelet[2929]: W0517 00:28:48.077698 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:28:48.077817 kubelet[2929]: E0517 00:28:48.077798 2929 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:28:48.078515 kubelet[2929]: E0517 00:28:48.078102 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:28:48.078515 kubelet[2929]: W0517 00:28:48.078212 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:28:48.078515 kubelet[2929]: E0517 00:28:48.078233 2929 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:28:48.078515 kubelet[2929]: E0517 00:28:48.078461 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:28:48.078515 kubelet[2929]: W0517 00:28:48.078469 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:28:48.078651 kubelet[2929]: E0517 00:28:48.078613 2929 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:28:48.079056 kubelet[2929]: E0517 00:28:48.078780 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:28:48.079056 kubelet[2929]: W0517 00:28:48.078791 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:28:48.079056 kubelet[2929]: E0517 00:28:48.078975 2929 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:28:48.079056 kubelet[2929]: I0517 00:28:48.078992 2929 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9def7614-7d88-42d1-ba99-91a4539e16ec-kubelet-dir\") pod \"csi-node-driver-jv7dj\" (UID: \"9def7614-7d88-42d1-ba99-91a4539e16ec\") " pod="calico-system/csi-node-driver-jv7dj" May 17 00:28:48.080127 kubelet[2929]: E0517 00:28:48.079249 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:28:48.080127 kubelet[2929]: W0517 00:28:48.079265 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:28:48.080127 kubelet[2929]: E0517 00:28:48.079287 2929 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:28:48.080127 kubelet[2929]: E0517 00:28:48.079656 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:28:48.080127 kubelet[2929]: W0517 00:28:48.079665 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:28:48.080127 kubelet[2929]: E0517 00:28:48.079676 2929 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:28:48.080255 kubelet[2929]: E0517 00:28:48.080211 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:28:48.080255 kubelet[2929]: W0517 00:28:48.080220 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:28:48.080255 kubelet[2929]: E0517 00:28:48.080228 2929 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:28:48.080602 kubelet[2929]: E0517 00:28:48.080575 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:28:48.080602 kubelet[2929]: W0517 00:28:48.080590 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:28:48.080602 kubelet[2929]: E0517 00:28:48.080598 2929 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:28:48.083541 containerd[1620]: time="2025-05-17T00:28:48.082464873Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:28:48.083541 containerd[1620]: time="2025-05-17T00:28:48.082534144Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:28:48.083541 containerd[1620]: time="2025-05-17T00:28:48.082553520Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:28:48.083541 containerd[1620]: time="2025-05-17T00:28:48.082677653Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:28:48.128663 containerd[1620]: time="2025-05-17T00:28:48.128540603Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-mfnx9,Uid:e2ac4d5e-7726-4c62-8451-832e05281ef4,Namespace:calico-system,Attempt:0,} returns sandbox id \"5509f790450feda8c14f8c845e32aaed955964b368a99ca169b8eb4acab7219a\"" May 17 00:28:48.180388 kubelet[2929]: E0517 00:28:48.180345 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:28:48.180388 kubelet[2929]: W0517 00:28:48.180375 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:28:48.180522 kubelet[2929]: E0517 00:28:48.180421 2929 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:28:48.180795 kubelet[2929]: E0517 00:28:48.180768 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:28:48.180795 kubelet[2929]: W0517 00:28:48.180789 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:28:48.180865 kubelet[2929]: E0517 00:28:48.180824 2929 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:28:48.181206 kubelet[2929]: E0517 00:28:48.181180 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:28:48.181206 kubelet[2929]: W0517 00:28:48.181197 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:28:48.181275 kubelet[2929]: E0517 00:28:48.181242 2929 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:28:48.181537 kubelet[2929]: E0517 00:28:48.181510 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:28:48.181537 kubelet[2929]: W0517 00:28:48.181530 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:28:48.181615 kubelet[2929]: E0517 00:28:48.181548 2929 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:28:48.181803 kubelet[2929]: E0517 00:28:48.181783 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:28:48.181803 kubelet[2929]: W0517 00:28:48.181796 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:28:48.181854 kubelet[2929]: E0517 00:28:48.181846 2929 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:28:48.182059 kubelet[2929]: E0517 00:28:48.182039 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:28:48.182059 kubelet[2929]: W0517 00:28:48.182051 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:28:48.182120 kubelet[2929]: E0517 00:28:48.182098 2929 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:28:48.182284 kubelet[2929]: E0517 00:28:48.182265 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:28:48.182284 kubelet[2929]: W0517 00:28:48.182277 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:28:48.182423 kubelet[2929]: E0517 00:28:48.182344 2929 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:28:48.182452 kubelet[2929]: E0517 00:28:48.182432 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:28:48.182452 kubelet[2929]: W0517 00:28:48.182438 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:28:48.182452 kubelet[2929]: E0517 00:28:48.182447 2929 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:28:48.182644 kubelet[2929]: E0517 00:28:48.182625 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:28:48.182644 kubelet[2929]: W0517 00:28:48.182637 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:28:48.182712 kubelet[2929]: E0517 00:28:48.182650 2929 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:28:48.182824 kubelet[2929]: E0517 00:28:48.182803 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:28:48.182824 kubelet[2929]: W0517 00:28:48.182816 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:28:48.182884 kubelet[2929]: E0517 00:28:48.182843 2929 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:28:48.183105 kubelet[2929]: E0517 00:28:48.183084 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:28:48.183105 kubelet[2929]: W0517 00:28:48.183098 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:28:48.183341 kubelet[2929]: E0517 00:28:48.183174 2929 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:28:48.183377 kubelet[2929]: E0517 00:28:48.183364 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:28:48.183377 kubelet[2929]: W0517 00:28:48.183371 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:28:48.183461 kubelet[2929]: E0517 00:28:48.183440 2929 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:28:48.183590 kubelet[2929]: E0517 00:28:48.183569 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:28:48.183590 kubelet[2929]: W0517 00:28:48.183582 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:28:48.183703 kubelet[2929]: E0517 00:28:48.183653 2929 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:28:48.183739 kubelet[2929]: E0517 00:28:48.183724 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:28:48.183739 kubelet[2929]: W0517 00:28:48.183731 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:28:48.183840 kubelet[2929]: E0517 00:28:48.183790 2929 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:28:48.183871 kubelet[2929]: E0517 00:28:48.183862 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:28:48.183871 kubelet[2929]: W0517 00:28:48.183868 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:28:48.184001 kubelet[2929]: E0517 00:28:48.183946 2929 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:28:48.184069 kubelet[2929]: E0517 00:28:48.184051 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:28:48.184069 kubelet[2929]: W0517 00:28:48.184058 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:28:48.184110 kubelet[2929]: E0517 00:28:48.184074 2929 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:28:48.184252 kubelet[2929]: E0517 00:28:48.184231 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:28:48.184252 kubelet[2929]: W0517 00:28:48.184244 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:28:48.184304 kubelet[2929]: E0517 00:28:48.184262 2929 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:28:48.184476 kubelet[2929]: E0517 00:28:48.184455 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:28:48.184476 kubelet[2929]: W0517 00:28:48.184469 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:28:48.184526 kubelet[2929]: E0517 00:28:48.184486 2929 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:28:48.184667 kubelet[2929]: E0517 00:28:48.184646 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:28:48.184667 kubelet[2929]: W0517 00:28:48.184659 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:28:48.184667 kubelet[2929]: E0517 00:28:48.184669 2929 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:28:48.184848 kubelet[2929]: E0517 00:28:48.184830 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:28:48.184848 kubelet[2929]: W0517 00:28:48.184842 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:28:48.184891 kubelet[2929]: E0517 00:28:48.184852 2929 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:28:48.185110 kubelet[2929]: E0517 00:28:48.185089 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:28:48.185110 kubelet[2929]: W0517 00:28:48.185102 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:28:48.185252 kubelet[2929]: E0517 00:28:48.185228 2929 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:28:48.185835 kubelet[2929]: E0517 00:28:48.185442 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:28:48.185835 kubelet[2929]: W0517 00:28:48.185452 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:28:48.185835 kubelet[2929]: E0517 00:28:48.185500 2929 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:28:48.185835 kubelet[2929]: E0517 00:28:48.185587 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:28:48.185835 kubelet[2929]: W0517 00:28:48.185593 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:28:48.185835 kubelet[2929]: E0517 00:28:48.185602 2929 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:28:48.185835 kubelet[2929]: E0517 00:28:48.185768 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:28:48.185835 kubelet[2929]: W0517 00:28:48.185774 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:28:48.185835 kubelet[2929]: E0517 00:28:48.185781 2929 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:28:48.186473 kubelet[2929]: E0517 00:28:48.186346 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:28:48.186473 kubelet[2929]: W0517 00:28:48.186359 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:28:48.186473 kubelet[2929]: E0517 00:28:48.186368 2929 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:28:48.193518 kubelet[2929]: E0517 00:28:48.193490 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:28:48.193518 kubelet[2929]: W0517 00:28:48.193514 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:28:48.193585 kubelet[2929]: E0517 00:28:48.193533 2929 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:28:49.897890 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3489997411.mount: Deactivated successfully. 
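[Editor's note] The repeated triplet above (driver-call.go "Failed to unmarshal output", the FlexVolume "driver call failed ... executable file not found" warning, and plugins.go "Error dynamically probing plugins") comes from the kubelet probing /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ for FlexVolume drivers: the nodeagent~uds directory exists, but its uds binary has not been installed yet, so the "init" call produces no output and decoding that empty output as JSON fails. The sketch below is a rough approximation of that call pattern, not the kubelet's actual source; the exact "not found" wording depends on how the executable is resolved, but the empty-output case reproduces the "unexpected end of JSON input" error seen in the log.

```go
// Hedged sketch of a FlexVolume-style driver call: exec the driver binary with
// "init" and decode its stdout as JSON. With no binary installed, the output is
// empty and json.Unmarshal returns "unexpected end of JSON input".
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// driverStatus mirrors the JSON shape a FlexVolume driver prints for "init".
type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func callDriver(path string, args ...string) (*driverStatus, error) {
	out, err := exec.Command(path, args...).CombinedOutput()
	if err != nil {
		// Missing binary: the command cannot start, output stays empty.
		fmt.Printf("driver call failed: %v, output: %q\n", err, string(out))
	}
	var st driverStatus
	if jerr := json.Unmarshal(out, &st); jerr != nil {
		return nil, jerr // "unexpected end of JSON input" for empty output
	}
	return &st, nil
}

func main() {
	_, err := callDriver("/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds", "init")
	fmt.Println("init result:", err)
}
```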
May 17 00:28:50.373803 kubelet[2929]: E0517 00:28:50.373748 2929 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jv7dj" podUID="9def7614-7d88-42d1-ba99-91a4539e16ec" May 17 00:28:50.764723 containerd[1620]: time="2025-05-17T00:28:50.764616278Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:28:50.765708 containerd[1620]: time="2025-05-17T00:28:50.765664748Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.0: active requests=0, bytes read=35158669" May 17 00:28:50.766565 containerd[1620]: time="2025-05-17T00:28:50.766527520Z" level=info msg="ImageCreate event name:\"sha256:71be0570e8645ac646675719e0da6ac33a05810991b31aecc303e7add70933be\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:28:50.768310 containerd[1620]: time="2025-05-17T00:28:50.768273131Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d282f6c773c4631b9dc8379eb093c54ca34c7728d55d6509cb45da5e1f5baf8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:28:50.768997 containerd[1620]: time="2025-05-17T00:28:50.768684104Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.0\" with image id \"sha256:71be0570e8645ac646675719e0da6ac33a05810991b31aecc303e7add70933be\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d282f6c773c4631b9dc8379eb093c54ca34c7728d55d6509cb45da5e1f5baf8f\", size \"35158523\" in 2.796332837s" May 17 00:28:50.768997 containerd[1620]: time="2025-05-17T00:28:50.768715332Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.0\" returns image reference \"sha256:71be0570e8645ac646675719e0da6ac33a05810991b31aecc303e7add70933be\"" May 17 00:28:50.770125 containerd[1620]: time="2025-05-17T00:28:50.770108881Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\"" May 17 00:28:50.779514 containerd[1620]: time="2025-05-17T00:28:50.779482138Z" level=info msg="CreateContainer within sandbox \"f52c0d303fc8bea67b4d0feecd41e86784a77453bfa1a9fabda917bb9e9b0d94\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 17 00:28:50.789280 containerd[1620]: time="2025-05-17T00:28:50.789248415Z" level=info msg="CreateContainer within sandbox \"f52c0d303fc8bea67b4d0feecd41e86784a77453bfa1a9fabda917bb9e9b0d94\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"d971f63218ddb0cf133a5568fa279b03bcb0a734117d35b3ae9176cb24ac99ca\"" May 17 00:28:50.789982 containerd[1620]: time="2025-05-17T00:28:50.789654047Z" level=info msg="StartContainer for \"d971f63218ddb0cf133a5568fa279b03bcb0a734117d35b3ae9176cb24ac99ca\"" May 17 00:28:50.842659 containerd[1620]: time="2025-05-17T00:28:50.842573744Z" level=info msg="StartContainer for \"d971f63218ddb0cf133a5568fa279b03bcb0a734117d35b3ae9176cb24ac99ca\" returns successfully" May 17 00:28:51.499223 kubelet[2929]: E0517 00:28:51.499186 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:28:51.499223 kubelet[2929]: W0517 00:28:51.499213 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: 
[init], error: executable file not found in $PATH, output: "" May 17 00:28:51.499573 kubelet[2929]: E0517 00:28:51.499236 2929 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:28:51.499573 kubelet[2929]: E0517 00:28:51.499493 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:28:51.499573 kubelet[2929]: W0517 00:28:51.499503 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:28:51.499573 kubelet[2929]: E0517 00:28:51.499514 2929 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:28:51.499689 kubelet[2929]: E0517 00:28:51.499664 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:28:51.499689 kubelet[2929]: W0517 00:28:51.499681 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:28:51.499768 kubelet[2929]: E0517 00:28:51.499692 2929 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:28:51.499912 kubelet[2929]: E0517 00:28:51.499878 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:28:51.499912 kubelet[2929]: W0517 00:28:51.499897 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:28:51.499912 kubelet[2929]: E0517 00:28:51.499912 2929 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:28:51.500236 kubelet[2929]: E0517 00:28:51.500124 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:28:51.500236 kubelet[2929]: W0517 00:28:51.500132 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:28:51.500236 kubelet[2929]: E0517 00:28:51.500140 2929 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:28:51.500426 kubelet[2929]: E0517 00:28:51.500255 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:28:51.500426 kubelet[2929]: W0517 00:28:51.500262 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:28:51.500426 kubelet[2929]: E0517 00:28:51.500268 2929 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:28:51.500426 kubelet[2929]: E0517 00:28:51.500373 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:28:51.500426 kubelet[2929]: W0517 00:28:51.500379 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:28:51.500426 kubelet[2929]: E0517 00:28:51.500385 2929 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:28:51.501127 kubelet[2929]: E0517 00:28:51.500490 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:28:51.501127 kubelet[2929]: W0517 00:28:51.500497 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:28:51.501127 kubelet[2929]: E0517 00:28:51.500503 2929 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:28:51.501127 kubelet[2929]: E0517 00:28:51.500623 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:28:51.501127 kubelet[2929]: W0517 00:28:51.500629 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:28:51.501127 kubelet[2929]: E0517 00:28:51.500635 2929 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:28:51.501127 kubelet[2929]: E0517 00:28:51.500788 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:28:51.501127 kubelet[2929]: W0517 00:28:51.500801 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:28:51.501127 kubelet[2929]: E0517 00:28:51.500814 2929 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:28:51.502341 kubelet[2929]: E0517 00:28:51.501848 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:28:51.502341 kubelet[2929]: W0517 00:28:51.501862 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:28:51.502341 kubelet[2929]: E0517 00:28:51.501874 2929 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:28:51.502341 kubelet[2929]: E0517 00:28:51.502244 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:28:51.502341 kubelet[2929]: W0517 00:28:51.502255 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:28:51.502341 kubelet[2929]: E0517 00:28:51.502266 2929 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:28:51.502341 kubelet[2929]: I0517 00:28:51.502305 2929 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-69b6d58d67-8w2km" podStartSLOduration=1.70503457 podStartE2EDuration="4.502296072s" podCreationTimestamp="2025-05-17 00:28:47 +0000 UTC" firstStartedPulling="2025-05-17 00:28:47.972107409 +0000 UTC m=+17.694537734" lastFinishedPulling="2025-05-17 00:28:50.76936892 +0000 UTC m=+20.491799236" observedRunningTime="2025-05-17 00:28:51.50210306 +0000 UTC m=+21.224533375" watchObservedRunningTime="2025-05-17 00:28:51.502296072 +0000 UTC m=+21.224726387" May 17 00:28:51.503280 kubelet[2929]: E0517 00:28:51.502766 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:28:51.503280 kubelet[2929]: W0517 00:28:51.502779 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:28:51.503280 kubelet[2929]: E0517 00:28:51.502790 2929 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:28:51.503873 kubelet[2929]: E0517 00:28:51.503602 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:28:51.503873 kubelet[2929]: W0517 00:28:51.503615 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:28:51.503873 kubelet[2929]: E0517 00:28:51.503627 2929 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:28:51.504610 kubelet[2929]: E0517 00:28:51.504295 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:28:51.504610 kubelet[2929]: W0517 00:28:51.504305 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:28:51.504610 kubelet[2929]: E0517 00:28:51.504313 2929 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:28:51.504861 kubelet[2929]: E0517 00:28:51.504795 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:28:51.504861 kubelet[2929]: W0517 00:28:51.504804 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:28:51.504861 kubelet[2929]: E0517 00:28:51.504812 2929 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:28:51.505250 kubelet[2929]: E0517 00:28:51.505156 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:28:51.505250 kubelet[2929]: W0517 00:28:51.505167 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:28:51.505250 kubelet[2929]: E0517 00:28:51.505188 2929 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:28:51.505765 kubelet[2929]: E0517 00:28:51.505663 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:28:51.505765 kubelet[2929]: W0517 00:28:51.505673 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:28:51.505765 kubelet[2929]: E0517 00:28:51.505694 2929 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:28:51.505950 kubelet[2929]: E0517 00:28:51.505916 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:28:51.505950 kubelet[2929]: W0517 00:28:51.505943 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:28:51.505999 kubelet[2929]: E0517 00:28:51.505959 2929 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:28:51.506606 kubelet[2929]: E0517 00:28:51.506142 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:28:51.506606 kubelet[2929]: W0517 00:28:51.506153 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:28:51.506606 kubelet[2929]: E0517 00:28:51.506169 2929 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:28:51.507107 kubelet[2929]: E0517 00:28:51.506742 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:28:51.507107 kubelet[2929]: W0517 00:28:51.506750 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:28:51.507107 kubelet[2929]: E0517 00:28:51.507070 2929 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:28:51.507107 kubelet[2929]: E0517 00:28:51.507095 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:28:51.507107 kubelet[2929]: W0517 00:28:51.507105 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:28:51.507215 kubelet[2929]: E0517 00:28:51.507116 2929 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:28:51.507365 kubelet[2929]: E0517 00:28:51.507290 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:28:51.507365 kubelet[2929]: W0517 00:28:51.507300 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:28:51.507416 kubelet[2929]: E0517 00:28:51.507406 2929 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:28:51.507564 kubelet[2929]: E0517 00:28:51.507497 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:28:51.507564 kubelet[2929]: W0517 00:28:51.507506 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:28:51.507626 kubelet[2929]: E0517 00:28:51.507577 2929 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:28:51.507940 kubelet[2929]: E0517 00:28:51.507684 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:28:51.507940 kubelet[2929]: W0517 00:28:51.507692 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:28:51.507940 kubelet[2929]: E0517 00:28:51.507705 2929 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:28:51.508008 kubelet[2929]: E0517 00:28:51.507961 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:28:51.508008 kubelet[2929]: W0517 00:28:51.507969 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:28:51.508008 kubelet[2929]: E0517 00:28:51.507986 2929 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:28:51.508540 kubelet[2929]: E0517 00:28:51.508520 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:28:51.508540 kubelet[2929]: W0517 00:28:51.508533 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:28:51.508625 kubelet[2929]: E0517 00:28:51.508544 2929 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:28:51.508880 kubelet[2929]: E0517 00:28:51.508861 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:28:51.508880 kubelet[2929]: W0517 00:28:51.508873 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:28:51.508994 kubelet[2929]: E0517 00:28:51.508887 2929 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:28:51.509065 kubelet[2929]: E0517 00:28:51.509055 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:28:51.509065 kubelet[2929]: W0517 00:28:51.509063 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:28:51.509169 kubelet[2929]: E0517 00:28:51.509091 2929 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:28:51.509204 kubelet[2929]: E0517 00:28:51.509185 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:28:51.509204 kubelet[2929]: W0517 00:28:51.509197 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:28:51.509323 kubelet[2929]: E0517 00:28:51.509223 2929 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:28:51.509323 kubelet[2929]: E0517 00:28:51.509317 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:28:51.509420 kubelet[2929]: W0517 00:28:51.509324 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:28:51.509420 kubelet[2929]: E0517 00:28:51.509339 2929 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:28:51.509487 kubelet[2929]: E0517 00:28:51.509481 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:28:51.509509 kubelet[2929]: W0517 00:28:51.509488 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:28:51.509509 kubelet[2929]: E0517 00:28:51.509497 2929 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:28:51.510007 kubelet[2929]: E0517 00:28:51.509986 2929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:28:51.510007 kubelet[2929]: W0517 00:28:51.509997 2929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:28:51.510007 kubelet[2929]: E0517 00:28:51.510004 2929 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:28:52.280008 containerd[1620]: time="2025-05-17T00:28:52.279949469Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:28:52.281478 containerd[1620]: time="2025-05-17T00:28:52.281416676Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0: active requests=0, bytes read=4441619" May 17 00:28:52.282339 containerd[1620]: time="2025-05-17T00:28:52.282290269Z" level=info msg="ImageCreate event name:\"sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:28:52.284679 containerd[1620]: time="2025-05-17T00:28:52.284640285Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:ce76dd87f11d3fd0054c35ad2e0e9f833748d007f77a9bfe859d0ddcb66fcb2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:28:52.285473 containerd[1620]: time="2025-05-17T00:28:52.285344709Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" with image id \"sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:ce76dd87f11d3fd0054c35ad2e0e9f833748d007f77a9bfe859d0ddcb66fcb2c\", size \"5934282\" in 1.515101776s" May 17 00:28:52.285473 containerd[1620]: time="2025-05-17T00:28:52.285382450Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" returns image reference \"sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676\"" May 17 00:28:52.287508 containerd[1620]: time="2025-05-17T00:28:52.287453834Z" level=info msg="CreateContainer within sandbox \"5509f790450feda8c14f8c845e32aaed955964b368a99ca169b8eb4acab7219a\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 17 00:28:52.311510 containerd[1620]: time="2025-05-17T00:28:52.311452737Z" level=info msg="CreateContainer within sandbox \"5509f790450feda8c14f8c845e32aaed955964b368a99ca169b8eb4acab7219a\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"8119639a3dc960cecfc5697297b98621612d756cab039f5eeea94bac0e908804\"" May 17 00:28:52.313058 containerd[1620]: time="2025-05-17T00:28:52.312183700Z" level=info msg="StartContainer for \"8119639a3dc960cecfc5697297b98621612d756cab039f5eeea94bac0e908804\"" May 17 00:28:52.363951 kubelet[2929]: E0517 00:28:52.362213 2929 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jv7dj" podUID="9def7614-7d88-42d1-ba99-91a4539e16ec" May 17 00:28:52.367610 containerd[1620]: time="2025-05-17T00:28:52.367554435Z" level=info msg="StartContainer for \"8119639a3dc960cecfc5697297b98621612d756cab039f5eeea94bac0e908804\" returns successfully" May 17 00:28:52.402005 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8119639a3dc960cecfc5697297b98621612d756cab039f5eeea94bac0e908804-rootfs.mount: Deactivated successfully. 
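[Editor's note] The pod2daemon-flexvol image pulled above is, in standard Calico manifests, a short-lived helper container in the calico-node pod whose job is to install the uds FlexVolume driver onto the host path the kubelet was probing earlier, which is why those probe errors stop recurring once it has run; the "shim disconnected" / rootfs unmount lines that follow are normal when such a short-lived container exits. For orientation only, the minimal handshake a FlexVolume driver must satisfy is to print a JSON status object when invoked with "init"; the sketch below shows just that contract and is not the real uds driver, which does considerably more.

```go
// Hedged sketch of the FlexVolume "init" handshake only: print a JSON status
// object on stdout. The real nodeagent~uds driver installed by
// pod2daemon-flexvol implements the full verb set, not just this.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

func main() {
	if len(os.Args) > 1 && os.Args[1] == "init" {
		resp := map[string]interface{}{
			"status":       "Success",
			"capabilities": map[string]bool{"attach": false}, // no attach/detach phase
		}
		b, _ := json.Marshal(resp)
		fmt.Println(string(b))
		return
	}
	// Other FlexVolume verbs (mount, unmount, ...) are declined in this sketch.
	fmt.Println(`{"status": "Not supported"}`)
}
```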
May 17 00:28:52.455794 containerd[1620]: time="2025-05-17T00:28:52.443168475Z" level=info msg="shim disconnected" id=8119639a3dc960cecfc5697297b98621612d756cab039f5eeea94bac0e908804 namespace=k8s.io May 17 00:28:52.455794 containerd[1620]: time="2025-05-17T00:28:52.455783904Z" level=warning msg="cleaning up after shim disconnected" id=8119639a3dc960cecfc5697297b98621612d756cab039f5eeea94bac0e908804 namespace=k8s.io May 17 00:28:52.455794 containerd[1620]: time="2025-05-17T00:28:52.455799604Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 17 00:28:52.476990 kubelet[2929]: I0517 00:28:52.476412 2929 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 00:28:53.482546 containerd[1620]: time="2025-05-17T00:28:53.482479171Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.0\"" May 17 00:28:54.362492 kubelet[2929]: E0517 00:28:54.362439 2929 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jv7dj" podUID="9def7614-7d88-42d1-ba99-91a4539e16ec" May 17 00:28:54.725089 kubelet[2929]: I0517 00:28:54.724709 2929 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 00:28:55.807530 containerd[1620]: time="2025-05-17T00:28:55.807487255Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:28:55.808813 containerd[1620]: time="2025-05-17T00:28:55.808713209Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.0: active requests=0, bytes read=70300568" May 17 00:28:55.809973 containerd[1620]: time="2025-05-17T00:28:55.809931979Z" level=info msg="ImageCreate event name:\"sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:28:55.812559 containerd[1620]: time="2025-05-17T00:28:55.812005567Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:3dd06656abdc03fbd51782d5f6fe4d70e6825a1c0c5bce2a165bbd2ff9e0f7df\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:28:55.812559 containerd[1620]: time="2025-05-17T00:28:55.812470590Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.0\" with image id \"sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:3dd06656abdc03fbd51782d5f6fe4d70e6825a1c0c5bce2a165bbd2ff9e0f7df\", size \"71793271\" in 2.329946645s" May 17 00:28:55.812559 containerd[1620]: time="2025-05-17T00:28:55.812491850Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.0\" returns image reference \"sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185\"" May 17 00:28:55.814832 containerd[1620]: time="2025-05-17T00:28:55.814796030Z" level=info msg="CreateContainer within sandbox \"5509f790450feda8c14f8c845e32aaed955964b368a99ca169b8eb4acab7219a\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 17 00:28:55.832587 containerd[1620]: time="2025-05-17T00:28:55.832539133Z" level=info msg="CreateContainer within sandbox \"5509f790450feda8c14f8c845e32aaed955964b368a99ca169b8eb4acab7219a\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id 
\"24f3aa324809c78bcc6007b194ffed2949974235cf65ac73f49893869340feef\"" May 17 00:28:55.833005 containerd[1620]: time="2025-05-17T00:28:55.832878431Z" level=info msg="StartContainer for \"24f3aa324809c78bcc6007b194ffed2949974235cf65ac73f49893869340feef\"" May 17 00:28:55.878157 containerd[1620]: time="2025-05-17T00:28:55.878107334Z" level=info msg="StartContainer for \"24f3aa324809c78bcc6007b194ffed2949974235cf65ac73f49893869340feef\" returns successfully" May 17 00:28:56.273338 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-24f3aa324809c78bcc6007b194ffed2949974235cf65ac73f49893869340feef-rootfs.mount: Deactivated successfully. May 17 00:28:56.275462 containerd[1620]: time="2025-05-17T00:28:56.275409397Z" level=info msg="shim disconnected" id=24f3aa324809c78bcc6007b194ffed2949974235cf65ac73f49893869340feef namespace=k8s.io May 17 00:28:56.275462 containerd[1620]: time="2025-05-17T00:28:56.275458399Z" level=warning msg="cleaning up after shim disconnected" id=24f3aa324809c78bcc6007b194ffed2949974235cf65ac73f49893869340feef namespace=k8s.io May 17 00:28:56.276098 containerd[1620]: time="2025-05-17T00:28:56.275466865Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 17 00:28:56.299205 kubelet[2929]: I0517 00:28:56.299185 2929 kubelet_node_status.go:488] "Fast updating node status as it just became ready" May 17 00:28:56.343788 kubelet[2929]: I0517 00:28:56.342376 2929 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0cc79bb9-7a42-43e7-a121-7181de14309d-tigera-ca-bundle\") pod \"calico-kube-controllers-6b98696cc-8lr55\" (UID: \"0cc79bb9-7a42-43e7-a121-7181de14309d\") " pod="calico-system/calico-kube-controllers-6b98696cc-8lr55" May 17 00:28:56.343788 kubelet[2929]: I0517 00:28:56.342476 2929 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cea38a36-7e2f-400d-bcde-dfc6cb61506d-config-volume\") pod \"coredns-7c65d6cfc9-2xxwm\" (UID: \"cea38a36-7e2f-400d-bcde-dfc6cb61506d\") " pod="kube-system/coredns-7c65d6cfc9-2xxwm" May 17 00:28:56.343788 kubelet[2929]: I0517 00:28:56.342516 2929 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6plmq\" (UniqueName: \"kubernetes.io/projected/0cc79bb9-7a42-43e7-a121-7181de14309d-kube-api-access-6plmq\") pod \"calico-kube-controllers-6b98696cc-8lr55\" (UID: \"0cc79bb9-7a42-43e7-a121-7181de14309d\") " pod="calico-system/calico-kube-controllers-6b98696cc-8lr55" May 17 00:28:56.343788 kubelet[2929]: I0517 00:28:56.342538 2929 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwpkd\" (UniqueName: \"kubernetes.io/projected/2834e46b-4ada-426a-b1e7-b513f359ad04-kube-api-access-gwpkd\") pod \"coredns-7c65d6cfc9-ppcxd\" (UID: \"2834e46b-4ada-426a-b1e7-b513f359ad04\") " pod="kube-system/coredns-7c65d6cfc9-ppcxd" May 17 00:28:56.343788 kubelet[2929]: I0517 00:28:56.342554 2929 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fbe11bdf-a1f6-4b6d-b48f-fb37ea5d6cef-goldmane-ca-bundle\") pod \"goldmane-8f77d7b6c-7b9h8\" (UID: \"fbe11bdf-a1f6-4b6d-b48f-fb37ea5d6cef\") " pod="calico-system/goldmane-8f77d7b6c-7b9h8" May 17 00:28:56.344630 kubelet[2929]: I0517 00:28:56.342566 2929 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4r67z\" (UniqueName: \"kubernetes.io/projected/fbe11bdf-a1f6-4b6d-b48f-fb37ea5d6cef-kube-api-access-4r67z\") pod \"goldmane-8f77d7b6c-7b9h8\" (UID: \"fbe11bdf-a1f6-4b6d-b48f-fb37ea5d6cef\") " pod="calico-system/goldmane-8f77d7b6c-7b9h8" May 17 00:28:56.344630 kubelet[2929]: I0517 00:28:56.342589 2929 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2834e46b-4ada-426a-b1e7-b513f359ad04-config-volume\") pod \"coredns-7c65d6cfc9-ppcxd\" (UID: \"2834e46b-4ada-426a-b1e7-b513f359ad04\") " pod="kube-system/coredns-7c65d6cfc9-ppcxd" May 17 00:28:56.344630 kubelet[2929]: I0517 00:28:56.342615 2929 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/fbe11bdf-a1f6-4b6d-b48f-fb37ea5d6cef-goldmane-key-pair\") pod \"goldmane-8f77d7b6c-7b9h8\" (UID: \"fbe11bdf-a1f6-4b6d-b48f-fb37ea5d6cef\") " pod="calico-system/goldmane-8f77d7b6c-7b9h8" May 17 00:28:56.344630 kubelet[2929]: I0517 00:28:56.342644 2929 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fbe11bdf-a1f6-4b6d-b48f-fb37ea5d6cef-config\") pod \"goldmane-8f77d7b6c-7b9h8\" (UID: \"fbe11bdf-a1f6-4b6d-b48f-fb37ea5d6cef\") " pod="calico-system/goldmane-8f77d7b6c-7b9h8" May 17 00:28:56.344630 kubelet[2929]: I0517 00:28:56.342675 2929 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q82fz\" (UniqueName: \"kubernetes.io/projected/cea38a36-7e2f-400d-bcde-dfc6cb61506d-kube-api-access-q82fz\") pod \"coredns-7c65d6cfc9-2xxwm\" (UID: \"cea38a36-7e2f-400d-bcde-dfc6cb61506d\") " pod="kube-system/coredns-7c65d6cfc9-2xxwm" May 17 00:28:56.381373 containerd[1620]: time="2025-05-17T00:28:56.380940514Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jv7dj,Uid:9def7614-7d88-42d1-ba99-91a4539e16ec,Namespace:calico-system,Attempt:0,}" May 17 00:28:56.442884 kubelet[2929]: I0517 00:28:56.442852 2929 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jl58\" (UniqueName: \"kubernetes.io/projected/0af51042-daa9-4020-979a-c14dc1a38805-kube-api-access-9jl58\") pod \"calico-apiserver-65b5bd8c4b-82zlb\" (UID: \"0af51042-daa9-4020-979a-c14dc1a38805\") " pod="calico-apiserver/calico-apiserver-65b5bd8c4b-82zlb" May 17 00:28:56.442884 kubelet[2929]: I0517 00:28:56.442884 2929 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7q64\" (UniqueName: \"kubernetes.io/projected/0ffb55fb-0336-49f4-8e59-ee32d13fa830-kube-api-access-s7q64\") pod \"whisker-5c9dc4c697-5tjgw\" (UID: \"0ffb55fb-0336-49f4-8e59-ee32d13fa830\") " pod="calico-system/whisker-5c9dc4c697-5tjgw" May 17 00:28:56.443053 kubelet[2929]: I0517 00:28:56.442945 2929 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/62172615-a35b-40f8-8043-de4d70d023f5-calico-apiserver-certs\") pod \"calico-apiserver-65b5bd8c4b-9dd9g\" (UID: \"62172615-a35b-40f8-8043-de4d70d023f5\") " pod="calico-apiserver/calico-apiserver-65b5bd8c4b-9dd9g" May 17 00:28:56.443053 kubelet[2929]: I0517 00:28:56.442960 2929 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0ffb55fb-0336-49f4-8e59-ee32d13fa830-whisker-backend-key-pair\") pod \"whisker-5c9dc4c697-5tjgw\" (UID: \"0ffb55fb-0336-49f4-8e59-ee32d13fa830\") " pod="calico-system/whisker-5c9dc4c697-5tjgw" May 17 00:28:56.443053 kubelet[2929]: I0517 00:28:56.442972 2929 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0ffb55fb-0336-49f4-8e59-ee32d13fa830-whisker-ca-bundle\") pod \"whisker-5c9dc4c697-5tjgw\" (UID: \"0ffb55fb-0336-49f4-8e59-ee32d13fa830\") " pod="calico-system/whisker-5c9dc4c697-5tjgw" May 17 00:28:56.443053 kubelet[2929]: I0517 00:28:56.442998 2929 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/0af51042-daa9-4020-979a-c14dc1a38805-calico-apiserver-certs\") pod \"calico-apiserver-65b5bd8c4b-82zlb\" (UID: \"0af51042-daa9-4020-979a-c14dc1a38805\") " pod="calico-apiserver/calico-apiserver-65b5bd8c4b-82zlb" May 17 00:28:56.443352 kubelet[2929]: I0517 00:28:56.443067 2929 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7x26q\" (UniqueName: \"kubernetes.io/projected/62172615-a35b-40f8-8043-de4d70d023f5-kube-api-access-7x26q\") pod \"calico-apiserver-65b5bd8c4b-9dd9g\" (UID: \"62172615-a35b-40f8-8043-de4d70d023f5\") " pod="calico-apiserver/calico-apiserver-65b5bd8c4b-9dd9g" May 17 00:28:56.489399 containerd[1620]: time="2025-05-17T00:28:56.489353497Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.0\"" May 17 00:28:56.540975 containerd[1620]: time="2025-05-17T00:28:56.540896473Z" level=error msg="Failed to destroy network for sandbox \"ed52c75a5f2f17f0b412f707461e80af7ef123c353e646a573b930f578a757ee\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:28:56.544187 containerd[1620]: time="2025-05-17T00:28:56.544133896Z" level=error msg="encountered an error cleaning up failed sandbox \"ed52c75a5f2f17f0b412f707461e80af7ef123c353e646a573b930f578a757ee\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:28:56.567336 containerd[1620]: time="2025-05-17T00:28:56.567287095Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jv7dj,Uid:9def7614-7d88-42d1-ba99-91a4539e16ec,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ed52c75a5f2f17f0b412f707461e80af7ef123c353e646a573b930f578a757ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:28:56.567712 kubelet[2929]: E0517 00:28:56.567674 2929 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ed52c75a5f2f17f0b412f707461e80af7ef123c353e646a573b930f578a757ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" May 17 00:28:56.567809 kubelet[2929]: E0517 00:28:56.567732 2929 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ed52c75a5f2f17f0b412f707461e80af7ef123c353e646a573b930f578a757ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jv7dj" May 17 00:28:56.567809 kubelet[2929]: E0517 00:28:56.567751 2929 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ed52c75a5f2f17f0b412f707461e80af7ef123c353e646a573b930f578a757ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jv7dj" May 17 00:28:56.567809 kubelet[2929]: E0517 00:28:56.567789 2929 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-jv7dj_calico-system(9def7614-7d88-42d1-ba99-91a4539e16ec)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-jv7dj_calico-system(9def7614-7d88-42d1-ba99-91a4539e16ec)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ed52c75a5f2f17f0b412f707461e80af7ef123c353e646a573b930f578a757ee\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-jv7dj" podUID="9def7614-7d88-42d1-ba99-91a4539e16ec" May 17 00:28:56.635886 containerd[1620]: time="2025-05-17T00:28:56.635811531Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-ppcxd,Uid:2834e46b-4ada-426a-b1e7-b513f359ad04,Namespace:kube-system,Attempt:0,}" May 17 00:28:56.642741 containerd[1620]: time="2025-05-17T00:28:56.642609716Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-2xxwm,Uid:cea38a36-7e2f-400d-bcde-dfc6cb61506d,Namespace:kube-system,Attempt:0,}" May 17 00:28:56.645599 containerd[1620]: time="2025-05-17T00:28:56.645558908Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65b5bd8c4b-82zlb,Uid:0af51042-daa9-4020-979a-c14dc1a38805,Namespace:calico-apiserver,Attempt:0,}" May 17 00:28:56.657760 containerd[1620]: time="2025-05-17T00:28:56.657722835Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-8f77d7b6c-7b9h8,Uid:fbe11bdf-a1f6-4b6d-b48f-fb37ea5d6cef,Namespace:calico-system,Attempt:0,}" May 17 00:28:56.661939 containerd[1620]: time="2025-05-17T00:28:56.661777154Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5c9dc4c697-5tjgw,Uid:0ffb55fb-0336-49f4-8e59-ee32d13fa830,Namespace:calico-system,Attempt:0,}" May 17 00:28:56.662602 containerd[1620]: time="2025-05-17T00:28:56.662365780Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6b98696cc-8lr55,Uid:0cc79bb9-7a42-43e7-a121-7181de14309d,Namespace:calico-system,Attempt:0,}" May 17 00:28:56.663957 containerd[1620]: time="2025-05-17T00:28:56.663830382Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65b5bd8c4b-9dd9g,Uid:62172615-a35b-40f8-8043-de4d70d023f5,Namespace:calico-apiserver,Attempt:0,}" May 17 00:28:56.747761 
containerd[1620]: time="2025-05-17T00:28:56.747232947Z" level=error msg="Failed to destroy network for sandbox \"6d5b6a71462253a75e687c6f7e1707e9bc9a8b91c3f016007cd47ed84dea64b4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:28:56.749221 containerd[1620]: time="2025-05-17T00:28:56.748726534Z" level=error msg="encountered an error cleaning up failed sandbox \"6d5b6a71462253a75e687c6f7e1707e9bc9a8b91c3f016007cd47ed84dea64b4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:28:56.749221 containerd[1620]: time="2025-05-17T00:28:56.748774954Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-ppcxd,Uid:2834e46b-4ada-426a-b1e7-b513f359ad04,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6d5b6a71462253a75e687c6f7e1707e9bc9a8b91c3f016007cd47ed84dea64b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:28:56.750696 kubelet[2929]: E0517 00:28:56.750156 2929 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d5b6a71462253a75e687c6f7e1707e9bc9a8b91c3f016007cd47ed84dea64b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:28:56.750696 kubelet[2929]: E0517 00:28:56.750216 2929 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d5b6a71462253a75e687c6f7e1707e9bc9a8b91c3f016007cd47ed84dea64b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-ppcxd" May 17 00:28:56.750696 kubelet[2929]: E0517 00:28:56.750233 2929 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d5b6a71462253a75e687c6f7e1707e9bc9a8b91c3f016007cd47ed84dea64b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-ppcxd" May 17 00:28:56.750796 kubelet[2929]: E0517 00:28:56.750271 2929 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-ppcxd_kube-system(2834e46b-4ada-426a-b1e7-b513f359ad04)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-ppcxd_kube-system(2834e46b-4ada-426a-b1e7-b513f359ad04)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6d5b6a71462253a75e687c6f7e1707e9bc9a8b91c3f016007cd47ed84dea64b4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-ppcxd" 
podUID="2834e46b-4ada-426a-b1e7-b513f359ad04" May 17 00:28:56.806691 containerd[1620]: time="2025-05-17T00:28:56.806541596Z" level=error msg="Failed to destroy network for sandbox \"469fb90e5cdf48ce4c934dd8995efe7535d98268c6582305f5b9cc1bbb6b542f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:28:56.807130 containerd[1620]: time="2025-05-17T00:28:56.807101438Z" level=error msg="encountered an error cleaning up failed sandbox \"469fb90e5cdf48ce4c934dd8995efe7535d98268c6582305f5b9cc1bbb6b542f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:28:56.807181 containerd[1620]: time="2025-05-17T00:28:56.807147354Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65b5bd8c4b-9dd9g,Uid:62172615-a35b-40f8-8043-de4d70d023f5,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"469fb90e5cdf48ce4c934dd8995efe7535d98268c6582305f5b9cc1bbb6b542f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:28:56.808634 kubelet[2929]: E0517 00:28:56.807781 2929 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"469fb90e5cdf48ce4c934dd8995efe7535d98268c6582305f5b9cc1bbb6b542f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:28:56.808634 kubelet[2929]: E0517 00:28:56.808630 2929 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"469fb90e5cdf48ce4c934dd8995efe7535d98268c6582305f5b9cc1bbb6b542f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-65b5bd8c4b-9dd9g" May 17 00:28:56.808748 kubelet[2929]: E0517 00:28:56.808658 2929 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"469fb90e5cdf48ce4c934dd8995efe7535d98268c6582305f5b9cc1bbb6b542f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-65b5bd8c4b-9dd9g" May 17 00:28:56.808748 kubelet[2929]: E0517 00:28:56.808699 2929 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-65b5bd8c4b-9dd9g_calico-apiserver(62172615-a35b-40f8-8043-de4d70d023f5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-65b5bd8c4b-9dd9g_calico-apiserver(62172615-a35b-40f8-8043-de4d70d023f5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"469fb90e5cdf48ce4c934dd8995efe7535d98268c6582305f5b9cc1bbb6b542f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-65b5bd8c4b-9dd9g" podUID="62172615-a35b-40f8-8043-de4d70d023f5" May 17 00:28:56.842291 containerd[1620]: time="2025-05-17T00:28:56.842167562Z" level=error msg="Failed to destroy network for sandbox \"28b1d2f0511a38a050bed534b4b457408e1e18f400f08c7a5e18726479603536\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:28:56.844159 containerd[1620]: time="2025-05-17T00:28:56.844100574Z" level=error msg="encountered an error cleaning up failed sandbox \"28b1d2f0511a38a050bed534b4b457408e1e18f400f08c7a5e18726479603536\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:28:56.844274 containerd[1620]: time="2025-05-17T00:28:56.844255896Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5c9dc4c697-5tjgw,Uid:0ffb55fb-0336-49f4-8e59-ee32d13fa830,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"28b1d2f0511a38a050bed534b4b457408e1e18f400f08c7a5e18726479603536\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:28:56.848046 kubelet[2929]: E0517 00:28:56.845150 2929 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"28b1d2f0511a38a050bed534b4b457408e1e18f400f08c7a5e18726479603536\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:28:56.848046 kubelet[2929]: E0517 00:28:56.845211 2929 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"28b1d2f0511a38a050bed534b4b457408e1e18f400f08c7a5e18726479603536\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5c9dc4c697-5tjgw" May 17 00:28:56.848046 kubelet[2929]: E0517 00:28:56.845228 2929 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"28b1d2f0511a38a050bed534b4b457408e1e18f400f08c7a5e18726479603536\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5c9dc4c697-5tjgw" May 17 00:28:56.846443 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ed52c75a5f2f17f0b412f707461e80af7ef123c353e646a573b930f578a757ee-shm.mount: Deactivated successfully. 
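Every sandbox failure in this stretch reports the same root cause from the Calico CNI plugin: stat /var/lib/calico/nodename: no such file or directory. That file only exists once the calico/node container is running with /var/lib/calico/ mounted, which is exactly the check the error text asks for. Below is a minimal sketch of that check; the path is taken from the logged error message, not from Calico's source, so this is illustrative rather than the plugin's actual code.

```go
// Minimal sketch of the check suggested by the CNI error above: read
// /var/lib/calico/nodename, which calico/node writes after it starts with
// /var/lib/calico/ mounted. Absence of the file is the failure seen in the log.
package main

import (
	"fmt"
	"os"
)

func main() {
	const nodenameFile = "/var/lib/calico/nodename"
	data, err := os.ReadFile(nodenameFile)
	if err != nil {
		// Mirrors the logged failure: the file is missing until calico/node is up.
		fmt.Fprintf(os.Stderr, "stat %s: %v\n", nodenameFile, err)
		os.Exit(1)
	}
	fmt.Printf("calico nodename: %s\n", data)
}
```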
May 17 00:28:56.851112 kubelet[2929]: E0517 00:28:56.846163 2929 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5c9dc4c697-5tjgw_calico-system(0ffb55fb-0336-49f4-8e59-ee32d13fa830)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5c9dc4c697-5tjgw_calico-system(0ffb55fb-0336-49f4-8e59-ee32d13fa830)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"28b1d2f0511a38a050bed534b4b457408e1e18f400f08c7a5e18726479603536\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5c9dc4c697-5tjgw" podUID="0ffb55fb-0336-49f4-8e59-ee32d13fa830" May 17 00:28:56.850199 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-28b1d2f0511a38a050bed534b4b457408e1e18f400f08c7a5e18726479603536-shm.mount: Deactivated successfully. May 17 00:28:56.859843 containerd[1620]: time="2025-05-17T00:28:56.859793924Z" level=error msg="Failed to destroy network for sandbox \"ec732d11f2bf14b409433f41dbbf61dbba0fe798526cc79e12a921e888b50f06\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:28:56.861565 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ec732d11f2bf14b409433f41dbbf61dbba0fe798526cc79e12a921e888b50f06-shm.mount: Deactivated successfully. May 17 00:28:56.862987 containerd[1620]: time="2025-05-17T00:28:56.862659739Z" level=error msg="encountered an error cleaning up failed sandbox \"ec732d11f2bf14b409433f41dbbf61dbba0fe798526cc79e12a921e888b50f06\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:28:56.862987 containerd[1620]: time="2025-05-17T00:28:56.862696299Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6b98696cc-8lr55,Uid:0cc79bb9-7a42-43e7-a121-7181de14309d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ec732d11f2bf14b409433f41dbbf61dbba0fe798526cc79e12a921e888b50f06\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:28:56.862987 containerd[1620]: time="2025-05-17T00:28:56.862807016Z" level=error msg="Failed to destroy network for sandbox \"70ec868d271dd1be74cf28e30e6cd353963f0ab338e242d0f7dcd58ec1993bf9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:28:56.864522 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-70ec868d271dd1be74cf28e30e6cd353963f0ab338e242d0f7dcd58ec1993bf9-shm.mount: Deactivated successfully. 
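The run-containerd-io.containerd.grpc.v1.cri-sandboxes-…-shm.mount "Deactivated successfully" lines interleaved here are systemd unmounting each failed sandbox's /dev/shm; the unit name encodes the mount point /run/containerd/io.containerd.grpc.v1.cri/sandboxes/<id>/shm. A small sketch that lists any such mounts still present by scanning /proc/self/mountinfo; the path filter simply mirrors the pattern visible in those unit names and is an assumption beyond that.

```go
// Illustrative scan for leftover sandbox shm mounts. /proc/self/mountinfo is
// the standard procfs mount table; field 5 (index 4) is the mount point.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/proc/self/mountinfo")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) > 4 {
			mountPoint := fields[4]
			if strings.Contains(mountPoint, "io.containerd.grpc.v1.cri/sandboxes") &&
				strings.HasSuffix(mountPoint, "/shm") {
				fmt.Println("sandbox shm still mounted:", mountPoint)
			}
		}
	}
}
```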
May 17 00:28:56.868218 kubelet[2929]: E0517 00:28:56.864987 2929 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec732d11f2bf14b409433f41dbbf61dbba0fe798526cc79e12a921e888b50f06\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:28:56.868218 kubelet[2929]: E0517 00:28:56.865109 2929 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec732d11f2bf14b409433f41dbbf61dbba0fe798526cc79e12a921e888b50f06\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6b98696cc-8lr55" May 17 00:28:56.868218 kubelet[2929]: E0517 00:28:56.865125 2929 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec732d11f2bf14b409433f41dbbf61dbba0fe798526cc79e12a921e888b50f06\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6b98696cc-8lr55" May 17 00:28:56.868331 kubelet[2929]: E0517 00:28:56.867510 2929 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6b98696cc-8lr55_calico-system(0cc79bb9-7a42-43e7-a121-7181de14309d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6b98696cc-8lr55_calico-system(0cc79bb9-7a42-43e7-a121-7181de14309d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ec732d11f2bf14b409433f41dbbf61dbba0fe798526cc79e12a921e888b50f06\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6b98696cc-8lr55" podUID="0cc79bb9-7a42-43e7-a121-7181de14309d" May 17 00:28:56.869191 containerd[1620]: time="2025-05-17T00:28:56.868520152Z" level=error msg="encountered an error cleaning up failed sandbox \"70ec868d271dd1be74cf28e30e6cd353963f0ab338e242d0f7dcd58ec1993bf9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:28:56.869479 containerd[1620]: time="2025-05-17T00:28:56.869453295Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65b5bd8c4b-82zlb,Uid:0af51042-daa9-4020-979a-c14dc1a38805,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"70ec868d271dd1be74cf28e30e6cd353963f0ab338e242d0f7dcd58ec1993bf9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:28:56.870281 kubelet[2929]: E0517 00:28:56.870263 2929 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"70ec868d271dd1be74cf28e30e6cd353963f0ab338e242d0f7dcd58ec1993bf9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:28:56.870374 kubelet[2929]: E0517 00:28:56.870361 2929 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"70ec868d271dd1be74cf28e30e6cd353963f0ab338e242d0f7dcd58ec1993bf9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-65b5bd8c4b-82zlb" May 17 00:28:56.870436 kubelet[2929]: E0517 00:28:56.870425 2929 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"70ec868d271dd1be74cf28e30e6cd353963f0ab338e242d0f7dcd58ec1993bf9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-65b5bd8c4b-82zlb" May 17 00:28:56.870536 kubelet[2929]: E0517 00:28:56.870517 2929 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-65b5bd8c4b-82zlb_calico-apiserver(0af51042-daa9-4020-979a-c14dc1a38805)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-65b5bd8c4b-82zlb_calico-apiserver(0af51042-daa9-4020-979a-c14dc1a38805)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"70ec868d271dd1be74cf28e30e6cd353963f0ab338e242d0f7dcd58ec1993bf9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-65b5bd8c4b-82zlb" podUID="0af51042-daa9-4020-979a-c14dc1a38805" May 17 00:28:56.884216 containerd[1620]: time="2025-05-17T00:28:56.884085271Z" level=error msg="Failed to destroy network for sandbox \"78637948a433285e6fa8a58ed70f2d3570b7dffe22db8a4b90a18330de73c023\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:28:56.884876 containerd[1620]: time="2025-05-17T00:28:56.884634943Z" level=error msg="encountered an error cleaning up failed sandbox \"78637948a433285e6fa8a58ed70f2d3570b7dffe22db8a4b90a18330de73c023\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:28:56.884876 containerd[1620]: time="2025-05-17T00:28:56.884839287Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-2xxwm,Uid:cea38a36-7e2f-400d-bcde-dfc6cb61506d,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"78637948a433285e6fa8a58ed70f2d3570b7dffe22db8a4b90a18330de73c023\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:28:56.885112 containerd[1620]: time="2025-05-17T00:28:56.884807177Z" 
level=error msg="Failed to destroy network for sandbox \"fe17dae483774da4ab6cf9076c438f0a7b25327c06e1fc021de4c38b2ea436fc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:28:56.885483 containerd[1620]: time="2025-05-17T00:28:56.885436029Z" level=error msg="encountered an error cleaning up failed sandbox \"fe17dae483774da4ab6cf9076c438f0a7b25327c06e1fc021de4c38b2ea436fc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:28:56.885588 containerd[1620]: time="2025-05-17T00:28:56.885534745Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-8f77d7b6c-7b9h8,Uid:fbe11bdf-a1f6-4b6d-b48f-fb37ea5d6cef,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fe17dae483774da4ab6cf9076c438f0a7b25327c06e1fc021de4c38b2ea436fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:28:56.885868 kubelet[2929]: E0517 00:28:56.885840 2929 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe17dae483774da4ab6cf9076c438f0a7b25327c06e1fc021de4c38b2ea436fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:28:56.885925 kubelet[2929]: E0517 00:28:56.885874 2929 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe17dae483774da4ab6cf9076c438f0a7b25327c06e1fc021de4c38b2ea436fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-8f77d7b6c-7b9h8" May 17 00:28:56.885925 kubelet[2929]: E0517 00:28:56.885888 2929 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe17dae483774da4ab6cf9076c438f0a7b25327c06e1fc021de4c38b2ea436fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-8f77d7b6c-7b9h8" May 17 00:28:56.885975 kubelet[2929]: E0517 00:28:56.885924 2929 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-8f77d7b6c-7b9h8_calico-system(fbe11bdf-a1f6-4b6d-b48f-fb37ea5d6cef)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-8f77d7b6c-7b9h8_calico-system(fbe11bdf-a1f6-4b6d-b48f-fb37ea5d6cef)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fe17dae483774da4ab6cf9076c438f0a7b25327c06e1fc021de4c38b2ea436fc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-8f77d7b6c-7b9h8" podUID="fbe11bdf-a1f6-4b6d-b48f-fb37ea5d6cef" May 17 00:28:56.886437 
kubelet[2929]: E0517 00:28:56.886408 2929 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"78637948a433285e6fa8a58ed70f2d3570b7dffe22db8a4b90a18330de73c023\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:28:56.886470 kubelet[2929]: E0517 00:28:56.886439 2929 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"78637948a433285e6fa8a58ed70f2d3570b7dffe22db8a4b90a18330de73c023\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-2xxwm" May 17 00:28:56.886470 kubelet[2929]: E0517 00:28:56.886465 2929 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"78637948a433285e6fa8a58ed70f2d3570b7dffe22db8a4b90a18330de73c023\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-2xxwm" May 17 00:28:56.886514 kubelet[2929]: E0517 00:28:56.886489 2929 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-2xxwm_kube-system(cea38a36-7e2f-400d-bcde-dfc6cb61506d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-2xxwm_kube-system(cea38a36-7e2f-400d-bcde-dfc6cb61506d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"78637948a433285e6fa8a58ed70f2d3570b7dffe22db8a4b90a18330de73c023\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-2xxwm" podUID="cea38a36-7e2f-400d-bcde-dfc6cb61506d" May 17 00:28:56.887566 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fe17dae483774da4ab6cf9076c438f0a7b25327c06e1fc021de4c38b2ea436fc-shm.mount: Deactivated successfully. May 17 00:28:56.887669 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-78637948a433285e6fa8a58ed70f2d3570b7dffe22db8a4b90a18330de73c023-shm.mount: Deactivated successfully. 
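Eight distinct sandboxes fail in this section (csi-node-driver, the two coredns replicas, the two calico-apiserver replicas, whisker, goldmane, and calico-kube-controllers), all for the same missing-nodename reason. When reading a saved journal it can help to reduce each kubelet "Error syncing pod" line to its sandbox-ID/pod pair; the sketch below is written against the exact line format shown above, and the sample input is a shortened copy of the csi-node-driver entry.

```go
// Tiny parser for the kubelet error lines in this log. The regular expressions
// assume only the format visible above: a 64-hex-character containerd sandbox
// ID somewhere in the line, and trailing pod="..." podUID="..." fields.
package main

import (
	"fmt"
	"regexp"
)

func main() {
	sandboxRe := regexp.MustCompile(`[0-9a-f]{64}`)
	podRe := regexp.MustCompile(`pod="([^"]+)" podUID="([0-9a-f-]+)"`)

	// Shortened sample copied from the csi-node-driver-jv7dj entry above.
	line := `E0517 00:28:56.567789 2929 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to CreatePodSandbox: sandbox ed52c75a5f2f17f0b412f707461e80af7ef123c353e646a573b930f578a757ee could not be set up" pod="calico-system/csi-node-driver-jv7dj" podUID="9def7614-7d88-42d1-ba99-91a4539e16ec"`

	sandbox := sandboxRe.FindString(line)
	pod := podRe.FindStringSubmatch(line)
	if sandbox != "" && pod != nil {
		fmt.Printf("sandbox %s… -> pod %s (uid %s)\n", sandbox[:12], pod[1], pod[2])
	}
}
```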
May 17 00:28:57.490816 kubelet[2929]: I0517 00:28:57.490780 2929 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6d5b6a71462253a75e687c6f7e1707e9bc9a8b91c3f016007cd47ed84dea64b4" May 17 00:28:57.494934 kubelet[2929]: I0517 00:28:57.494496 2929 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ec732d11f2bf14b409433f41dbbf61dbba0fe798526cc79e12a921e888b50f06" May 17 00:28:57.494994 containerd[1620]: time="2025-05-17T00:28:57.494874405Z" level=info msg="StopPodSandbox for \"ec732d11f2bf14b409433f41dbbf61dbba0fe798526cc79e12a921e888b50f06\"" May 17 00:28:57.496825 containerd[1620]: time="2025-05-17T00:28:57.496381026Z" level=info msg="Ensure that sandbox ec732d11f2bf14b409433f41dbbf61dbba0fe798526cc79e12a921e888b50f06 in task-service has been cleanup successfully" May 17 00:28:57.497838 containerd[1620]: time="2025-05-17T00:28:57.497807015Z" level=info msg="StopPodSandbox for \"6d5b6a71462253a75e687c6f7e1707e9bc9a8b91c3f016007cd47ed84dea64b4\"" May 17 00:28:57.498508 containerd[1620]: time="2025-05-17T00:28:57.498403727Z" level=info msg="Ensure that sandbox 6d5b6a71462253a75e687c6f7e1707e9bc9a8b91c3f016007cd47ed84dea64b4 in task-service has been cleanup successfully" May 17 00:28:57.498992 kubelet[2929]: I0517 00:28:57.498720 2929 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe17dae483774da4ab6cf9076c438f0a7b25327c06e1fc021de4c38b2ea436fc" May 17 00:28:57.499299 containerd[1620]: time="2025-05-17T00:28:57.499273661Z" level=info msg="StopPodSandbox for \"fe17dae483774da4ab6cf9076c438f0a7b25327c06e1fc021de4c38b2ea436fc\"" May 17 00:28:57.500616 containerd[1620]: time="2025-05-17T00:28:57.500574355Z" level=info msg="Ensure that sandbox fe17dae483774da4ab6cf9076c438f0a7b25327c06e1fc021de4c38b2ea436fc in task-service has been cleanup successfully" May 17 00:28:57.502764 kubelet[2929]: I0517 00:28:57.502697 2929 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="469fb90e5cdf48ce4c934dd8995efe7535d98268c6582305f5b9cc1bbb6b542f" May 17 00:28:57.503996 containerd[1620]: time="2025-05-17T00:28:57.503971800Z" level=info msg="StopPodSandbox for \"469fb90e5cdf48ce4c934dd8995efe7535d98268c6582305f5b9cc1bbb6b542f\"" May 17 00:28:57.504140 containerd[1620]: time="2025-05-17T00:28:57.504117223Z" level=info msg="Ensure that sandbox 469fb90e5cdf48ce4c934dd8995efe7535d98268c6582305f5b9cc1bbb6b542f in task-service has been cleanup successfully" May 17 00:28:57.507936 kubelet[2929]: I0517 00:28:57.507902 2929 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ed52c75a5f2f17f0b412f707461e80af7ef123c353e646a573b930f578a757ee" May 17 00:28:57.510622 containerd[1620]: time="2025-05-17T00:28:57.510598701Z" level=info msg="StopPodSandbox for \"ed52c75a5f2f17f0b412f707461e80af7ef123c353e646a573b930f578a757ee\"" May 17 00:28:57.510863 containerd[1620]: time="2025-05-17T00:28:57.510835437Z" level=info msg="Ensure that sandbox ed52c75a5f2f17f0b412f707461e80af7ef123c353e646a573b930f578a757ee in task-service has been cleanup successfully" May 17 00:28:57.515485 kubelet[2929]: I0517 00:28:57.515363 2929 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="28b1d2f0511a38a050bed534b4b457408e1e18f400f08c7a5e18726479603536" May 17 00:28:57.517753 containerd[1620]: time="2025-05-17T00:28:57.517718701Z" level=info msg="StopPodSandbox for \"28b1d2f0511a38a050bed534b4b457408e1e18f400f08c7a5e18726479603536\"" May 17 00:28:57.518318 
kubelet[2929]: I0517 00:28:57.518214 2929 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="78637948a433285e6fa8a58ed70f2d3570b7dffe22db8a4b90a18330de73c023" May 17 00:28:57.518415 containerd[1620]: time="2025-05-17T00:28:57.518287240Z" level=info msg="Ensure that sandbox 28b1d2f0511a38a050bed534b4b457408e1e18f400f08c7a5e18726479603536 in task-service has been cleanup successfully" May 17 00:28:57.520697 containerd[1620]: time="2025-05-17T00:28:57.520677981Z" level=info msg="StopPodSandbox for \"78637948a433285e6fa8a58ed70f2d3570b7dffe22db8a4b90a18330de73c023\"" May 17 00:28:57.521686 kubelet[2929]: I0517 00:28:57.521673 2929 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="70ec868d271dd1be74cf28e30e6cd353963f0ab338e242d0f7dcd58ec1993bf9" May 17 00:28:57.523503 containerd[1620]: time="2025-05-17T00:28:57.523330576Z" level=info msg="Ensure that sandbox 78637948a433285e6fa8a58ed70f2d3570b7dffe22db8a4b90a18330de73c023 in task-service has been cleanup successfully" May 17 00:28:57.523791 containerd[1620]: time="2025-05-17T00:28:57.523775242Z" level=info msg="StopPodSandbox for \"70ec868d271dd1be74cf28e30e6cd353963f0ab338e242d0f7dcd58ec1993bf9\"" May 17 00:28:57.523955 containerd[1620]: time="2025-05-17T00:28:57.523940652Z" level=info msg="Ensure that sandbox 70ec868d271dd1be74cf28e30e6cd353963f0ab338e242d0f7dcd58ec1993bf9 in task-service has been cleanup successfully" May 17 00:28:57.580520 containerd[1620]: time="2025-05-17T00:28:57.580481657Z" level=error msg="StopPodSandbox for \"28b1d2f0511a38a050bed534b4b457408e1e18f400f08c7a5e18726479603536\" failed" error="failed to destroy network for sandbox \"28b1d2f0511a38a050bed534b4b457408e1e18f400f08c7a5e18726479603536\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:28:57.581118 kubelet[2929]: E0517 00:28:57.580836 2929 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"28b1d2f0511a38a050bed534b4b457408e1e18f400f08c7a5e18726479603536\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="28b1d2f0511a38a050bed534b4b457408e1e18f400f08c7a5e18726479603536" May 17 00:28:57.581118 kubelet[2929]: E0517 00:28:57.580967 2929 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"28b1d2f0511a38a050bed534b4b457408e1e18f400f08c7a5e18726479603536"} May 17 00:28:57.581118 kubelet[2929]: E0517 00:28:57.581059 2929 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0ffb55fb-0336-49f4-8e59-ee32d13fa830\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"28b1d2f0511a38a050bed534b4b457408e1e18f400f08c7a5e18726479603536\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:28:57.581118 kubelet[2929]: E0517 00:28:57.581080 2929 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0ffb55fb-0336-49f4-8e59-ee32d13fa830\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"28b1d2f0511a38a050bed534b4b457408e1e18f400f08c7a5e18726479603536\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5c9dc4c697-5tjgw" podUID="0ffb55fb-0336-49f4-8e59-ee32d13fa830" May 17 00:28:57.582335 containerd[1620]: time="2025-05-17T00:28:57.582284865Z" level=error msg="StopPodSandbox for \"fe17dae483774da4ab6cf9076c438f0a7b25327c06e1fc021de4c38b2ea436fc\" failed" error="failed to destroy network for sandbox \"fe17dae483774da4ab6cf9076c438f0a7b25327c06e1fc021de4c38b2ea436fc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:28:57.582607 kubelet[2929]: E0517 00:28:57.582520 2929 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"fe17dae483774da4ab6cf9076c438f0a7b25327c06e1fc021de4c38b2ea436fc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="fe17dae483774da4ab6cf9076c438f0a7b25327c06e1fc021de4c38b2ea436fc" May 17 00:28:57.582607 kubelet[2929]: E0517 00:28:57.582548 2929 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"fe17dae483774da4ab6cf9076c438f0a7b25327c06e1fc021de4c38b2ea436fc"} May 17 00:28:57.582607 kubelet[2929]: E0517 00:28:57.582572 2929 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fbe11bdf-a1f6-4b6d-b48f-fb37ea5d6cef\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fe17dae483774da4ab6cf9076c438f0a7b25327c06e1fc021de4c38b2ea436fc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:28:57.582607 kubelet[2929]: E0517 00:28:57.582588 2929 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fbe11bdf-a1f6-4b6d-b48f-fb37ea5d6cef\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fe17dae483774da4ab6cf9076c438f0a7b25327c06e1fc021de4c38b2ea436fc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-8f77d7b6c-7b9h8" podUID="fbe11bdf-a1f6-4b6d-b48f-fb37ea5d6cef" May 17 00:28:57.587976 containerd[1620]: time="2025-05-17T00:28:57.587954109Z" level=error msg="StopPodSandbox for \"70ec868d271dd1be74cf28e30e6cd353963f0ab338e242d0f7dcd58ec1993bf9\" failed" error="failed to destroy network for sandbox \"70ec868d271dd1be74cf28e30e6cd353963f0ab338e242d0f7dcd58ec1993bf9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:28:57.588352 kubelet[2929]: E0517 00:28:57.588259 2929 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"70ec868d271dd1be74cf28e30e6cd353963f0ab338e242d0f7dcd58ec1993bf9\": 
plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="70ec868d271dd1be74cf28e30e6cd353963f0ab338e242d0f7dcd58ec1993bf9" May 17 00:28:57.588352 kubelet[2929]: E0517 00:28:57.588291 2929 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"70ec868d271dd1be74cf28e30e6cd353963f0ab338e242d0f7dcd58ec1993bf9"} May 17 00:28:57.588352 kubelet[2929]: E0517 00:28:57.588315 2929 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0af51042-daa9-4020-979a-c14dc1a38805\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"70ec868d271dd1be74cf28e30e6cd353963f0ab338e242d0f7dcd58ec1993bf9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:28:57.588352 kubelet[2929]: E0517 00:28:57.588332 2929 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0af51042-daa9-4020-979a-c14dc1a38805\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"70ec868d271dd1be74cf28e30e6cd353963f0ab338e242d0f7dcd58ec1993bf9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-65b5bd8c4b-82zlb" podUID="0af51042-daa9-4020-979a-c14dc1a38805" May 17 00:28:57.589806 containerd[1620]: time="2025-05-17T00:28:57.589784267Z" level=error msg="StopPodSandbox for \"ec732d11f2bf14b409433f41dbbf61dbba0fe798526cc79e12a921e888b50f06\" failed" error="failed to destroy network for sandbox \"ec732d11f2bf14b409433f41dbbf61dbba0fe798526cc79e12a921e888b50f06\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:28:57.590019 kubelet[2929]: E0517 00:28:57.590000 2929 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ec732d11f2bf14b409433f41dbbf61dbba0fe798526cc79e12a921e888b50f06\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ec732d11f2bf14b409433f41dbbf61dbba0fe798526cc79e12a921e888b50f06" May 17 00:28:57.590191 kubelet[2929]: E0517 00:28:57.590125 2929 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ec732d11f2bf14b409433f41dbbf61dbba0fe798526cc79e12a921e888b50f06"} May 17 00:28:57.590191 kubelet[2929]: E0517 00:28:57.590152 2929 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0cc79bb9-7a42-43e7-a121-7181de14309d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ec732d11f2bf14b409433f41dbbf61dbba0fe798526cc79e12a921e888b50f06\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:28:57.590191 kubelet[2929]: E0517 
00:28:57.590171 2929 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0cc79bb9-7a42-43e7-a121-7181de14309d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ec732d11f2bf14b409433f41dbbf61dbba0fe798526cc79e12a921e888b50f06\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6b98696cc-8lr55" podUID="0cc79bb9-7a42-43e7-a121-7181de14309d" May 17 00:28:57.596686 containerd[1620]: time="2025-05-17T00:28:57.596284532Z" level=error msg="StopPodSandbox for \"6d5b6a71462253a75e687c6f7e1707e9bc9a8b91c3f016007cd47ed84dea64b4\" failed" error="failed to destroy network for sandbox \"6d5b6a71462253a75e687c6f7e1707e9bc9a8b91c3f016007cd47ed84dea64b4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:28:57.596893 kubelet[2929]: E0517 00:28:57.596441 2929 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6d5b6a71462253a75e687c6f7e1707e9bc9a8b91c3f016007cd47ed84dea64b4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6d5b6a71462253a75e687c6f7e1707e9bc9a8b91c3f016007cd47ed84dea64b4" May 17 00:28:57.596893 kubelet[2929]: E0517 00:28:57.596472 2929 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6d5b6a71462253a75e687c6f7e1707e9bc9a8b91c3f016007cd47ed84dea64b4"} May 17 00:28:57.596893 kubelet[2929]: E0517 00:28:57.596496 2929 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2834e46b-4ada-426a-b1e7-b513f359ad04\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6d5b6a71462253a75e687c6f7e1707e9bc9a8b91c3f016007cd47ed84dea64b4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:28:57.596893 kubelet[2929]: E0517 00:28:57.596518 2929 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2834e46b-4ada-426a-b1e7-b513f359ad04\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6d5b6a71462253a75e687c6f7e1707e9bc9a8b91c3f016007cd47ed84dea64b4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-ppcxd" podUID="2834e46b-4ada-426a-b1e7-b513f359ad04" May 17 00:28:57.599067 containerd[1620]: time="2025-05-17T00:28:57.598195693Z" level=error msg="StopPodSandbox for \"469fb90e5cdf48ce4c934dd8995efe7535d98268c6582305f5b9cc1bbb6b542f\" failed" error="failed to destroy network for sandbox \"469fb90e5cdf48ce4c934dd8995efe7535d98268c6582305f5b9cc1bbb6b542f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 
17 00:28:57.599110 kubelet[2929]: E0517 00:28:57.598399 2929 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"469fb90e5cdf48ce4c934dd8995efe7535d98268c6582305f5b9cc1bbb6b542f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="469fb90e5cdf48ce4c934dd8995efe7535d98268c6582305f5b9cc1bbb6b542f" May 17 00:28:57.599110 kubelet[2929]: E0517 00:28:57.598454 2929 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"469fb90e5cdf48ce4c934dd8995efe7535d98268c6582305f5b9cc1bbb6b542f"} May 17 00:28:57.599110 kubelet[2929]: E0517 00:28:57.598477 2929 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"62172615-a35b-40f8-8043-de4d70d023f5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"469fb90e5cdf48ce4c934dd8995efe7535d98268c6582305f5b9cc1bbb6b542f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:28:57.599110 kubelet[2929]: E0517 00:28:57.598493 2929 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"62172615-a35b-40f8-8043-de4d70d023f5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"469fb90e5cdf48ce4c934dd8995efe7535d98268c6582305f5b9cc1bbb6b542f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-65b5bd8c4b-9dd9g" podUID="62172615-a35b-40f8-8043-de4d70d023f5" May 17 00:28:57.605281 containerd[1620]: time="2025-05-17T00:28:57.605134852Z" level=error msg="StopPodSandbox for \"78637948a433285e6fa8a58ed70f2d3570b7dffe22db8a4b90a18330de73c023\" failed" error="failed to destroy network for sandbox \"78637948a433285e6fa8a58ed70f2d3570b7dffe22db8a4b90a18330de73c023\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:28:57.605549 kubelet[2929]: E0517 00:28:57.605498 2929 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"78637948a433285e6fa8a58ed70f2d3570b7dffe22db8a4b90a18330de73c023\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="78637948a433285e6fa8a58ed70f2d3570b7dffe22db8a4b90a18330de73c023" May 17 00:28:57.605695 kubelet[2929]: E0517 00:28:57.605628 2929 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"78637948a433285e6fa8a58ed70f2d3570b7dffe22db8a4b90a18330de73c023"} May 17 00:28:57.605695 kubelet[2929]: E0517 00:28:57.605658 2929 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"cea38a36-7e2f-400d-bcde-dfc6cb61506d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"78637948a433285e6fa8a58ed70f2d3570b7dffe22db8a4b90a18330de73c023\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:28:57.605695 kubelet[2929]: E0517 00:28:57.605674 2929 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"cea38a36-7e2f-400d-bcde-dfc6cb61506d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"78637948a433285e6fa8a58ed70f2d3570b7dffe22db8a4b90a18330de73c023\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-2xxwm" podUID="cea38a36-7e2f-400d-bcde-dfc6cb61506d" May 17 00:28:57.609342 containerd[1620]: time="2025-05-17T00:28:57.609316340Z" level=error msg="StopPodSandbox for \"ed52c75a5f2f17f0b412f707461e80af7ef123c353e646a573b930f578a757ee\" failed" error="failed to destroy network for sandbox \"ed52c75a5f2f17f0b412f707461e80af7ef123c353e646a573b930f578a757ee\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:28:57.609490 kubelet[2929]: E0517 00:28:57.609465 2929 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ed52c75a5f2f17f0b412f707461e80af7ef123c353e646a573b930f578a757ee\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ed52c75a5f2f17f0b412f707461e80af7ef123c353e646a573b930f578a757ee" May 17 00:28:57.609490 kubelet[2929]: E0517 00:28:57.609494 2929 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ed52c75a5f2f17f0b412f707461e80af7ef123c353e646a573b930f578a757ee"} May 17 00:28:57.609587 kubelet[2929]: E0517 00:28:57.609521 2929 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9def7614-7d88-42d1-ba99-91a4539e16ec\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ed52c75a5f2f17f0b412f707461e80af7ef123c353e646a573b930f578a757ee\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:28:57.609587 kubelet[2929]: E0517 00:28:57.609537 2929 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9def7614-7d88-42d1-ba99-91a4539e16ec\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ed52c75a5f2f17f0b412f707461e80af7ef123c353e646a573b930f578a757ee\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-jv7dj" podUID="9def7614-7d88-42d1-ba99-91a4539e16ec" May 17 00:29:00.171845 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3047666383.mount: Deactivated successfully. 
May 17 00:29:00.222252 containerd[1620]: time="2025-05-17T00:29:00.222183200Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.0: active requests=0, bytes read=156396372" May 17 00:29:00.246759 containerd[1620]: time="2025-05-17T00:29:00.246650081Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:29:00.263267 containerd[1620]: time="2025-05-17T00:29:00.263069761Z" level=info msg="ImageCreate event name:\"sha256:d12dae9bc0999225efe30fd5618bcf2195709d54ed2840234f5006aab5f7d721\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:29:00.263668 containerd[1620]: time="2025-05-17T00:29:00.263637098Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:7cb61ea47ca0a8e6d0526a42da4f1e399b37ccd13339d0776d272465cb7ee012\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:29:00.268301 containerd[1620]: time="2025-05-17T00:29:00.268259151Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.0\" with image id \"sha256:d12dae9bc0999225efe30fd5618bcf2195709d54ed2840234f5006aab5f7d721\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:7cb61ea47ca0a8e6d0526a42da4f1e399b37ccd13339d0776d272465cb7ee012\", size \"156396234\" in 3.774763273s" May 17 00:29:00.268301 containerd[1620]: time="2025-05-17T00:29:00.268294869Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.0\" returns image reference \"sha256:d12dae9bc0999225efe30fd5618bcf2195709d54ed2840234f5006aab5f7d721\"" May 17 00:29:00.341307 containerd[1620]: time="2025-05-17T00:29:00.341270592Z" level=info msg="CreateContainer within sandbox \"5509f790450feda8c14f8c845e32aaed955964b368a99ca169b8eb4acab7219a\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 17 00:29:00.413957 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2875980043.mount: Deactivated successfully. May 17 00:29:00.435018 containerd[1620]: time="2025-05-17T00:29:00.434776992Z" level=info msg="CreateContainer within sandbox \"5509f790450feda8c14f8c845e32aaed955964b368a99ca169b8eb4acab7219a\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"3f1761130c0f3181aef79309d949df69457b0f59c43febe1a845132d6dd72f10\"" May 17 00:29:00.450883 containerd[1620]: time="2025-05-17T00:29:00.450753178Z" level=info msg="StartContainer for \"3f1761130c0f3181aef79309d949df69457b0f59c43febe1a845132d6dd72f10\"" May 17 00:29:00.570639 containerd[1620]: time="2025-05-17T00:29:00.570499388Z" level=info msg="StartContainer for \"3f1761130c0f3181aef79309d949df69457b0f59c43febe1a845132d6dd72f10\" returns successfully" May 17 00:29:00.640244 kubelet[2929]: E0517 00:29:00.639998 2929 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods/besteffort/pode2ac4d5e-7726-4c62-8451-832e05281ef4/3f1761130c0f3181aef79309d949df69457b0f59c43febe1a845132d6dd72f10\": RecentStats: unable to find data in memory cache]" May 17 00:29:00.664129 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 17 00:29:00.667840 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. 
May 17 00:29:00.857457 containerd[1620]: time="2025-05-17T00:29:00.857411415Z" level=info msg="StopPodSandbox for \"28b1d2f0511a38a050bed534b4b457408e1e18f400f08c7a5e18726479603536\"" May 17 00:29:01.173174 containerd[1620]: 2025-05-17 00:29:00.937 [INFO][4099] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="28b1d2f0511a38a050bed534b4b457408e1e18f400f08c7a5e18726479603536" May 17 00:29:01.173174 containerd[1620]: 2025-05-17 00:29:00.938 [INFO][4099] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="28b1d2f0511a38a050bed534b4b457408e1e18f400f08c7a5e18726479603536" iface="eth0" netns="/var/run/netns/cni-79140b0c-95fc-f346-a935-11656c71e951" May 17 00:29:01.173174 containerd[1620]: 2025-05-17 00:29:00.939 [INFO][4099] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="28b1d2f0511a38a050bed534b4b457408e1e18f400f08c7a5e18726479603536" iface="eth0" netns="/var/run/netns/cni-79140b0c-95fc-f346-a935-11656c71e951" May 17 00:29:01.173174 containerd[1620]: 2025-05-17 00:29:00.940 [INFO][4099] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="28b1d2f0511a38a050bed534b4b457408e1e18f400f08c7a5e18726479603536" iface="eth0" netns="/var/run/netns/cni-79140b0c-95fc-f346-a935-11656c71e951" May 17 00:29:01.173174 containerd[1620]: 2025-05-17 00:29:00.940 [INFO][4099] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="28b1d2f0511a38a050bed534b4b457408e1e18f400f08c7a5e18726479603536" May 17 00:29:01.173174 containerd[1620]: 2025-05-17 00:29:00.940 [INFO][4099] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="28b1d2f0511a38a050bed534b4b457408e1e18f400f08c7a5e18726479603536" May 17 00:29:01.173174 containerd[1620]: 2025-05-17 00:29:01.153 [INFO][4106] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="28b1d2f0511a38a050bed534b4b457408e1e18f400f08c7a5e18726479603536" HandleID="k8s-pod-network.28b1d2f0511a38a050bed534b4b457408e1e18f400f08c7a5e18726479603536" Workload="ci--4081--3--3--n--556bea0d1e-k8s-whisker--5c9dc4c697--5tjgw-eth0" May 17 00:29:01.173174 containerd[1620]: 2025-05-17 00:29:01.158 [INFO][4106] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:29:01.173174 containerd[1620]: 2025-05-17 00:29:01.158 [INFO][4106] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:29:01.173174 containerd[1620]: 2025-05-17 00:29:01.166 [WARNING][4106] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="28b1d2f0511a38a050bed534b4b457408e1e18f400f08c7a5e18726479603536" HandleID="k8s-pod-network.28b1d2f0511a38a050bed534b4b457408e1e18f400f08c7a5e18726479603536" Workload="ci--4081--3--3--n--556bea0d1e-k8s-whisker--5c9dc4c697--5tjgw-eth0" May 17 00:29:01.173174 containerd[1620]: 2025-05-17 00:29:01.166 [INFO][4106] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="28b1d2f0511a38a050bed534b4b457408e1e18f400f08c7a5e18726479603536" HandleID="k8s-pod-network.28b1d2f0511a38a050bed534b4b457408e1e18f400f08c7a5e18726479603536" Workload="ci--4081--3--3--n--556bea0d1e-k8s-whisker--5c9dc4c697--5tjgw-eth0" May 17 00:29:01.173174 containerd[1620]: 2025-05-17 00:29:01.168 [INFO][4106] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:29:01.173174 containerd[1620]: 2025-05-17 00:29:01.169 [INFO][4099] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="28b1d2f0511a38a050bed534b4b457408e1e18f400f08c7a5e18726479603536" May 17 00:29:01.173174 containerd[1620]: time="2025-05-17T00:29:01.171523398Z" level=info msg="TearDown network for sandbox \"28b1d2f0511a38a050bed534b4b457408e1e18f400f08c7a5e18726479603536\" successfully" May 17 00:29:01.173174 containerd[1620]: time="2025-05-17T00:29:01.171544548Z" level=info msg="StopPodSandbox for \"28b1d2f0511a38a050bed534b4b457408e1e18f400f08c7a5e18726479603536\" returns successfully" May 17 00:29:01.178520 systemd[1]: run-netns-cni\x2d79140b0c\x2d95fc\x2df346\x2da935\x2d11656c71e951.mount: Deactivated successfully. May 17 00:29:01.215235 systemd-resolved[1509]: Under memory pressure, flushing caches. May 17 00:29:01.220315 systemd-journald[1178]: Under memory pressure, flushing caches. May 17 00:29:01.215299 systemd-resolved[1509]: Flushed all caches. May 17 00:29:01.312395 kubelet[2929]: I0517 00:29:01.311991 2929 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0ffb55fb-0336-49f4-8e59-ee32d13fa830-whisker-ca-bundle\") pod \"0ffb55fb-0336-49f4-8e59-ee32d13fa830\" (UID: \"0ffb55fb-0336-49f4-8e59-ee32d13fa830\") " May 17 00:29:01.312395 kubelet[2929]: I0517 00:29:01.312067 2929 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0ffb55fb-0336-49f4-8e59-ee32d13fa830-whisker-backend-key-pair\") pod \"0ffb55fb-0336-49f4-8e59-ee32d13fa830\" (UID: \"0ffb55fb-0336-49f4-8e59-ee32d13fa830\") " May 17 00:29:01.312395 kubelet[2929]: I0517 00:29:01.312093 2929 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s7q64\" (UniqueName: \"kubernetes.io/projected/0ffb55fb-0336-49f4-8e59-ee32d13fa830-kube-api-access-s7q64\") pod \"0ffb55fb-0336-49f4-8e59-ee32d13fa830\" (UID: \"0ffb55fb-0336-49f4-8e59-ee32d13fa830\") " May 17 00:29:01.328224 systemd[1]: var-lib-kubelet-pods-0ffb55fb\x2d0336\x2d49f4\x2d8e59\x2dee32d13fa830-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ds7q64.mount: Deactivated successfully. May 17 00:29:01.329925 kubelet[2929]: I0517 00:29:01.327453 2929 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ffb55fb-0336-49f4-8e59-ee32d13fa830-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "0ffb55fb-0336-49f4-8e59-ee32d13fa830" (UID: "0ffb55fb-0336-49f4-8e59-ee32d13fa830"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" May 17 00:29:01.330136 kubelet[2929]: I0517 00:29:01.330117 2929 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ffb55fb-0336-49f4-8e59-ee32d13fa830-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "0ffb55fb-0336-49f4-8e59-ee32d13fa830" (UID: "0ffb55fb-0336-49f4-8e59-ee32d13fa830"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 17 00:29:01.330236 kubelet[2929]: I0517 00:29:01.325393 2929 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ffb55fb-0336-49f4-8e59-ee32d13fa830-kube-api-access-s7q64" (OuterVolumeSpecName: "kube-api-access-s7q64") pod "0ffb55fb-0336-49f4-8e59-ee32d13fa830" (UID: "0ffb55fb-0336-49f4-8e59-ee32d13fa830"). InnerVolumeSpecName "kube-api-access-s7q64". 
PluginName "kubernetes.io/projected", VolumeGidValue "" May 17 00:29:01.332381 systemd[1]: var-lib-kubelet-pods-0ffb55fb\x2d0336\x2d49f4\x2d8e59\x2dee32d13fa830-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. May 17 00:29:01.413214 kubelet[2929]: I0517 00:29:01.413165 2929 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0ffb55fb-0336-49f4-8e59-ee32d13fa830-whisker-ca-bundle\") on node \"ci-4081-3-3-n-556bea0d1e\" DevicePath \"\"" May 17 00:29:01.413214 kubelet[2929]: I0517 00:29:01.413206 2929 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s7q64\" (UniqueName: \"kubernetes.io/projected/0ffb55fb-0336-49f4-8e59-ee32d13fa830-kube-api-access-s7q64\") on node \"ci-4081-3-3-n-556bea0d1e\" DevicePath \"\"" May 17 00:29:01.413214 kubelet[2929]: I0517 00:29:01.413216 2929 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0ffb55fb-0336-49f4-8e59-ee32d13fa830-whisker-backend-key-pair\") on node \"ci-4081-3-3-n-556bea0d1e\" DevicePath \"\"" May 17 00:29:01.634705 kubelet[2929]: I0517 00:29:01.630830 2929 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-mfnx9" podStartSLOduration=2.48101838 podStartE2EDuration="14.627952293s" podCreationTimestamp="2025-05-17 00:28:47 +0000 UTC" firstStartedPulling="2025-05-17 00:28:48.132383506 +0000 UTC m=+17.854813821" lastFinishedPulling="2025-05-17 00:29:00.279317419 +0000 UTC m=+30.001747734" observedRunningTime="2025-05-17 00:29:01.627122013 +0000 UTC m=+31.349552348" watchObservedRunningTime="2025-05-17 00:29:01.627952293 +0000 UTC m=+31.350382618" May 17 00:29:01.819282 kubelet[2929]: I0517 00:29:01.819226 2929 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/edf2dfa8-9e01-4421-aebb-92beec01d94f-whisker-backend-key-pair\") pod \"whisker-5f94896dd9-9mn8s\" (UID: \"edf2dfa8-9e01-4421-aebb-92beec01d94f\") " pod="calico-system/whisker-5f94896dd9-9mn8s" May 17 00:29:01.819661 kubelet[2929]: I0517 00:29:01.819287 2929 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67frm\" (UniqueName: \"kubernetes.io/projected/edf2dfa8-9e01-4421-aebb-92beec01d94f-kube-api-access-67frm\") pod \"whisker-5f94896dd9-9mn8s\" (UID: \"edf2dfa8-9e01-4421-aebb-92beec01d94f\") " pod="calico-system/whisker-5f94896dd9-9mn8s" May 17 00:29:01.819661 kubelet[2929]: I0517 00:29:01.819321 2929 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/edf2dfa8-9e01-4421-aebb-92beec01d94f-whisker-ca-bundle\") pod \"whisker-5f94896dd9-9mn8s\" (UID: \"edf2dfa8-9e01-4421-aebb-92beec01d94f\") " pod="calico-system/whisker-5f94896dd9-9mn8s" May 17 00:29:01.967039 containerd[1620]: time="2025-05-17T00:29:01.966917864Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5f94896dd9-9mn8s,Uid:edf2dfa8-9e01-4421-aebb-92beec01d94f,Namespace:calico-system,Attempt:0,}" May 17 00:29:02.107784 systemd-networkd[1251]: cali0461d037d10: Link UP May 17 00:29:02.107977 systemd-networkd[1251]: cali0461d037d10: Gained carrier May 17 00:29:02.129963 containerd[1620]: 2025-05-17 00:29:02.014 [INFO][4127] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 17 
00:29:02.129963 containerd[1620]: 2025-05-17 00:29:02.024 [INFO][4127] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--n--556bea0d1e-k8s-whisker--5f94896dd9--9mn8s-eth0 whisker-5f94896dd9- calico-system edf2dfa8-9e01-4421-aebb-92beec01d94f 863 0 2025-05-17 00:29:01 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5f94896dd9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081-3-3-n-556bea0d1e whisker-5f94896dd9-9mn8s eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali0461d037d10 [] [] }} ContainerID="2a07c1e4f1785b1b9034a771773e04b8123623deecac354a07f821efbb13364b" Namespace="calico-system" Pod="whisker-5f94896dd9-9mn8s" WorkloadEndpoint="ci--4081--3--3--n--556bea0d1e-k8s-whisker--5f94896dd9--9mn8s-" May 17 00:29:02.129963 containerd[1620]: 2025-05-17 00:29:02.025 [INFO][4127] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2a07c1e4f1785b1b9034a771773e04b8123623deecac354a07f821efbb13364b" Namespace="calico-system" Pod="whisker-5f94896dd9-9mn8s" WorkloadEndpoint="ci--4081--3--3--n--556bea0d1e-k8s-whisker--5f94896dd9--9mn8s-eth0" May 17 00:29:02.129963 containerd[1620]: 2025-05-17 00:29:02.052 [INFO][4140] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2a07c1e4f1785b1b9034a771773e04b8123623deecac354a07f821efbb13364b" HandleID="k8s-pod-network.2a07c1e4f1785b1b9034a771773e04b8123623deecac354a07f821efbb13364b" Workload="ci--4081--3--3--n--556bea0d1e-k8s-whisker--5f94896dd9--9mn8s-eth0" May 17 00:29:02.129963 containerd[1620]: 2025-05-17 00:29:02.052 [INFO][4140] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2a07c1e4f1785b1b9034a771773e04b8123623deecac354a07f821efbb13364b" HandleID="k8s-pod-network.2a07c1e4f1785b1b9034a771773e04b8123623deecac354a07f821efbb13364b" Workload="ci--4081--3--3--n--556bea0d1e-k8s-whisker--5f94896dd9--9mn8s-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000233020), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-3-n-556bea0d1e", "pod":"whisker-5f94896dd9-9mn8s", "timestamp":"2025-05-17 00:29:02.052265646 +0000 UTC"}, Hostname:"ci-4081-3-3-n-556bea0d1e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:29:02.129963 containerd[1620]: 2025-05-17 00:29:02.052 [INFO][4140] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:29:02.129963 containerd[1620]: 2025-05-17 00:29:02.052 [INFO][4140] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:29:02.129963 containerd[1620]: 2025-05-17 00:29:02.052 [INFO][4140] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-n-556bea0d1e' May 17 00:29:02.129963 containerd[1620]: 2025-05-17 00:29:02.059 [INFO][4140] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2a07c1e4f1785b1b9034a771773e04b8123623deecac354a07f821efbb13364b" host="ci-4081-3-3-n-556bea0d1e" May 17 00:29:02.129963 containerd[1620]: 2025-05-17 00:29:02.067 [INFO][4140] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-3-n-556bea0d1e" May 17 00:29:02.129963 containerd[1620]: 2025-05-17 00:29:02.072 [INFO][4140] ipam/ipam.go 511: Trying affinity for 192.168.15.64/26 host="ci-4081-3-3-n-556bea0d1e" May 17 00:29:02.129963 containerd[1620]: 2025-05-17 00:29:02.074 [INFO][4140] ipam/ipam.go 158: Attempting to load block cidr=192.168.15.64/26 host="ci-4081-3-3-n-556bea0d1e" May 17 00:29:02.129963 containerd[1620]: 2025-05-17 00:29:02.075 [INFO][4140] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.15.64/26 host="ci-4081-3-3-n-556bea0d1e" May 17 00:29:02.129963 containerd[1620]: 2025-05-17 00:29:02.075 [INFO][4140] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.15.64/26 handle="k8s-pod-network.2a07c1e4f1785b1b9034a771773e04b8123623deecac354a07f821efbb13364b" host="ci-4081-3-3-n-556bea0d1e" May 17 00:29:02.129963 containerd[1620]: 2025-05-17 00:29:02.077 [INFO][4140] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.2a07c1e4f1785b1b9034a771773e04b8123623deecac354a07f821efbb13364b May 17 00:29:02.129963 containerd[1620]: 2025-05-17 00:29:02.080 [INFO][4140] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.15.64/26 handle="k8s-pod-network.2a07c1e4f1785b1b9034a771773e04b8123623deecac354a07f821efbb13364b" host="ci-4081-3-3-n-556bea0d1e" May 17 00:29:02.129963 containerd[1620]: 2025-05-17 00:29:02.086 [INFO][4140] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.15.65/26] block=192.168.15.64/26 handle="k8s-pod-network.2a07c1e4f1785b1b9034a771773e04b8123623deecac354a07f821efbb13364b" host="ci-4081-3-3-n-556bea0d1e" May 17 00:29:02.129963 containerd[1620]: 2025-05-17 00:29:02.086 [INFO][4140] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.15.65/26] handle="k8s-pod-network.2a07c1e4f1785b1b9034a771773e04b8123623deecac354a07f821efbb13364b" host="ci-4081-3-3-n-556bea0d1e" May 17 00:29:02.129963 containerd[1620]: 2025-05-17 00:29:02.086 [INFO][4140] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 17 00:29:02.129963 containerd[1620]: 2025-05-17 00:29:02.086 [INFO][4140] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.15.65/26] IPv6=[] ContainerID="2a07c1e4f1785b1b9034a771773e04b8123623deecac354a07f821efbb13364b" HandleID="k8s-pod-network.2a07c1e4f1785b1b9034a771773e04b8123623deecac354a07f821efbb13364b" Workload="ci--4081--3--3--n--556bea0d1e-k8s-whisker--5f94896dd9--9mn8s-eth0" May 17 00:29:02.130692 containerd[1620]: 2025-05-17 00:29:02.090 [INFO][4127] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2a07c1e4f1785b1b9034a771773e04b8123623deecac354a07f821efbb13364b" Namespace="calico-system" Pod="whisker-5f94896dd9-9mn8s" WorkloadEndpoint="ci--4081--3--3--n--556bea0d1e-k8s-whisker--5f94896dd9--9mn8s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--556bea0d1e-k8s-whisker--5f94896dd9--9mn8s-eth0", GenerateName:"whisker-5f94896dd9-", Namespace:"calico-system", SelfLink:"", UID:"edf2dfa8-9e01-4421-aebb-92beec01d94f", ResourceVersion:"863", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 29, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5f94896dd9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-556bea0d1e", ContainerID:"", Pod:"whisker-5f94896dd9-9mn8s", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.15.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali0461d037d10", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:29:02.130692 containerd[1620]: 2025-05-17 00:29:02.090 [INFO][4127] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.15.65/32] ContainerID="2a07c1e4f1785b1b9034a771773e04b8123623deecac354a07f821efbb13364b" Namespace="calico-system" Pod="whisker-5f94896dd9-9mn8s" WorkloadEndpoint="ci--4081--3--3--n--556bea0d1e-k8s-whisker--5f94896dd9--9mn8s-eth0" May 17 00:29:02.130692 containerd[1620]: 2025-05-17 00:29:02.090 [INFO][4127] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0461d037d10 ContainerID="2a07c1e4f1785b1b9034a771773e04b8123623deecac354a07f821efbb13364b" Namespace="calico-system" Pod="whisker-5f94896dd9-9mn8s" WorkloadEndpoint="ci--4081--3--3--n--556bea0d1e-k8s-whisker--5f94896dd9--9mn8s-eth0" May 17 00:29:02.130692 containerd[1620]: 2025-05-17 00:29:02.107 [INFO][4127] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2a07c1e4f1785b1b9034a771773e04b8123623deecac354a07f821efbb13364b" Namespace="calico-system" Pod="whisker-5f94896dd9-9mn8s" WorkloadEndpoint="ci--4081--3--3--n--556bea0d1e-k8s-whisker--5f94896dd9--9mn8s-eth0" May 17 00:29:02.130692 containerd[1620]: 2025-05-17 00:29:02.109 [INFO][4127] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2a07c1e4f1785b1b9034a771773e04b8123623deecac354a07f821efbb13364b" Namespace="calico-system" 
Pod="whisker-5f94896dd9-9mn8s" WorkloadEndpoint="ci--4081--3--3--n--556bea0d1e-k8s-whisker--5f94896dd9--9mn8s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--556bea0d1e-k8s-whisker--5f94896dd9--9mn8s-eth0", GenerateName:"whisker-5f94896dd9-", Namespace:"calico-system", SelfLink:"", UID:"edf2dfa8-9e01-4421-aebb-92beec01d94f", ResourceVersion:"863", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 29, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5f94896dd9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-556bea0d1e", ContainerID:"2a07c1e4f1785b1b9034a771773e04b8123623deecac354a07f821efbb13364b", Pod:"whisker-5f94896dd9-9mn8s", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.15.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali0461d037d10", MAC:"de:4b:5f:f9:1f:37", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:29:02.130692 containerd[1620]: 2025-05-17 00:29:02.125 [INFO][4127] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2a07c1e4f1785b1b9034a771773e04b8123623deecac354a07f821efbb13364b" Namespace="calico-system" Pod="whisker-5f94896dd9-9mn8s" WorkloadEndpoint="ci--4081--3--3--n--556bea0d1e-k8s-whisker--5f94896dd9--9mn8s-eth0" May 17 00:29:02.195496 containerd[1620]: time="2025-05-17T00:29:02.195340842Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:29:02.195967 containerd[1620]: time="2025-05-17T00:29:02.195840923Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:29:02.197475 containerd[1620]: time="2025-05-17T00:29:02.195923777Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:29:02.197475 containerd[1620]: time="2025-05-17T00:29:02.197384733Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:29:02.315756 containerd[1620]: time="2025-05-17T00:29:02.314956124Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5f94896dd9-9mn8s,Uid:edf2dfa8-9e01-4421-aebb-92beec01d94f,Namespace:calico-system,Attempt:0,} returns sandbox id \"2a07c1e4f1785b1b9034a771773e04b8123623deecac354a07f821efbb13364b\"" May 17 00:29:02.317944 containerd[1620]: time="2025-05-17T00:29:02.317906527Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 17 00:29:02.365720 kubelet[2929]: I0517 00:29:02.365524 2929 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ffb55fb-0336-49f4-8e59-ee32d13fa830" path="/var/lib/kubelet/pods/0ffb55fb-0336-49f4-8e59-ee32d13fa830/volumes" May 17 00:29:02.483052 kernel: bpftool[4318]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set May 17 00:29:02.589179 kubelet[2929]: I0517 00:29:02.589036 2929 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 00:29:02.626068 containerd[1620]: time="2025-05-17T00:29:02.625902424Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:29:02.628120 containerd[1620]: time="2025-05-17T00:29:02.627822852Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:29:02.628120 containerd[1620]: time="2025-05-17T00:29:02.627867596Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 17 00:29:02.628610 kubelet[2929]: E0517 00:29:02.628474 2929 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:29:02.631482 kubelet[2929]: E0517 00:29:02.630616 2929 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:29:02.636970 kubelet[2929]: E0517 00:29:02.636920 2929 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:c907cd8c04324328b30a0d9a2949a437,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-67frm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5f94896dd9-9mn8s_calico-system(edf2dfa8-9e01-4421-aebb-92beec01d94f): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:29:02.641681 containerd[1620]: time="2025-05-17T00:29:02.641649069Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 17 00:29:02.684535 systemd-networkd[1251]: vxlan.calico: Link UP May 17 00:29:02.684543 systemd-networkd[1251]: vxlan.calico: Gained carrier May 17 00:29:02.942739 containerd[1620]: time="2025-05-17T00:29:02.942641147Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:29:02.943921 containerd[1620]: time="2025-05-17T00:29:02.943857632Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:29:02.943979 containerd[1620]: time="2025-05-17T00:29:02.943952491Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 17 00:29:02.944220 kubelet[2929]: E0517 00:29:02.944177 2929 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": 
failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:29:02.944491 kubelet[2929]: E0517 00:29:02.944229 2929 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:29:02.945056 kubelet[2929]: E0517 00:29:02.944326 2929 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-67frm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5f94896dd9-9mn8s_calico-system(edf2dfa8-9e01-4421-aebb-92beec01d94f): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:29:02.954128 kubelet[2929]: E0517 00:29:02.954068 2929 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-5f94896dd9-9mn8s" podUID="edf2dfa8-9e01-4421-aebb-92beec01d94f" May 17 00:29:03.263357 systemd-resolved[1509]: Under memory pressure, flushing caches. May 17 00:29:03.266918 systemd-journald[1178]: Under memory pressure, flushing caches. May 17 00:29:03.263380 systemd-resolved[1509]: Flushed all caches. May 17 00:29:03.591373 kubelet[2929]: E0517 00:29:03.591321 2929 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-5f94896dd9-9mn8s" podUID="edf2dfa8-9e01-4421-aebb-92beec01d94f" May 17 00:29:03.839447 systemd-networkd[1251]: cali0461d037d10: Gained IPv6LL May 17 00:29:04.543251 systemd-networkd[1251]: vxlan.calico: Gained IPv6LL May 17 00:29:07.353330 kubelet[2929]: I0517 00:29:07.353286 2929 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 00:29:08.363395 containerd[1620]: time="2025-05-17T00:29:08.363158147Z" level=info msg="StopPodSandbox for \"78637948a433285e6fa8a58ed70f2d3570b7dffe22db8a4b90a18330de73c023\"" May 17 00:29:08.459778 containerd[1620]: 2025-05-17 00:29:08.425 [INFO][4461] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="78637948a433285e6fa8a58ed70f2d3570b7dffe22db8a4b90a18330de73c023" May 17 00:29:08.459778 containerd[1620]: 2025-05-17 00:29:08.425 [INFO][4461] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="78637948a433285e6fa8a58ed70f2d3570b7dffe22db8a4b90a18330de73c023" iface="eth0" netns="/var/run/netns/cni-5d4e0124-5abd-c086-7120-eac4eab753f3" May 17 00:29:08.459778 containerd[1620]: 2025-05-17 00:29:08.427 [INFO][4461] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="78637948a433285e6fa8a58ed70f2d3570b7dffe22db8a4b90a18330de73c023" iface="eth0" netns="/var/run/netns/cni-5d4e0124-5abd-c086-7120-eac4eab753f3" May 17 00:29:08.459778 containerd[1620]: 2025-05-17 00:29:08.427 [INFO][4461] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="78637948a433285e6fa8a58ed70f2d3570b7dffe22db8a4b90a18330de73c023" iface="eth0" netns="/var/run/netns/cni-5d4e0124-5abd-c086-7120-eac4eab753f3" May 17 00:29:08.459778 containerd[1620]: 2025-05-17 00:29:08.427 [INFO][4461] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="78637948a433285e6fa8a58ed70f2d3570b7dffe22db8a4b90a18330de73c023" May 17 00:29:08.459778 containerd[1620]: 2025-05-17 00:29:08.429 [INFO][4461] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="78637948a433285e6fa8a58ed70f2d3570b7dffe22db8a4b90a18330de73c023" May 17 00:29:08.459778 containerd[1620]: 2025-05-17 00:29:08.448 [INFO][4469] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="78637948a433285e6fa8a58ed70f2d3570b7dffe22db8a4b90a18330de73c023" HandleID="k8s-pod-network.78637948a433285e6fa8a58ed70f2d3570b7dffe22db8a4b90a18330de73c023" Workload="ci--4081--3--3--n--556bea0d1e-k8s-coredns--7c65d6cfc9--2xxwm-eth0" May 17 00:29:08.459778 containerd[1620]: 2025-05-17 00:29:08.448 [INFO][4469] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:29:08.459778 containerd[1620]: 2025-05-17 00:29:08.449 [INFO][4469] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:29:08.459778 containerd[1620]: 2025-05-17 00:29:08.454 [WARNING][4469] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="78637948a433285e6fa8a58ed70f2d3570b7dffe22db8a4b90a18330de73c023" HandleID="k8s-pod-network.78637948a433285e6fa8a58ed70f2d3570b7dffe22db8a4b90a18330de73c023" Workload="ci--4081--3--3--n--556bea0d1e-k8s-coredns--7c65d6cfc9--2xxwm-eth0" May 17 00:29:08.459778 containerd[1620]: 2025-05-17 00:29:08.454 [INFO][4469] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="78637948a433285e6fa8a58ed70f2d3570b7dffe22db8a4b90a18330de73c023" HandleID="k8s-pod-network.78637948a433285e6fa8a58ed70f2d3570b7dffe22db8a4b90a18330de73c023" Workload="ci--4081--3--3--n--556bea0d1e-k8s-coredns--7c65d6cfc9--2xxwm-eth0" May 17 00:29:08.459778 containerd[1620]: 2025-05-17 00:29:08.455 [INFO][4469] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:29:08.459778 containerd[1620]: 2025-05-17 00:29:08.457 [INFO][4461] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="78637948a433285e6fa8a58ed70f2d3570b7dffe22db8a4b90a18330de73c023" May 17 00:29:08.462411 containerd[1620]: time="2025-05-17T00:29:08.460873965Z" level=info msg="TearDown network for sandbox \"78637948a433285e6fa8a58ed70f2d3570b7dffe22db8a4b90a18330de73c023\" successfully" May 17 00:29:08.462411 containerd[1620]: time="2025-05-17T00:29:08.462098385Z" level=info msg="StopPodSandbox for \"78637948a433285e6fa8a58ed70f2d3570b7dffe22db8a4b90a18330de73c023\" returns successfully" May 17 00:29:08.463514 systemd[1]: run-netns-cni\x2d5d4e0124\x2d5abd\x2dc086\x2d7120\x2deac4eab753f3.mount: Deactivated successfully. 
May 17 00:29:08.464902 containerd[1620]: time="2025-05-17T00:29:08.463663075Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-2xxwm,Uid:cea38a36-7e2f-400d-bcde-dfc6cb61506d,Namespace:kube-system,Attempt:1,}" May 17 00:29:08.574775 systemd-networkd[1251]: cali38d828d66ed: Link UP May 17 00:29:08.575091 systemd-networkd[1251]: cali38d828d66ed: Gained carrier May 17 00:29:08.589767 containerd[1620]: 2025-05-17 00:29:08.511 [INFO][4475] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--n--556bea0d1e-k8s-coredns--7c65d6cfc9--2xxwm-eth0 coredns-7c65d6cfc9- kube-system cea38a36-7e2f-400d-bcde-dfc6cb61506d 899 0 2025-05-17 00:28:37 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-3-n-556bea0d1e coredns-7c65d6cfc9-2xxwm eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali38d828d66ed [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="246d2c806ff3ec5e1b65e8a39d58555db65965d6d1989ef05c90c4668d248707" Namespace="kube-system" Pod="coredns-7c65d6cfc9-2xxwm" WorkloadEndpoint="ci--4081--3--3--n--556bea0d1e-k8s-coredns--7c65d6cfc9--2xxwm-" May 17 00:29:08.589767 containerd[1620]: 2025-05-17 00:29:08.511 [INFO][4475] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="246d2c806ff3ec5e1b65e8a39d58555db65965d6d1989ef05c90c4668d248707" Namespace="kube-system" Pod="coredns-7c65d6cfc9-2xxwm" WorkloadEndpoint="ci--4081--3--3--n--556bea0d1e-k8s-coredns--7c65d6cfc9--2xxwm-eth0" May 17 00:29:08.589767 containerd[1620]: 2025-05-17 00:29:08.531 [INFO][4487] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="246d2c806ff3ec5e1b65e8a39d58555db65965d6d1989ef05c90c4668d248707" HandleID="k8s-pod-network.246d2c806ff3ec5e1b65e8a39d58555db65965d6d1989ef05c90c4668d248707" Workload="ci--4081--3--3--n--556bea0d1e-k8s-coredns--7c65d6cfc9--2xxwm-eth0" May 17 00:29:08.589767 containerd[1620]: 2025-05-17 00:29:08.531 [INFO][4487] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="246d2c806ff3ec5e1b65e8a39d58555db65965d6d1989ef05c90c4668d248707" HandleID="k8s-pod-network.246d2c806ff3ec5e1b65e8a39d58555db65965d6d1989ef05c90c4668d248707" Workload="ci--4081--3--3--n--556bea0d1e-k8s-coredns--7c65d6cfc9--2xxwm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cf020), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-3-n-556bea0d1e", "pod":"coredns-7c65d6cfc9-2xxwm", "timestamp":"2025-05-17 00:29:08.531723706 +0000 UTC"}, Hostname:"ci-4081-3-3-n-556bea0d1e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:29:08.589767 containerd[1620]: 2025-05-17 00:29:08.532 [INFO][4487] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:29:08.589767 containerd[1620]: 2025-05-17 00:29:08.532 [INFO][4487] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:29:08.589767 containerd[1620]: 2025-05-17 00:29:08.532 [INFO][4487] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-n-556bea0d1e' May 17 00:29:08.589767 containerd[1620]: 2025-05-17 00:29:08.537 [INFO][4487] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.246d2c806ff3ec5e1b65e8a39d58555db65965d6d1989ef05c90c4668d248707" host="ci-4081-3-3-n-556bea0d1e" May 17 00:29:08.589767 containerd[1620]: 2025-05-17 00:29:08.542 [INFO][4487] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-3-n-556bea0d1e" May 17 00:29:08.589767 containerd[1620]: 2025-05-17 00:29:08.547 [INFO][4487] ipam/ipam.go 511: Trying affinity for 192.168.15.64/26 host="ci-4081-3-3-n-556bea0d1e" May 17 00:29:08.589767 containerd[1620]: 2025-05-17 00:29:08.549 [INFO][4487] ipam/ipam.go 158: Attempting to load block cidr=192.168.15.64/26 host="ci-4081-3-3-n-556bea0d1e" May 17 00:29:08.589767 containerd[1620]: 2025-05-17 00:29:08.551 [INFO][4487] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.15.64/26 host="ci-4081-3-3-n-556bea0d1e" May 17 00:29:08.589767 containerd[1620]: 2025-05-17 00:29:08.551 [INFO][4487] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.15.64/26 handle="k8s-pod-network.246d2c806ff3ec5e1b65e8a39d58555db65965d6d1989ef05c90c4668d248707" host="ci-4081-3-3-n-556bea0d1e" May 17 00:29:08.589767 containerd[1620]: 2025-05-17 00:29:08.553 [INFO][4487] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.246d2c806ff3ec5e1b65e8a39d58555db65965d6d1989ef05c90c4668d248707 May 17 00:29:08.589767 containerd[1620]: 2025-05-17 00:29:08.560 [INFO][4487] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.15.64/26 handle="k8s-pod-network.246d2c806ff3ec5e1b65e8a39d58555db65965d6d1989ef05c90c4668d248707" host="ci-4081-3-3-n-556bea0d1e" May 17 00:29:08.589767 containerd[1620]: 2025-05-17 00:29:08.564 [INFO][4487] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.15.66/26] block=192.168.15.64/26 handle="k8s-pod-network.246d2c806ff3ec5e1b65e8a39d58555db65965d6d1989ef05c90c4668d248707" host="ci-4081-3-3-n-556bea0d1e" May 17 00:29:08.589767 containerd[1620]: 2025-05-17 00:29:08.564 [INFO][4487] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.15.66/26] handle="k8s-pod-network.246d2c806ff3ec5e1b65e8a39d58555db65965d6d1989ef05c90c4668d248707" host="ci-4081-3-3-n-556bea0d1e" May 17 00:29:08.589767 containerd[1620]: 2025-05-17 00:29:08.564 [INFO][4487] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 17 00:29:08.589767 containerd[1620]: 2025-05-17 00:29:08.564 [INFO][4487] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.15.66/26] IPv6=[] ContainerID="246d2c806ff3ec5e1b65e8a39d58555db65965d6d1989ef05c90c4668d248707" HandleID="k8s-pod-network.246d2c806ff3ec5e1b65e8a39d58555db65965d6d1989ef05c90c4668d248707" Workload="ci--4081--3--3--n--556bea0d1e-k8s-coredns--7c65d6cfc9--2xxwm-eth0" May 17 00:29:08.594140 containerd[1620]: 2025-05-17 00:29:08.568 [INFO][4475] cni-plugin/k8s.go 418: Populated endpoint ContainerID="246d2c806ff3ec5e1b65e8a39d58555db65965d6d1989ef05c90c4668d248707" Namespace="kube-system" Pod="coredns-7c65d6cfc9-2xxwm" WorkloadEndpoint="ci--4081--3--3--n--556bea0d1e-k8s-coredns--7c65d6cfc9--2xxwm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--556bea0d1e-k8s-coredns--7c65d6cfc9--2xxwm-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"cea38a36-7e2f-400d-bcde-dfc6cb61506d", ResourceVersion:"899", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 28, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-556bea0d1e", ContainerID:"", Pod:"coredns-7c65d6cfc9-2xxwm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.15.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali38d828d66ed", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:29:08.594140 containerd[1620]: 2025-05-17 00:29:08.568 [INFO][4475] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.15.66/32] ContainerID="246d2c806ff3ec5e1b65e8a39d58555db65965d6d1989ef05c90c4668d248707" Namespace="kube-system" Pod="coredns-7c65d6cfc9-2xxwm" WorkloadEndpoint="ci--4081--3--3--n--556bea0d1e-k8s-coredns--7c65d6cfc9--2xxwm-eth0" May 17 00:29:08.594140 containerd[1620]: 2025-05-17 00:29:08.568 [INFO][4475] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali38d828d66ed ContainerID="246d2c806ff3ec5e1b65e8a39d58555db65965d6d1989ef05c90c4668d248707" Namespace="kube-system" Pod="coredns-7c65d6cfc9-2xxwm" WorkloadEndpoint="ci--4081--3--3--n--556bea0d1e-k8s-coredns--7c65d6cfc9--2xxwm-eth0" May 17 00:29:08.594140 containerd[1620]: 2025-05-17 00:29:08.574 [INFO][4475] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="246d2c806ff3ec5e1b65e8a39d58555db65965d6d1989ef05c90c4668d248707" Namespace="kube-system" 
Pod="coredns-7c65d6cfc9-2xxwm" WorkloadEndpoint="ci--4081--3--3--n--556bea0d1e-k8s-coredns--7c65d6cfc9--2xxwm-eth0" May 17 00:29:08.594140 containerd[1620]: 2025-05-17 00:29:08.574 [INFO][4475] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="246d2c806ff3ec5e1b65e8a39d58555db65965d6d1989ef05c90c4668d248707" Namespace="kube-system" Pod="coredns-7c65d6cfc9-2xxwm" WorkloadEndpoint="ci--4081--3--3--n--556bea0d1e-k8s-coredns--7c65d6cfc9--2xxwm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--556bea0d1e-k8s-coredns--7c65d6cfc9--2xxwm-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"cea38a36-7e2f-400d-bcde-dfc6cb61506d", ResourceVersion:"899", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 28, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-556bea0d1e", ContainerID:"246d2c806ff3ec5e1b65e8a39d58555db65965d6d1989ef05c90c4668d248707", Pod:"coredns-7c65d6cfc9-2xxwm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.15.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali38d828d66ed", MAC:"42:79:cb:7b:9b:83", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:29:08.594140 containerd[1620]: 2025-05-17 00:29:08.585 [INFO][4475] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="246d2c806ff3ec5e1b65e8a39d58555db65965d6d1989ef05c90c4668d248707" Namespace="kube-system" Pod="coredns-7c65d6cfc9-2xxwm" WorkloadEndpoint="ci--4081--3--3--n--556bea0d1e-k8s-coredns--7c65d6cfc9--2xxwm-eth0" May 17 00:29:08.610051 containerd[1620]: time="2025-05-17T00:29:08.609823897Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:29:08.610051 containerd[1620]: time="2025-05-17T00:29:08.609913815Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:29:08.610693 containerd[1620]: time="2025-05-17T00:29:08.610204161Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:29:08.611080 containerd[1620]: time="2025-05-17T00:29:08.610309820Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:29:08.708257 containerd[1620]: time="2025-05-17T00:29:08.708224521Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-2xxwm,Uid:cea38a36-7e2f-400d-bcde-dfc6cb61506d,Namespace:kube-system,Attempt:1,} returns sandbox id \"246d2c806ff3ec5e1b65e8a39d58555db65965d6d1989ef05c90c4668d248707\"" May 17 00:29:08.710699 containerd[1620]: time="2025-05-17T00:29:08.710489827Z" level=info msg="CreateContainer within sandbox \"246d2c806ff3ec5e1b65e8a39d58555db65965d6d1989ef05c90c4668d248707\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 17 00:29:08.734806 containerd[1620]: time="2025-05-17T00:29:08.734765077Z" level=info msg="CreateContainer within sandbox \"246d2c806ff3ec5e1b65e8a39d58555db65965d6d1989ef05c90c4668d248707\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7731994fbcaae806e1893ac2296cbc960bcf16ef7e53bf9aea6c69dacc957d51\"" May 17 00:29:08.736722 containerd[1620]: time="2025-05-17T00:29:08.736696036Z" level=info msg="StartContainer for \"7731994fbcaae806e1893ac2296cbc960bcf16ef7e53bf9aea6c69dacc957d51\"" May 17 00:29:08.850969 containerd[1620]: time="2025-05-17T00:29:08.850926927Z" level=info msg="StartContainer for \"7731994fbcaae806e1893ac2296cbc960bcf16ef7e53bf9aea6c69dacc957d51\" returns successfully" May 17 00:29:09.362806 containerd[1620]: time="2025-05-17T00:29:09.362757821Z" level=info msg="StopPodSandbox for \"6d5b6a71462253a75e687c6f7e1707e9bc9a8b91c3f016007cd47ed84dea64b4\"" May 17 00:29:09.363240 containerd[1620]: time="2025-05-17T00:29:09.363174143Z" level=info msg="StopPodSandbox for \"fe17dae483774da4ab6cf9076c438f0a7b25327c06e1fc021de4c38b2ea436fc\"" May 17 00:29:09.466337 systemd[1]: run-containerd-runc-k8s.io-246d2c806ff3ec5e1b65e8a39d58555db65965d6d1989ef05c90c4668d248707-runc.yS9Pvd.mount: Deactivated successfully. May 17 00:29:09.515598 containerd[1620]: 2025-05-17 00:29:09.448 [INFO][4604] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6d5b6a71462253a75e687c6f7e1707e9bc9a8b91c3f016007cd47ed84dea64b4" May 17 00:29:09.515598 containerd[1620]: 2025-05-17 00:29:09.448 [INFO][4604] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6d5b6a71462253a75e687c6f7e1707e9bc9a8b91c3f016007cd47ed84dea64b4" iface="eth0" netns="/var/run/netns/cni-b853232d-aee9-49c1-ee12-5f77b18c771e" May 17 00:29:09.515598 containerd[1620]: 2025-05-17 00:29:09.448 [INFO][4604] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6d5b6a71462253a75e687c6f7e1707e9bc9a8b91c3f016007cd47ed84dea64b4" iface="eth0" netns="/var/run/netns/cni-b853232d-aee9-49c1-ee12-5f77b18c771e" May 17 00:29:09.515598 containerd[1620]: 2025-05-17 00:29:09.448 [INFO][4604] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="6d5b6a71462253a75e687c6f7e1707e9bc9a8b91c3f016007cd47ed84dea64b4" iface="eth0" netns="/var/run/netns/cni-b853232d-aee9-49c1-ee12-5f77b18c771e" May 17 00:29:09.515598 containerd[1620]: 2025-05-17 00:29:09.448 [INFO][4604] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6d5b6a71462253a75e687c6f7e1707e9bc9a8b91c3f016007cd47ed84dea64b4" May 17 00:29:09.515598 containerd[1620]: 2025-05-17 00:29:09.449 [INFO][4604] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6d5b6a71462253a75e687c6f7e1707e9bc9a8b91c3f016007cd47ed84dea64b4" May 17 00:29:09.515598 containerd[1620]: 2025-05-17 00:29:09.498 [INFO][4616] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6d5b6a71462253a75e687c6f7e1707e9bc9a8b91c3f016007cd47ed84dea64b4" HandleID="k8s-pod-network.6d5b6a71462253a75e687c6f7e1707e9bc9a8b91c3f016007cd47ed84dea64b4" Workload="ci--4081--3--3--n--556bea0d1e-k8s-coredns--7c65d6cfc9--ppcxd-eth0" May 17 00:29:09.515598 containerd[1620]: 2025-05-17 00:29:09.499 [INFO][4616] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:29:09.515598 containerd[1620]: 2025-05-17 00:29:09.499 [INFO][4616] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:29:09.515598 containerd[1620]: 2025-05-17 00:29:09.508 [WARNING][4616] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6d5b6a71462253a75e687c6f7e1707e9bc9a8b91c3f016007cd47ed84dea64b4" HandleID="k8s-pod-network.6d5b6a71462253a75e687c6f7e1707e9bc9a8b91c3f016007cd47ed84dea64b4" Workload="ci--4081--3--3--n--556bea0d1e-k8s-coredns--7c65d6cfc9--ppcxd-eth0" May 17 00:29:09.515598 containerd[1620]: 2025-05-17 00:29:09.508 [INFO][4616] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6d5b6a71462253a75e687c6f7e1707e9bc9a8b91c3f016007cd47ed84dea64b4" HandleID="k8s-pod-network.6d5b6a71462253a75e687c6f7e1707e9bc9a8b91c3f016007cd47ed84dea64b4" Workload="ci--4081--3--3--n--556bea0d1e-k8s-coredns--7c65d6cfc9--ppcxd-eth0" May 17 00:29:09.515598 containerd[1620]: 2025-05-17 00:29:09.510 [INFO][4616] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:29:09.515598 containerd[1620]: 2025-05-17 00:29:09.512 [INFO][4604] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6d5b6a71462253a75e687c6f7e1707e9bc9a8b91c3f016007cd47ed84dea64b4" May 17 00:29:09.518842 containerd[1620]: time="2025-05-17T00:29:09.516445886Z" level=info msg="TearDown network for sandbox \"6d5b6a71462253a75e687c6f7e1707e9bc9a8b91c3f016007cd47ed84dea64b4\" successfully" May 17 00:29:09.518842 containerd[1620]: time="2025-05-17T00:29:09.516472806Z" level=info msg="StopPodSandbox for \"6d5b6a71462253a75e687c6f7e1707e9bc9a8b91c3f016007cd47ed84dea64b4\" returns successfully" May 17 00:29:09.519175 systemd[1]: run-netns-cni\x2db853232d\x2daee9\x2d49c1\x2dee12\x2d5f77b18c771e.mount: Deactivated successfully. May 17 00:29:09.520448 containerd[1620]: time="2025-05-17T00:29:09.520416274Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-ppcxd,Uid:2834e46b-4ada-426a-b1e7-b513f359ad04,Namespace:kube-system,Attempt:1,}" May 17 00:29:09.526899 containerd[1620]: 2025-05-17 00:29:09.451 [INFO][4603] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fe17dae483774da4ab6cf9076c438f0a7b25327c06e1fc021de4c38b2ea436fc" May 17 00:29:09.526899 containerd[1620]: 2025-05-17 00:29:09.454 [INFO][4603] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="fe17dae483774da4ab6cf9076c438f0a7b25327c06e1fc021de4c38b2ea436fc" iface="eth0" netns="/var/run/netns/cni-8ba02484-6407-8e97-d469-535aaed079e7" May 17 00:29:09.526899 containerd[1620]: 2025-05-17 00:29:09.454 [INFO][4603] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="fe17dae483774da4ab6cf9076c438f0a7b25327c06e1fc021de4c38b2ea436fc" iface="eth0" netns="/var/run/netns/cni-8ba02484-6407-8e97-d469-535aaed079e7" May 17 00:29:09.526899 containerd[1620]: 2025-05-17 00:29:09.454 [INFO][4603] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="fe17dae483774da4ab6cf9076c438f0a7b25327c06e1fc021de4c38b2ea436fc" iface="eth0" netns="/var/run/netns/cni-8ba02484-6407-8e97-d469-535aaed079e7" May 17 00:29:09.526899 containerd[1620]: 2025-05-17 00:29:09.455 [INFO][4603] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fe17dae483774da4ab6cf9076c438f0a7b25327c06e1fc021de4c38b2ea436fc" May 17 00:29:09.526899 containerd[1620]: 2025-05-17 00:29:09.455 [INFO][4603] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fe17dae483774da4ab6cf9076c438f0a7b25327c06e1fc021de4c38b2ea436fc" May 17 00:29:09.526899 containerd[1620]: 2025-05-17 00:29:09.506 [INFO][4621] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fe17dae483774da4ab6cf9076c438f0a7b25327c06e1fc021de4c38b2ea436fc" HandleID="k8s-pod-network.fe17dae483774da4ab6cf9076c438f0a7b25327c06e1fc021de4c38b2ea436fc" Workload="ci--4081--3--3--n--556bea0d1e-k8s-goldmane--8f77d7b6c--7b9h8-eth0" May 17 00:29:09.526899 containerd[1620]: 2025-05-17 00:29:09.507 [INFO][4621] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:29:09.526899 containerd[1620]: 2025-05-17 00:29:09.510 [INFO][4621] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:29:09.526899 containerd[1620]: 2025-05-17 00:29:09.518 [WARNING][4621] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="fe17dae483774da4ab6cf9076c438f0a7b25327c06e1fc021de4c38b2ea436fc" HandleID="k8s-pod-network.fe17dae483774da4ab6cf9076c438f0a7b25327c06e1fc021de4c38b2ea436fc" Workload="ci--4081--3--3--n--556bea0d1e-k8s-goldmane--8f77d7b6c--7b9h8-eth0" May 17 00:29:09.526899 containerd[1620]: 2025-05-17 00:29:09.520 [INFO][4621] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fe17dae483774da4ab6cf9076c438f0a7b25327c06e1fc021de4c38b2ea436fc" HandleID="k8s-pod-network.fe17dae483774da4ab6cf9076c438f0a7b25327c06e1fc021de4c38b2ea436fc" Workload="ci--4081--3--3--n--556bea0d1e-k8s-goldmane--8f77d7b6c--7b9h8-eth0" May 17 00:29:09.526899 containerd[1620]: 2025-05-17 00:29:09.523 [INFO][4621] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:29:09.526899 containerd[1620]: 2025-05-17 00:29:09.524 [INFO][4603] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="fe17dae483774da4ab6cf9076c438f0a7b25327c06e1fc021de4c38b2ea436fc" May 17 00:29:09.531267 containerd[1620]: time="2025-05-17T00:29:09.527316485Z" level=info msg="TearDown network for sandbox \"fe17dae483774da4ab6cf9076c438f0a7b25327c06e1fc021de4c38b2ea436fc\" successfully" May 17 00:29:09.531267 containerd[1620]: time="2025-05-17T00:29:09.527336944Z" level=info msg="StopPodSandbox for \"fe17dae483774da4ab6cf9076c438f0a7b25327c06e1fc021de4c38b2ea436fc\" returns successfully" May 17 00:29:09.531267 containerd[1620]: time="2025-05-17T00:29:09.527805905Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-8f77d7b6c-7b9h8,Uid:fbe11bdf-a1f6-4b6d-b48f-fb37ea5d6cef,Namespace:calico-system,Attempt:1,}" May 17 00:29:09.532337 systemd[1]: run-netns-cni\x2d8ba02484\x2d6407\x2d8e97\x2dd469\x2d535aaed079e7.mount: Deactivated successfully. May 17 00:29:09.648205 kubelet[2929]: I0517 00:29:09.647054 2929 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-2xxwm" podStartSLOduration=32.646847849 podStartE2EDuration="32.646847849s" podCreationTimestamp="2025-05-17 00:28:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:29:09.646266216 +0000 UTC m=+39.368696531" watchObservedRunningTime="2025-05-17 00:29:09.646847849 +0000 UTC m=+39.369278164" May 17 00:29:09.714694 systemd-networkd[1251]: califf3e24f3903: Link UP May 17 00:29:09.721232 systemd-networkd[1251]: califf3e24f3903: Gained carrier May 17 00:29:09.739182 containerd[1620]: 2025-05-17 00:29:09.592 [INFO][4630] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--n--556bea0d1e-k8s-coredns--7c65d6cfc9--ppcxd-eth0 coredns-7c65d6cfc9- kube-system 2834e46b-4ada-426a-b1e7-b513f359ad04 910 0 2025-05-17 00:28:37 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-3-n-556bea0d1e coredns-7c65d6cfc9-ppcxd eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] califf3e24f3903 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="8902633197f01b25657dbd90992e4ba79dfb64a7b45cf717991eb6044632ae15" Namespace="kube-system" Pod="coredns-7c65d6cfc9-ppcxd" WorkloadEndpoint="ci--4081--3--3--n--556bea0d1e-k8s-coredns--7c65d6cfc9--ppcxd-" May 17 00:29:09.739182 containerd[1620]: 2025-05-17 00:29:09.592 [INFO][4630] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8902633197f01b25657dbd90992e4ba79dfb64a7b45cf717991eb6044632ae15" Namespace="kube-system" Pod="coredns-7c65d6cfc9-ppcxd" WorkloadEndpoint="ci--4081--3--3--n--556bea0d1e-k8s-coredns--7c65d6cfc9--ppcxd-eth0" May 17 00:29:09.739182 containerd[1620]: 2025-05-17 00:29:09.634 [INFO][4654] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8902633197f01b25657dbd90992e4ba79dfb64a7b45cf717991eb6044632ae15" HandleID="k8s-pod-network.8902633197f01b25657dbd90992e4ba79dfb64a7b45cf717991eb6044632ae15" Workload="ci--4081--3--3--n--556bea0d1e-k8s-coredns--7c65d6cfc9--ppcxd-eth0" May 17 00:29:09.739182 containerd[1620]: 2025-05-17 00:29:09.634 [INFO][4654] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8902633197f01b25657dbd90992e4ba79dfb64a7b45cf717991eb6044632ae15" 
HandleID="k8s-pod-network.8902633197f01b25657dbd90992e4ba79dfb64a7b45cf717991eb6044632ae15" Workload="ci--4081--3--3--n--556bea0d1e-k8s-coredns--7c65d6cfc9--ppcxd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ad4d0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-3-n-556bea0d1e", "pod":"coredns-7c65d6cfc9-ppcxd", "timestamp":"2025-05-17 00:29:09.634775682 +0000 UTC"}, Hostname:"ci-4081-3-3-n-556bea0d1e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:29:09.739182 containerd[1620]: 2025-05-17 00:29:09.634 [INFO][4654] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:29:09.739182 containerd[1620]: 2025-05-17 00:29:09.634 [INFO][4654] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:29:09.739182 containerd[1620]: 2025-05-17 00:29:09.635 [INFO][4654] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-n-556bea0d1e' May 17 00:29:09.739182 containerd[1620]: 2025-05-17 00:29:09.661 [INFO][4654] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8902633197f01b25657dbd90992e4ba79dfb64a7b45cf717991eb6044632ae15" host="ci-4081-3-3-n-556bea0d1e" May 17 00:29:09.739182 containerd[1620]: 2025-05-17 00:29:09.678 [INFO][4654] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-3-n-556bea0d1e" May 17 00:29:09.739182 containerd[1620]: 2025-05-17 00:29:09.683 [INFO][4654] ipam/ipam.go 511: Trying affinity for 192.168.15.64/26 host="ci-4081-3-3-n-556bea0d1e" May 17 00:29:09.739182 containerd[1620]: 2025-05-17 00:29:09.686 [INFO][4654] ipam/ipam.go 158: Attempting to load block cidr=192.168.15.64/26 host="ci-4081-3-3-n-556bea0d1e" May 17 00:29:09.739182 containerd[1620]: 2025-05-17 00:29:09.690 [INFO][4654] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.15.64/26 host="ci-4081-3-3-n-556bea0d1e" May 17 00:29:09.739182 containerd[1620]: 2025-05-17 00:29:09.690 [INFO][4654] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.15.64/26 handle="k8s-pod-network.8902633197f01b25657dbd90992e4ba79dfb64a7b45cf717991eb6044632ae15" host="ci-4081-3-3-n-556bea0d1e" May 17 00:29:09.739182 containerd[1620]: 2025-05-17 00:29:09.692 [INFO][4654] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.8902633197f01b25657dbd90992e4ba79dfb64a7b45cf717991eb6044632ae15 May 17 00:29:09.739182 containerd[1620]: 2025-05-17 00:29:09.699 [INFO][4654] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.15.64/26 handle="k8s-pod-network.8902633197f01b25657dbd90992e4ba79dfb64a7b45cf717991eb6044632ae15" host="ci-4081-3-3-n-556bea0d1e" May 17 00:29:09.739182 containerd[1620]: 2025-05-17 00:29:09.706 [INFO][4654] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.15.67/26] block=192.168.15.64/26 handle="k8s-pod-network.8902633197f01b25657dbd90992e4ba79dfb64a7b45cf717991eb6044632ae15" host="ci-4081-3-3-n-556bea0d1e" May 17 00:29:09.739182 containerd[1620]: 2025-05-17 00:29:09.706 [INFO][4654] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.15.67/26] handle="k8s-pod-network.8902633197f01b25657dbd90992e4ba79dfb64a7b45cf717991eb6044632ae15" host="ci-4081-3-3-n-556bea0d1e" May 17 00:29:09.739182 containerd[1620]: 2025-05-17 00:29:09.706 [INFO][4654] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 17 00:29:09.739182 containerd[1620]: 2025-05-17 00:29:09.706 [INFO][4654] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.15.67/26] IPv6=[] ContainerID="8902633197f01b25657dbd90992e4ba79dfb64a7b45cf717991eb6044632ae15" HandleID="k8s-pod-network.8902633197f01b25657dbd90992e4ba79dfb64a7b45cf717991eb6044632ae15" Workload="ci--4081--3--3--n--556bea0d1e-k8s-coredns--7c65d6cfc9--ppcxd-eth0" May 17 00:29:09.742098 containerd[1620]: 2025-05-17 00:29:09.711 [INFO][4630] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8902633197f01b25657dbd90992e4ba79dfb64a7b45cf717991eb6044632ae15" Namespace="kube-system" Pod="coredns-7c65d6cfc9-ppcxd" WorkloadEndpoint="ci--4081--3--3--n--556bea0d1e-k8s-coredns--7c65d6cfc9--ppcxd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--556bea0d1e-k8s-coredns--7c65d6cfc9--ppcxd-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"2834e46b-4ada-426a-b1e7-b513f359ad04", ResourceVersion:"910", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 28, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-556bea0d1e", ContainerID:"", Pod:"coredns-7c65d6cfc9-ppcxd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.15.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califf3e24f3903", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:29:09.742098 containerd[1620]: 2025-05-17 00:29:09.711 [INFO][4630] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.15.67/32] ContainerID="8902633197f01b25657dbd90992e4ba79dfb64a7b45cf717991eb6044632ae15" Namespace="kube-system" Pod="coredns-7c65d6cfc9-ppcxd" WorkloadEndpoint="ci--4081--3--3--n--556bea0d1e-k8s-coredns--7c65d6cfc9--ppcxd-eth0" May 17 00:29:09.742098 containerd[1620]: 2025-05-17 00:29:09.711 [INFO][4630] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califf3e24f3903 ContainerID="8902633197f01b25657dbd90992e4ba79dfb64a7b45cf717991eb6044632ae15" Namespace="kube-system" Pod="coredns-7c65d6cfc9-ppcxd" WorkloadEndpoint="ci--4081--3--3--n--556bea0d1e-k8s-coredns--7c65d6cfc9--ppcxd-eth0" May 17 00:29:09.742098 containerd[1620]: 2025-05-17 00:29:09.716 [INFO][4630] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8902633197f01b25657dbd90992e4ba79dfb64a7b45cf717991eb6044632ae15" Namespace="kube-system" 
Pod="coredns-7c65d6cfc9-ppcxd" WorkloadEndpoint="ci--4081--3--3--n--556bea0d1e-k8s-coredns--7c65d6cfc9--ppcxd-eth0" May 17 00:29:09.742098 containerd[1620]: 2025-05-17 00:29:09.717 [INFO][4630] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8902633197f01b25657dbd90992e4ba79dfb64a7b45cf717991eb6044632ae15" Namespace="kube-system" Pod="coredns-7c65d6cfc9-ppcxd" WorkloadEndpoint="ci--4081--3--3--n--556bea0d1e-k8s-coredns--7c65d6cfc9--ppcxd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--556bea0d1e-k8s-coredns--7c65d6cfc9--ppcxd-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"2834e46b-4ada-426a-b1e7-b513f359ad04", ResourceVersion:"910", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 28, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-556bea0d1e", ContainerID:"8902633197f01b25657dbd90992e4ba79dfb64a7b45cf717991eb6044632ae15", Pod:"coredns-7c65d6cfc9-ppcxd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.15.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califf3e24f3903", MAC:"a2:3e:6b:8e:56:d2", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:29:09.742098 containerd[1620]: 2025-05-17 00:29:09.737 [INFO][4630] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8902633197f01b25657dbd90992e4ba79dfb64a7b45cf717991eb6044632ae15" Namespace="kube-system" Pod="coredns-7c65d6cfc9-ppcxd" WorkloadEndpoint="ci--4081--3--3--n--556bea0d1e-k8s-coredns--7c65d6cfc9--ppcxd-eth0" May 17 00:29:09.762475 containerd[1620]: time="2025-05-17T00:29:09.762272077Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:29:09.762475 containerd[1620]: time="2025-05-17T00:29:09.762325608Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:29:09.762475 containerd[1620]: time="2025-05-17T00:29:09.762338131Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:29:09.762475 containerd[1620]: time="2025-05-17T00:29:09.762412150Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:29:09.846686 containerd[1620]: time="2025-05-17T00:29:09.846657721Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-ppcxd,Uid:2834e46b-4ada-426a-b1e7-b513f359ad04,Namespace:kube-system,Attempt:1,} returns sandbox id \"8902633197f01b25657dbd90992e4ba79dfb64a7b45cf717991eb6044632ae15\"" May 17 00:29:09.857052 containerd[1620]: time="2025-05-17T00:29:09.854509099Z" level=info msg="CreateContainer within sandbox \"8902633197f01b25657dbd90992e4ba79dfb64a7b45cf717991eb6044632ae15\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 17 00:29:09.868951 systemd-networkd[1251]: cali722291eb0f7: Link UP May 17 00:29:09.870230 systemd-networkd[1251]: cali722291eb0f7: Gained carrier May 17 00:29:09.899156 containerd[1620]: 2025-05-17 00:29:09.608 [INFO][4640] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--n--556bea0d1e-k8s-goldmane--8f77d7b6c--7b9h8-eth0 goldmane-8f77d7b6c- calico-system fbe11bdf-a1f6-4b6d-b48f-fb37ea5d6cef 911 0 2025-05-17 00:28:47 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:8f77d7b6c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081-3-3-n-556bea0d1e goldmane-8f77d7b6c-7b9h8 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali722291eb0f7 [] [] }} ContainerID="64af4c07793a9d80ce13c4ae3eb0255c34e1379d164d8fb60a0b476865d766f9" Namespace="calico-system" Pod="goldmane-8f77d7b6c-7b9h8" WorkloadEndpoint="ci--4081--3--3--n--556bea0d1e-k8s-goldmane--8f77d7b6c--7b9h8-" May 17 00:29:09.899156 containerd[1620]: 2025-05-17 00:29:09.608 [INFO][4640] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="64af4c07793a9d80ce13c4ae3eb0255c34e1379d164d8fb60a0b476865d766f9" Namespace="calico-system" Pod="goldmane-8f77d7b6c-7b9h8" WorkloadEndpoint="ci--4081--3--3--n--556bea0d1e-k8s-goldmane--8f77d7b6c--7b9h8-eth0" May 17 00:29:09.899156 containerd[1620]: 2025-05-17 00:29:09.664 [INFO][4659] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="64af4c07793a9d80ce13c4ae3eb0255c34e1379d164d8fb60a0b476865d766f9" HandleID="k8s-pod-network.64af4c07793a9d80ce13c4ae3eb0255c34e1379d164d8fb60a0b476865d766f9" Workload="ci--4081--3--3--n--556bea0d1e-k8s-goldmane--8f77d7b6c--7b9h8-eth0" May 17 00:29:09.899156 containerd[1620]: 2025-05-17 00:29:09.664 [INFO][4659] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="64af4c07793a9d80ce13c4ae3eb0255c34e1379d164d8fb60a0b476865d766f9" HandleID="k8s-pod-network.64af4c07793a9d80ce13c4ae3eb0255c34e1379d164d8fb60a0b476865d766f9" Workload="ci--4081--3--3--n--556bea0d1e-k8s-goldmane--8f77d7b6c--7b9h8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000326cb0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-3-n-556bea0d1e", "pod":"goldmane-8f77d7b6c-7b9h8", "timestamp":"2025-05-17 00:29:09.663943331 +0000 UTC"}, Hostname:"ci-4081-3-3-n-556bea0d1e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:29:09.899156 containerd[1620]: 2025-05-17 00:29:09.664 [INFO][4659] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
May 17 00:29:09.899156 containerd[1620]: 2025-05-17 00:29:09.706 [INFO][4659] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:29:09.899156 containerd[1620]: 2025-05-17 00:29:09.706 [INFO][4659] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-n-556bea0d1e' May 17 00:29:09.899156 containerd[1620]: 2025-05-17 00:29:09.761 [INFO][4659] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.64af4c07793a9d80ce13c4ae3eb0255c34e1379d164d8fb60a0b476865d766f9" host="ci-4081-3-3-n-556bea0d1e" May 17 00:29:09.899156 containerd[1620]: 2025-05-17 00:29:09.777 [INFO][4659] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-3-n-556bea0d1e" May 17 00:29:09.899156 containerd[1620]: 2025-05-17 00:29:09.784 [INFO][4659] ipam/ipam.go 511: Trying affinity for 192.168.15.64/26 host="ci-4081-3-3-n-556bea0d1e" May 17 00:29:09.899156 containerd[1620]: 2025-05-17 00:29:09.786 [INFO][4659] ipam/ipam.go 158: Attempting to load block cidr=192.168.15.64/26 host="ci-4081-3-3-n-556bea0d1e" May 17 00:29:09.899156 containerd[1620]: 2025-05-17 00:29:09.788 [INFO][4659] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.15.64/26 host="ci-4081-3-3-n-556bea0d1e" May 17 00:29:09.899156 containerd[1620]: 2025-05-17 00:29:09.788 [INFO][4659] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.15.64/26 handle="k8s-pod-network.64af4c07793a9d80ce13c4ae3eb0255c34e1379d164d8fb60a0b476865d766f9" host="ci-4081-3-3-n-556bea0d1e" May 17 00:29:09.899156 containerd[1620]: 2025-05-17 00:29:09.790 [INFO][4659] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.64af4c07793a9d80ce13c4ae3eb0255c34e1379d164d8fb60a0b476865d766f9 May 17 00:29:09.899156 containerd[1620]: 2025-05-17 00:29:09.806 [INFO][4659] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.15.64/26 handle="k8s-pod-network.64af4c07793a9d80ce13c4ae3eb0255c34e1379d164d8fb60a0b476865d766f9" host="ci-4081-3-3-n-556bea0d1e" May 17 00:29:09.899156 containerd[1620]: 2025-05-17 00:29:09.827 [INFO][4659] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.15.68/26] block=192.168.15.64/26 handle="k8s-pod-network.64af4c07793a9d80ce13c4ae3eb0255c34e1379d164d8fb60a0b476865d766f9" host="ci-4081-3-3-n-556bea0d1e" May 17 00:29:09.899156 containerd[1620]: 2025-05-17 00:29:09.828 [INFO][4659] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.15.68/26] handle="k8s-pod-network.64af4c07793a9d80ce13c4ae3eb0255c34e1379d164d8fb60a0b476865d766f9" host="ci-4081-3-3-n-556bea0d1e" May 17 00:29:09.899156 containerd[1620]: 2025-05-17 00:29:09.829 [INFO][4659] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
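These entries also show why allocations on one node come out strictly one after another: handler [4659] (the goldmane sandbox) logs "About to acquire host-wide IPAM lock" at 00:29:09.664 but only acquires it at 00:29:09.706, the moment handler [4654] (coredns-7c65d6cfc9-ppcxd) releases it, and then claims the next free address, 192.168.15.68. A minimal Go sketch of that serialization, with illustrative names only, not Calico's implementation:

package main

import (
    "fmt"
    "sync"
)

// hostIPAM is a toy stand-in for the allocator behind the "host-wide IPAM
// lock" messages: concurrent CNI ADDs on one node are serialized, so each
// request sees the block state left behind by the previous one.
type hostIPAM struct {
    mu   sync.Mutex
    next int // next free ordinal in 192.168.15.64/26 (illustrative only)
}

func (h *hostIPAM) allocate() string {
    h.mu.Lock()         // "About to acquire host-wide IPAM lock." -> "Acquired ..."
    defer h.mu.Unlock() // "Released host-wide IPAM lock."
    ip := fmt.Sprintf("192.168.15.%d/26", 64+h.next)
    h.next++
    return ip
}

func main() {
    ipam := &hostIPAM{next: 2} // assume .64 and .65 already in use on this node
    pods := []string{"coredns-7c65d6cfc9-2xxwm", "coredns-7c65d6cfc9-ppcxd", "goldmane-8f77d7b6c-7b9h8"}
    var wg sync.WaitGroup
    for _, pod := range pods {
        wg.Add(1)
        go func(p string) {
            defer wg.Done()
            // Which pod wins the lock first is up to scheduling, but the
            // addresses themselves are handed out in order: .66, .67, .68.
            fmt.Println(p, ipam.allocate())
        }(pod)
    }
    wg.Wait()
}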
May 17 00:29:09.899156 containerd[1620]: 2025-05-17 00:29:09.829 [INFO][4659] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.15.68/26] IPv6=[] ContainerID="64af4c07793a9d80ce13c4ae3eb0255c34e1379d164d8fb60a0b476865d766f9" HandleID="k8s-pod-network.64af4c07793a9d80ce13c4ae3eb0255c34e1379d164d8fb60a0b476865d766f9" Workload="ci--4081--3--3--n--556bea0d1e-k8s-goldmane--8f77d7b6c--7b9h8-eth0" May 17 00:29:09.901734 containerd[1620]: 2025-05-17 00:29:09.838 [INFO][4640] cni-plugin/k8s.go 418: Populated endpoint ContainerID="64af4c07793a9d80ce13c4ae3eb0255c34e1379d164d8fb60a0b476865d766f9" Namespace="calico-system" Pod="goldmane-8f77d7b6c-7b9h8" WorkloadEndpoint="ci--4081--3--3--n--556bea0d1e-k8s-goldmane--8f77d7b6c--7b9h8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--556bea0d1e-k8s-goldmane--8f77d7b6c--7b9h8-eth0", GenerateName:"goldmane-8f77d7b6c-", Namespace:"calico-system", SelfLink:"", UID:"fbe11bdf-a1f6-4b6d-b48f-fb37ea5d6cef", ResourceVersion:"911", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 28, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"8f77d7b6c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-556bea0d1e", ContainerID:"", Pod:"goldmane-8f77d7b6c-7b9h8", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.15.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali722291eb0f7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:29:09.901734 containerd[1620]: 2025-05-17 00:29:09.840 [INFO][4640] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.15.68/32] ContainerID="64af4c07793a9d80ce13c4ae3eb0255c34e1379d164d8fb60a0b476865d766f9" Namespace="calico-system" Pod="goldmane-8f77d7b6c-7b9h8" WorkloadEndpoint="ci--4081--3--3--n--556bea0d1e-k8s-goldmane--8f77d7b6c--7b9h8-eth0" May 17 00:29:09.901734 containerd[1620]: 2025-05-17 00:29:09.840 [INFO][4640] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali722291eb0f7 ContainerID="64af4c07793a9d80ce13c4ae3eb0255c34e1379d164d8fb60a0b476865d766f9" Namespace="calico-system" Pod="goldmane-8f77d7b6c-7b9h8" WorkloadEndpoint="ci--4081--3--3--n--556bea0d1e-k8s-goldmane--8f77d7b6c--7b9h8-eth0" May 17 00:29:09.901734 containerd[1620]: 2025-05-17 00:29:09.871 [INFO][4640] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="64af4c07793a9d80ce13c4ae3eb0255c34e1379d164d8fb60a0b476865d766f9" Namespace="calico-system" Pod="goldmane-8f77d7b6c-7b9h8" WorkloadEndpoint="ci--4081--3--3--n--556bea0d1e-k8s-goldmane--8f77d7b6c--7b9h8-eth0" May 17 00:29:09.901734 containerd[1620]: 2025-05-17 00:29:09.871 [INFO][4640] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="64af4c07793a9d80ce13c4ae3eb0255c34e1379d164d8fb60a0b476865d766f9" 
Namespace="calico-system" Pod="goldmane-8f77d7b6c-7b9h8" WorkloadEndpoint="ci--4081--3--3--n--556bea0d1e-k8s-goldmane--8f77d7b6c--7b9h8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--556bea0d1e-k8s-goldmane--8f77d7b6c--7b9h8-eth0", GenerateName:"goldmane-8f77d7b6c-", Namespace:"calico-system", SelfLink:"", UID:"fbe11bdf-a1f6-4b6d-b48f-fb37ea5d6cef", ResourceVersion:"911", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 28, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"8f77d7b6c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-556bea0d1e", ContainerID:"64af4c07793a9d80ce13c4ae3eb0255c34e1379d164d8fb60a0b476865d766f9", Pod:"goldmane-8f77d7b6c-7b9h8", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.15.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali722291eb0f7", MAC:"8a:0c:97:20:bb:55", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:29:09.901734 containerd[1620]: 2025-05-17 00:29:09.892 [INFO][4640] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="64af4c07793a9d80ce13c4ae3eb0255c34e1379d164d8fb60a0b476865d766f9" Namespace="calico-system" Pod="goldmane-8f77d7b6c-7b9h8" WorkloadEndpoint="ci--4081--3--3--n--556bea0d1e-k8s-goldmane--8f77d7b6c--7b9h8-eth0" May 17 00:29:09.906229 containerd[1620]: time="2025-05-17T00:29:09.906190309Z" level=info msg="CreateContainer within sandbox \"8902633197f01b25657dbd90992e4ba79dfb64a7b45cf717991eb6044632ae15\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a44ef40e7893b334d25feb0dfad1010bf7f1f459b34eeee65c4d6233b026369e\"" May 17 00:29:09.908228 containerd[1620]: time="2025-05-17T00:29:09.908199544Z" level=info msg="StartContainer for \"a44ef40e7893b334d25feb0dfad1010bf7f1f459b34eeee65c4d6233b026369e\"" May 17 00:29:09.968806 containerd[1620]: time="2025-05-17T00:29:09.968486751Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:29:09.968806 containerd[1620]: time="2025-05-17T00:29:09.968649756Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:29:09.969862 containerd[1620]: time="2025-05-17T00:29:09.968672660Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:29:09.969862 containerd[1620]: time="2025-05-17T00:29:09.969763608Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:29:10.008612 containerd[1620]: time="2025-05-17T00:29:10.008462873Z" level=info msg="StartContainer for \"a44ef40e7893b334d25feb0dfad1010bf7f1f459b34eeee65c4d6233b026369e\" returns successfully" May 17 00:29:10.088582 containerd[1620]: time="2025-05-17T00:29:10.088548286Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-8f77d7b6c-7b9h8,Uid:fbe11bdf-a1f6-4b6d-b48f-fb37ea5d6cef,Namespace:calico-system,Attempt:1,} returns sandbox id \"64af4c07793a9d80ce13c4ae3eb0255c34e1379d164d8fb60a0b476865d766f9\"" May 17 00:29:10.091524 containerd[1620]: time="2025-05-17T00:29:10.091370057Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 17 00:29:10.368505 containerd[1620]: time="2025-05-17T00:29:10.368230072Z" level=info msg="StopPodSandbox for \"70ec868d271dd1be74cf28e30e6cd353963f0ab338e242d0f7dcd58ec1993bf9\"" May 17 00:29:10.373423 containerd[1620]: time="2025-05-17T00:29:10.371535490Z" level=info msg="StopPodSandbox for \"ec732d11f2bf14b409433f41dbbf61dbba0fe798526cc79e12a921e888b50f06\"" May 17 00:29:10.409057 containerd[1620]: time="2025-05-17T00:29:10.407740708Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:29:10.417391 containerd[1620]: time="2025-05-17T00:29:10.417344499Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:29:10.417473 containerd[1620]: time="2025-05-17T00:29:10.417441570Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 17 00:29:10.419325 kubelet[2929]: E0517 00:29:10.419167 2929 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:29:10.419446 kubelet[2929]: E0517 00:29:10.419338 2929 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:29:10.421043 kubelet[2929]: E0517 00:29:10.419456 2929 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4r67z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-8f77d7b6c-7b9h8_calico-system(fbe11bdf-a1f6-4b6d-b48f-fb37ea5d6cef): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:29:10.426101 kubelet[2929]: E0517 00:29:10.426067 2929 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-8f77d7b6c-7b9h8" podUID="fbe11bdf-a1f6-4b6d-b48f-fb37ea5d6cef" May 17 00:29:10.495338 systemd-networkd[1251]: cali38d828d66ed: Gained IPv6LL May 17 00:29:10.584051 containerd[1620]: 2025-05-17 00:29:10.526 [INFO][4823] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="70ec868d271dd1be74cf28e30e6cd353963f0ab338e242d0f7dcd58ec1993bf9" May 17 00:29:10.584051 containerd[1620]: 2025-05-17 00:29:10.527 [INFO][4823] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="70ec868d271dd1be74cf28e30e6cd353963f0ab338e242d0f7dcd58ec1993bf9" iface="eth0" netns="/var/run/netns/cni-4eb492fe-88b0-b091-f602-3b08a75e8e66" May 17 00:29:10.584051 containerd[1620]: 2025-05-17 00:29:10.527 [INFO][4823] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="70ec868d271dd1be74cf28e30e6cd353963f0ab338e242d0f7dcd58ec1993bf9" iface="eth0" netns="/var/run/netns/cni-4eb492fe-88b0-b091-f602-3b08a75e8e66" May 17 00:29:10.584051 containerd[1620]: 2025-05-17 00:29:10.528 [INFO][4823] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="70ec868d271dd1be74cf28e30e6cd353963f0ab338e242d0f7dcd58ec1993bf9" iface="eth0" netns="/var/run/netns/cni-4eb492fe-88b0-b091-f602-3b08a75e8e66" May 17 00:29:10.584051 containerd[1620]: 2025-05-17 00:29:10.528 [INFO][4823] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="70ec868d271dd1be74cf28e30e6cd353963f0ab338e242d0f7dcd58ec1993bf9" May 17 00:29:10.584051 containerd[1620]: 2025-05-17 00:29:10.529 [INFO][4823] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="70ec868d271dd1be74cf28e30e6cd353963f0ab338e242d0f7dcd58ec1993bf9" May 17 00:29:10.584051 containerd[1620]: 2025-05-17 00:29:10.568 [INFO][4841] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="70ec868d271dd1be74cf28e30e6cd353963f0ab338e242d0f7dcd58ec1993bf9" HandleID="k8s-pod-network.70ec868d271dd1be74cf28e30e6cd353963f0ab338e242d0f7dcd58ec1993bf9" Workload="ci--4081--3--3--n--556bea0d1e-k8s-calico--apiserver--65b5bd8c4b--82zlb-eth0" May 17 00:29:10.584051 containerd[1620]: 2025-05-17 00:29:10.568 [INFO][4841] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:29:10.584051 containerd[1620]: 2025-05-17 00:29:10.568 [INFO][4841] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:29:10.584051 containerd[1620]: 2025-05-17 00:29:10.575 [WARNING][4841] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="70ec868d271dd1be74cf28e30e6cd353963f0ab338e242d0f7dcd58ec1993bf9" HandleID="k8s-pod-network.70ec868d271dd1be74cf28e30e6cd353963f0ab338e242d0f7dcd58ec1993bf9" Workload="ci--4081--3--3--n--556bea0d1e-k8s-calico--apiserver--65b5bd8c4b--82zlb-eth0" May 17 00:29:10.584051 containerd[1620]: 2025-05-17 00:29:10.575 [INFO][4841] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="70ec868d271dd1be74cf28e30e6cd353963f0ab338e242d0f7dcd58ec1993bf9" HandleID="k8s-pod-network.70ec868d271dd1be74cf28e30e6cd353963f0ab338e242d0f7dcd58ec1993bf9" Workload="ci--4081--3--3--n--556bea0d1e-k8s-calico--apiserver--65b5bd8c4b--82zlb-eth0" May 17 00:29:10.584051 containerd[1620]: 2025-05-17 00:29:10.577 [INFO][4841] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
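The [WARNING] "Asked to release address but it doesn't exist. Ignoring" in this teardown, and in the earlier teardowns for the coredns and goldmane sandboxes, is benign: the release path treats a missing allocation as already released and lets the CNI DEL complete, so a repeated or out-of-order sandbox stop does not fail. A sketch of that idempotent behaviour, with an assumed in-memory map standing in for the datastore:

package main

import "fmt"

// releaseByHandle is a toy model of the release path seen in the teardown
// entries: a missing allocation is logged as a warning and treated as
// already released, so a repeated CNI DEL cannot fail the sandbox stop.
func releaseByHandle(allocations map[string]string, handleID string) {
    addr, ok := allocations[handleID]
    if !ok {
        fmt.Printf("WARNING: asked to release %s but it doesn't exist, ignoring\n", handleID)
        return
    }
    delete(allocations, handleID)
    fmt.Printf("released %s for %s\n", addr, handleID)
}

func main() {
    allocations := map[string]string{} // nothing recorded under the old sandbox's handle
    releaseByHandle(allocations, "k8s-pod-network.70ec868d271dd1be74cf28e30e6cd353963f0ab338e242d0f7dcd58ec1993bf9")
}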
May 17 00:29:10.584051 containerd[1620]: 2025-05-17 00:29:10.579 [INFO][4823] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="70ec868d271dd1be74cf28e30e6cd353963f0ab338e242d0f7dcd58ec1993bf9" May 17 00:29:10.588829 containerd[1620]: time="2025-05-17T00:29:10.586784446Z" level=info msg="TearDown network for sandbox \"70ec868d271dd1be74cf28e30e6cd353963f0ab338e242d0f7dcd58ec1993bf9\" successfully" May 17 00:29:10.588829 containerd[1620]: time="2025-05-17T00:29:10.586809873Z" level=info msg="StopPodSandbox for \"70ec868d271dd1be74cf28e30e6cd353963f0ab338e242d0f7dcd58ec1993bf9\" returns successfully" May 17 00:29:10.588829 containerd[1620]: time="2025-05-17T00:29:10.587620135Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65b5bd8c4b-82zlb,Uid:0af51042-daa9-4020-979a-c14dc1a38805,Namespace:calico-apiserver,Attempt:1,}" May 17 00:29:10.588470 systemd[1]: run-netns-cni\x2d4eb492fe\x2d88b0\x2db091\x2df602\x2d3b08a75e8e66.mount: Deactivated successfully. May 17 00:29:10.599939 containerd[1620]: 2025-05-17 00:29:10.530 [INFO][4831] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ec732d11f2bf14b409433f41dbbf61dbba0fe798526cc79e12a921e888b50f06" May 17 00:29:10.599939 containerd[1620]: 2025-05-17 00:29:10.531 [INFO][4831] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ec732d11f2bf14b409433f41dbbf61dbba0fe798526cc79e12a921e888b50f06" iface="eth0" netns="/var/run/netns/cni-feeba7bc-e2ad-3aa7-f114-635d9a7e48ca" May 17 00:29:10.599939 containerd[1620]: 2025-05-17 00:29:10.532 [INFO][4831] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ec732d11f2bf14b409433f41dbbf61dbba0fe798526cc79e12a921e888b50f06" iface="eth0" netns="/var/run/netns/cni-feeba7bc-e2ad-3aa7-f114-635d9a7e48ca" May 17 00:29:10.599939 containerd[1620]: 2025-05-17 00:29:10.533 [INFO][4831] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ec732d11f2bf14b409433f41dbbf61dbba0fe798526cc79e12a921e888b50f06" iface="eth0" netns="/var/run/netns/cni-feeba7bc-e2ad-3aa7-f114-635d9a7e48ca" May 17 00:29:10.599939 containerd[1620]: 2025-05-17 00:29:10.533 [INFO][4831] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ec732d11f2bf14b409433f41dbbf61dbba0fe798526cc79e12a921e888b50f06" May 17 00:29:10.599939 containerd[1620]: 2025-05-17 00:29:10.533 [INFO][4831] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ec732d11f2bf14b409433f41dbbf61dbba0fe798526cc79e12a921e888b50f06" May 17 00:29:10.599939 containerd[1620]: 2025-05-17 00:29:10.572 [INFO][4843] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ec732d11f2bf14b409433f41dbbf61dbba0fe798526cc79e12a921e888b50f06" HandleID="k8s-pod-network.ec732d11f2bf14b409433f41dbbf61dbba0fe798526cc79e12a921e888b50f06" Workload="ci--4081--3--3--n--556bea0d1e-k8s-calico--kube--controllers--6b98696cc--8lr55-eth0" May 17 00:29:10.599939 containerd[1620]: 2025-05-17 00:29:10.572 [INFO][4843] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:29:10.599939 containerd[1620]: 2025-05-17 00:29:10.577 [INFO][4843] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:29:10.599939 containerd[1620]: 2025-05-17 00:29:10.591 [WARNING][4843] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ec732d11f2bf14b409433f41dbbf61dbba0fe798526cc79e12a921e888b50f06" HandleID="k8s-pod-network.ec732d11f2bf14b409433f41dbbf61dbba0fe798526cc79e12a921e888b50f06" Workload="ci--4081--3--3--n--556bea0d1e-k8s-calico--kube--controllers--6b98696cc--8lr55-eth0" May 17 00:29:10.599939 containerd[1620]: 2025-05-17 00:29:10.592 [INFO][4843] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ec732d11f2bf14b409433f41dbbf61dbba0fe798526cc79e12a921e888b50f06" HandleID="k8s-pod-network.ec732d11f2bf14b409433f41dbbf61dbba0fe798526cc79e12a921e888b50f06" Workload="ci--4081--3--3--n--556bea0d1e-k8s-calico--kube--controllers--6b98696cc--8lr55-eth0" May 17 00:29:10.599939 containerd[1620]: 2025-05-17 00:29:10.594 [INFO][4843] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:29:10.599939 containerd[1620]: 2025-05-17 00:29:10.595 [INFO][4831] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ec732d11f2bf14b409433f41dbbf61dbba0fe798526cc79e12a921e888b50f06" May 17 00:29:10.599939 containerd[1620]: time="2025-05-17T00:29:10.598369317Z" level=info msg="TearDown network for sandbox \"ec732d11f2bf14b409433f41dbbf61dbba0fe798526cc79e12a921e888b50f06\" successfully" May 17 00:29:10.599939 containerd[1620]: time="2025-05-17T00:29:10.598383332Z" level=info msg="StopPodSandbox for \"ec732d11f2bf14b409433f41dbbf61dbba0fe798526cc79e12a921e888b50f06\" returns successfully" May 17 00:29:10.599939 containerd[1620]: time="2025-05-17T00:29:10.598787642Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6b98696cc-8lr55,Uid:0cc79bb9-7a42-43e7-a121-7181de14309d,Namespace:calico-system,Attempt:1,}" May 17 00:29:10.602537 systemd[1]: run-netns-cni\x2dfeeba7bc\x2de2ad\x2d3aa7\x2df114\x2d635d9a7e48ca.mount: Deactivated successfully. 
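Between these CNI teardowns, the goldmane pod hits its real problem: pulling ghcr.io/flatcar/calico/goldmane:v3.30.0 fails because the anonymous-token request to ghcr.io returns 403 Forbidden, containerd logs "trying next host" and gives up, kubelet records ErrImagePull, and the entry that follows shows the resulting ImagePullBackOff. A minimal diagnostic sketch that replays the token request; the URL is taken verbatim from the error in the log, everything else is an assumption:

package main

import (
    "fmt"
    "io"
    "net/http"
)

func main() {
    // Token endpoint copied from the pull error in the log; a 403 here means
    // anonymous pulls of this repository are not allowed (or it is unavailable).
    url := "https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io"
    resp, err := http.Get(url)
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()
    body, _ := io.ReadAll(resp.Body)
    if len(body) > 200 {
        body = body[:200]
    }
    fmt.Println(resp.Status) // the log shows "403 Forbidden"
    fmt.Println(string(body))
}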
May 17 00:29:10.664465 kubelet[2929]: E0517 00:29:10.663439 2929 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-7b9h8" podUID="fbe11bdf-a1f6-4b6d-b48f-fb37ea5d6cef" May 17 00:29:10.714416 kubelet[2929]: I0517 00:29:10.713411 2929 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-ppcxd" podStartSLOduration=33.71339423 podStartE2EDuration="33.71339423s" podCreationTimestamp="2025-05-17 00:28:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:29:10.69072045 +0000 UTC m=+40.413150765" watchObservedRunningTime="2025-05-17 00:29:10.71339423 +0000 UTC m=+40.435824546" May 17 00:29:10.822002 systemd-networkd[1251]: calieb2b7c53c50: Link UP May 17 00:29:10.822378 systemd-networkd[1251]: calieb2b7c53c50: Gained carrier May 17 00:29:10.845409 containerd[1620]: 2025-05-17 00:29:10.668 [INFO][4854] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--n--556bea0d1e-k8s-calico--apiserver--65b5bd8c4b--82zlb-eth0 calico-apiserver-65b5bd8c4b- calico-apiserver 0af51042-daa9-4020-979a-c14dc1a38805 937 0 2025-05-17 00:28:45 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:65b5bd8c4b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-3-n-556bea0d1e calico-apiserver-65b5bd8c4b-82zlb eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calieb2b7c53c50 [] [] }} ContainerID="6d0ed38d28b1c5e90843b8d74415a0cd25936fa24e711be030722178a77ad4a4" Namespace="calico-apiserver" Pod="calico-apiserver-65b5bd8c4b-82zlb" WorkloadEndpoint="ci--4081--3--3--n--556bea0d1e-k8s-calico--apiserver--65b5bd8c4b--82zlb-" May 17 00:29:10.845409 containerd[1620]: 2025-05-17 00:29:10.670 [INFO][4854] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6d0ed38d28b1c5e90843b8d74415a0cd25936fa24e711be030722178a77ad4a4" Namespace="calico-apiserver" Pod="calico-apiserver-65b5bd8c4b-82zlb" WorkloadEndpoint="ci--4081--3--3--n--556bea0d1e-k8s-calico--apiserver--65b5bd8c4b--82zlb-eth0" May 17 00:29:10.845409 containerd[1620]: 2025-05-17 00:29:10.769 [INFO][4877] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6d0ed38d28b1c5e90843b8d74415a0cd25936fa24e711be030722178a77ad4a4" HandleID="k8s-pod-network.6d0ed38d28b1c5e90843b8d74415a0cd25936fa24e711be030722178a77ad4a4" Workload="ci--4081--3--3--n--556bea0d1e-k8s-calico--apiserver--65b5bd8c4b--82zlb-eth0" May 17 00:29:10.845409 containerd[1620]: 2025-05-17 00:29:10.769 [INFO][4877] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6d0ed38d28b1c5e90843b8d74415a0cd25936fa24e711be030722178a77ad4a4" HandleID="k8s-pod-network.6d0ed38d28b1c5e90843b8d74415a0cd25936fa24e711be030722178a77ad4a4" Workload="ci--4081--3--3--n--556bea0d1e-k8s-calico--apiserver--65b5bd8c4b--82zlb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d9d80), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-3-n-556bea0d1e", "pod":"calico-apiserver-65b5bd8c4b-82zlb", "timestamp":"2025-05-17 00:29:10.769104406 +0000 
UTC"}, Hostname:"ci-4081-3-3-n-556bea0d1e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:29:10.845409 containerd[1620]: 2025-05-17 00:29:10.769 [INFO][4877] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:29:10.845409 containerd[1620]: 2025-05-17 00:29:10.769 [INFO][4877] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:29:10.845409 containerd[1620]: 2025-05-17 00:29:10.770 [INFO][4877] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-n-556bea0d1e' May 17 00:29:10.845409 containerd[1620]: 2025-05-17 00:29:10.781 [INFO][4877] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6d0ed38d28b1c5e90843b8d74415a0cd25936fa24e711be030722178a77ad4a4" host="ci-4081-3-3-n-556bea0d1e" May 17 00:29:10.845409 containerd[1620]: 2025-05-17 00:29:10.790 [INFO][4877] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-3-n-556bea0d1e" May 17 00:29:10.845409 containerd[1620]: 2025-05-17 00:29:10.794 [INFO][4877] ipam/ipam.go 511: Trying affinity for 192.168.15.64/26 host="ci-4081-3-3-n-556bea0d1e" May 17 00:29:10.845409 containerd[1620]: 2025-05-17 00:29:10.796 [INFO][4877] ipam/ipam.go 158: Attempting to load block cidr=192.168.15.64/26 host="ci-4081-3-3-n-556bea0d1e" May 17 00:29:10.845409 containerd[1620]: 2025-05-17 00:29:10.798 [INFO][4877] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.15.64/26 host="ci-4081-3-3-n-556bea0d1e" May 17 00:29:10.845409 containerd[1620]: 2025-05-17 00:29:10.798 [INFO][4877] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.15.64/26 handle="k8s-pod-network.6d0ed38d28b1c5e90843b8d74415a0cd25936fa24e711be030722178a77ad4a4" host="ci-4081-3-3-n-556bea0d1e" May 17 00:29:10.845409 containerd[1620]: 2025-05-17 00:29:10.800 [INFO][4877] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6d0ed38d28b1c5e90843b8d74415a0cd25936fa24e711be030722178a77ad4a4 May 17 00:29:10.845409 containerd[1620]: 2025-05-17 00:29:10.804 [INFO][4877] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.15.64/26 handle="k8s-pod-network.6d0ed38d28b1c5e90843b8d74415a0cd25936fa24e711be030722178a77ad4a4" host="ci-4081-3-3-n-556bea0d1e" May 17 00:29:10.845409 containerd[1620]: 2025-05-17 00:29:10.810 [INFO][4877] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.15.69/26] block=192.168.15.64/26 handle="k8s-pod-network.6d0ed38d28b1c5e90843b8d74415a0cd25936fa24e711be030722178a77ad4a4" host="ci-4081-3-3-n-556bea0d1e" May 17 00:29:10.845409 containerd[1620]: 2025-05-17 00:29:10.810 [INFO][4877] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.15.69/26] handle="k8s-pod-network.6d0ed38d28b1c5e90843b8d74415a0cd25936fa24e711be030722178a77ad4a4" host="ci-4081-3-3-n-556bea0d1e" May 17 00:29:10.845409 containerd[1620]: 2025-05-17 00:29:10.810 [INFO][4877] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 17 00:29:10.845409 containerd[1620]: 2025-05-17 00:29:10.810 [INFO][4877] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.15.69/26] IPv6=[] ContainerID="6d0ed38d28b1c5e90843b8d74415a0cd25936fa24e711be030722178a77ad4a4" HandleID="k8s-pod-network.6d0ed38d28b1c5e90843b8d74415a0cd25936fa24e711be030722178a77ad4a4" Workload="ci--4081--3--3--n--556bea0d1e-k8s-calico--apiserver--65b5bd8c4b--82zlb-eth0" May 17 00:29:10.847951 containerd[1620]: 2025-05-17 00:29:10.812 [INFO][4854] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6d0ed38d28b1c5e90843b8d74415a0cd25936fa24e711be030722178a77ad4a4" Namespace="calico-apiserver" Pod="calico-apiserver-65b5bd8c4b-82zlb" WorkloadEndpoint="ci--4081--3--3--n--556bea0d1e-k8s-calico--apiserver--65b5bd8c4b--82zlb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--556bea0d1e-k8s-calico--apiserver--65b5bd8c4b--82zlb-eth0", GenerateName:"calico-apiserver-65b5bd8c4b-", Namespace:"calico-apiserver", SelfLink:"", UID:"0af51042-daa9-4020-979a-c14dc1a38805", ResourceVersion:"937", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 28, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"65b5bd8c4b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-556bea0d1e", ContainerID:"", Pod:"calico-apiserver-65b5bd8c4b-82zlb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.15.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calieb2b7c53c50", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:29:10.847951 containerd[1620]: 2025-05-17 00:29:10.812 [INFO][4854] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.15.69/32] ContainerID="6d0ed38d28b1c5e90843b8d74415a0cd25936fa24e711be030722178a77ad4a4" Namespace="calico-apiserver" Pod="calico-apiserver-65b5bd8c4b-82zlb" WorkloadEndpoint="ci--4081--3--3--n--556bea0d1e-k8s-calico--apiserver--65b5bd8c4b--82zlb-eth0" May 17 00:29:10.847951 containerd[1620]: 2025-05-17 00:29:10.812 [INFO][4854] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calieb2b7c53c50 ContainerID="6d0ed38d28b1c5e90843b8d74415a0cd25936fa24e711be030722178a77ad4a4" Namespace="calico-apiserver" Pod="calico-apiserver-65b5bd8c4b-82zlb" WorkloadEndpoint="ci--4081--3--3--n--556bea0d1e-k8s-calico--apiserver--65b5bd8c4b--82zlb-eth0" May 17 00:29:10.847951 containerd[1620]: 2025-05-17 00:29:10.824 [INFO][4854] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6d0ed38d28b1c5e90843b8d74415a0cd25936fa24e711be030722178a77ad4a4" Namespace="calico-apiserver" Pod="calico-apiserver-65b5bd8c4b-82zlb" WorkloadEndpoint="ci--4081--3--3--n--556bea0d1e-k8s-calico--apiserver--65b5bd8c4b--82zlb-eth0" May 17 00:29:10.847951 containerd[1620]: 2025-05-17 00:29:10.824 
[INFO][4854] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6d0ed38d28b1c5e90843b8d74415a0cd25936fa24e711be030722178a77ad4a4" Namespace="calico-apiserver" Pod="calico-apiserver-65b5bd8c4b-82zlb" WorkloadEndpoint="ci--4081--3--3--n--556bea0d1e-k8s-calico--apiserver--65b5bd8c4b--82zlb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--556bea0d1e-k8s-calico--apiserver--65b5bd8c4b--82zlb-eth0", GenerateName:"calico-apiserver-65b5bd8c4b-", Namespace:"calico-apiserver", SelfLink:"", UID:"0af51042-daa9-4020-979a-c14dc1a38805", ResourceVersion:"937", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 28, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"65b5bd8c4b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-556bea0d1e", ContainerID:"6d0ed38d28b1c5e90843b8d74415a0cd25936fa24e711be030722178a77ad4a4", Pod:"calico-apiserver-65b5bd8c4b-82zlb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.15.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calieb2b7c53c50", MAC:"f6:f1:92:86:a6:98", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:29:10.847951 containerd[1620]: 2025-05-17 00:29:10.839 [INFO][4854] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6d0ed38d28b1c5e90843b8d74415a0cd25936fa24e711be030722178a77ad4a4" Namespace="calico-apiserver" Pod="calico-apiserver-65b5bd8c4b-82zlb" WorkloadEndpoint="ci--4081--3--3--n--556bea0d1e-k8s-calico--apiserver--65b5bd8c4b--82zlb-eth0" May 17 00:29:10.867562 containerd[1620]: time="2025-05-17T00:29:10.867277014Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:29:10.867562 containerd[1620]: time="2025-05-17T00:29:10.867326297Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:29:10.867562 containerd[1620]: time="2025-05-17T00:29:10.867348689Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:29:10.867562 containerd[1620]: time="2025-05-17T00:29:10.867463515Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:29:10.930701 systemd-networkd[1251]: calid6fb3df06e9: Link UP May 17 00:29:10.931988 systemd-networkd[1251]: calid6fb3df06e9: Gained carrier May 17 00:29:10.954397 containerd[1620]: 2025-05-17 00:29:10.721 [INFO][4864] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--n--556bea0d1e-k8s-calico--kube--controllers--6b98696cc--8lr55-eth0 calico-kube-controllers-6b98696cc- calico-system 0cc79bb9-7a42-43e7-a121-7181de14309d 938 0 2025-05-17 00:28:48 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6b98696cc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-3-3-n-556bea0d1e calico-kube-controllers-6b98696cc-8lr55 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calid6fb3df06e9 [] [] }} ContainerID="00802f22242d0b6835f3c4d3ea09c626019189767b275913db895be51a551da5" Namespace="calico-system" Pod="calico-kube-controllers-6b98696cc-8lr55" WorkloadEndpoint="ci--4081--3--3--n--556bea0d1e-k8s-calico--kube--controllers--6b98696cc--8lr55-" May 17 00:29:10.954397 containerd[1620]: 2025-05-17 00:29:10.723 [INFO][4864] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="00802f22242d0b6835f3c4d3ea09c626019189767b275913db895be51a551da5" Namespace="calico-system" Pod="calico-kube-controllers-6b98696cc-8lr55" WorkloadEndpoint="ci--4081--3--3--n--556bea0d1e-k8s-calico--kube--controllers--6b98696cc--8lr55-eth0" May 17 00:29:10.954397 containerd[1620]: 2025-05-17 00:29:10.788 [INFO][4886] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="00802f22242d0b6835f3c4d3ea09c626019189767b275913db895be51a551da5" HandleID="k8s-pod-network.00802f22242d0b6835f3c4d3ea09c626019189767b275913db895be51a551da5" Workload="ci--4081--3--3--n--556bea0d1e-k8s-calico--kube--controllers--6b98696cc--8lr55-eth0" May 17 00:29:10.954397 containerd[1620]: 2025-05-17 00:29:10.788 [INFO][4886] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="00802f22242d0b6835f3c4d3ea09c626019189767b275913db895be51a551da5" HandleID="k8s-pod-network.00802f22242d0b6835f3c4d3ea09c626019189767b275913db895be51a551da5" Workload="ci--4081--3--3--n--556bea0d1e-k8s-calico--kube--controllers--6b98696cc--8lr55-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f7d0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-3-n-556bea0d1e", "pod":"calico-kube-controllers-6b98696cc-8lr55", "timestamp":"2025-05-17 00:29:10.78827657 +0000 UTC"}, Hostname:"ci-4081-3-3-n-556bea0d1e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:29:10.954397 containerd[1620]: 2025-05-17 00:29:10.788 [INFO][4886] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:29:10.954397 containerd[1620]: 2025-05-17 00:29:10.810 [INFO][4886] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
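The WorkloadEndpoint names in these entries look mechanically derived: in ci--4081--3--3--n--556bea0d1e-k8s-calico--kube--controllers--6b98696cc--8lr55-eth0, the node name, the orchestrator, the pod name and the interface are joined with single dashes, and dashes inside each component are doubled so the separators stay unambiguous. The sketch below reproduces that pattern as inferred from the log itself; it is not taken from Calico's source.

    package main

    import (
        "fmt"
        "strings"
    )

    // wepName rebuilds an endpoint name the way the names in the log look:
    // dashes inside node and pod are doubled, then the parts are joined with "-".
    func wepName(node, orchestrator, pod, iface string) string {
        esc := func(s string) string { return strings.ReplaceAll(s, "-", "--") }
        return strings.Join([]string{esc(node), orchestrator, esc(pod), iface}, "-")
    }

    func main() {
        fmt.Println(wepName("ci-4081-3-3-n-556bea0d1e", "k8s",
            "calico-kube-controllers-6b98696cc-8lr55", "eth0"))
        // ci--4081--3--3--n--556bea0d1e-k8s-calico--kube--controllers--6b98696cc--8lr55-eth0
    }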
May 17 00:29:10.954397 containerd[1620]: 2025-05-17 00:29:10.810 [INFO][4886] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-n-556bea0d1e' May 17 00:29:10.954397 containerd[1620]: 2025-05-17 00:29:10.879 [INFO][4886] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.00802f22242d0b6835f3c4d3ea09c626019189767b275913db895be51a551da5" host="ci-4081-3-3-n-556bea0d1e" May 17 00:29:10.954397 containerd[1620]: 2025-05-17 00:29:10.891 [INFO][4886] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-3-n-556bea0d1e" May 17 00:29:10.954397 containerd[1620]: 2025-05-17 00:29:10.897 [INFO][4886] ipam/ipam.go 511: Trying affinity for 192.168.15.64/26 host="ci-4081-3-3-n-556bea0d1e" May 17 00:29:10.954397 containerd[1620]: 2025-05-17 00:29:10.900 [INFO][4886] ipam/ipam.go 158: Attempting to load block cidr=192.168.15.64/26 host="ci-4081-3-3-n-556bea0d1e" May 17 00:29:10.954397 containerd[1620]: 2025-05-17 00:29:10.902 [INFO][4886] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.15.64/26 host="ci-4081-3-3-n-556bea0d1e" May 17 00:29:10.954397 containerd[1620]: 2025-05-17 00:29:10.902 [INFO][4886] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.15.64/26 handle="k8s-pod-network.00802f22242d0b6835f3c4d3ea09c626019189767b275913db895be51a551da5" host="ci-4081-3-3-n-556bea0d1e" May 17 00:29:10.954397 containerd[1620]: 2025-05-17 00:29:10.905 [INFO][4886] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.00802f22242d0b6835f3c4d3ea09c626019189767b275913db895be51a551da5 May 17 00:29:10.954397 containerd[1620]: 2025-05-17 00:29:10.909 [INFO][4886] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.15.64/26 handle="k8s-pod-network.00802f22242d0b6835f3c4d3ea09c626019189767b275913db895be51a551da5" host="ci-4081-3-3-n-556bea0d1e" May 17 00:29:10.954397 containerd[1620]: 2025-05-17 00:29:10.920 [INFO][4886] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.15.70/26] block=192.168.15.64/26 handle="k8s-pod-network.00802f22242d0b6835f3c4d3ea09c626019189767b275913db895be51a551da5" host="ci-4081-3-3-n-556bea0d1e" May 17 00:29:10.954397 containerd[1620]: 2025-05-17 00:29:10.920 [INFO][4886] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.15.70/26] handle="k8s-pod-network.00802f22242d0b6835f3c4d3ea09c626019189767b275913db895be51a551da5" host="ci-4081-3-3-n-556bea0d1e" May 17 00:29:10.954397 containerd[1620]: 2025-05-17 00:29:10.920 [INFO][4886] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 17 00:29:10.954397 containerd[1620]: 2025-05-17 00:29:10.920 [INFO][4886] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.15.70/26] IPv6=[] ContainerID="00802f22242d0b6835f3c4d3ea09c626019189767b275913db895be51a551da5" HandleID="k8s-pod-network.00802f22242d0b6835f3c4d3ea09c626019189767b275913db895be51a551da5" Workload="ci--4081--3--3--n--556bea0d1e-k8s-calico--kube--controllers--6b98696cc--8lr55-eth0" May 17 00:29:10.955643 containerd[1620]: 2025-05-17 00:29:10.924 [INFO][4864] cni-plugin/k8s.go 418: Populated endpoint ContainerID="00802f22242d0b6835f3c4d3ea09c626019189767b275913db895be51a551da5" Namespace="calico-system" Pod="calico-kube-controllers-6b98696cc-8lr55" WorkloadEndpoint="ci--4081--3--3--n--556bea0d1e-k8s-calico--kube--controllers--6b98696cc--8lr55-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--556bea0d1e-k8s-calico--kube--controllers--6b98696cc--8lr55-eth0", GenerateName:"calico-kube-controllers-6b98696cc-", Namespace:"calico-system", SelfLink:"", UID:"0cc79bb9-7a42-43e7-a121-7181de14309d", ResourceVersion:"938", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 28, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6b98696cc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-556bea0d1e", ContainerID:"", Pod:"calico-kube-controllers-6b98696cc-8lr55", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.15.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid6fb3df06e9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:29:10.955643 containerd[1620]: 2025-05-17 00:29:10.925 [INFO][4864] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.15.70/32] ContainerID="00802f22242d0b6835f3c4d3ea09c626019189767b275913db895be51a551da5" Namespace="calico-system" Pod="calico-kube-controllers-6b98696cc-8lr55" WorkloadEndpoint="ci--4081--3--3--n--556bea0d1e-k8s-calico--kube--controllers--6b98696cc--8lr55-eth0" May 17 00:29:10.955643 containerd[1620]: 2025-05-17 00:29:10.925 [INFO][4864] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid6fb3df06e9 ContainerID="00802f22242d0b6835f3c4d3ea09c626019189767b275913db895be51a551da5" Namespace="calico-system" Pod="calico-kube-controllers-6b98696cc-8lr55" WorkloadEndpoint="ci--4081--3--3--n--556bea0d1e-k8s-calico--kube--controllers--6b98696cc--8lr55-eth0" May 17 00:29:10.955643 containerd[1620]: 2025-05-17 00:29:10.932 [INFO][4864] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="00802f22242d0b6835f3c4d3ea09c626019189767b275913db895be51a551da5" Namespace="calico-system" Pod="calico-kube-controllers-6b98696cc-8lr55" WorkloadEndpoint="ci--4081--3--3--n--556bea0d1e-k8s-calico--kube--controllers--6b98696cc--8lr55-eth0" 
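Each endpoint is recorded with a /32 in IPNetworks (192.168.15.70/32 here) even though the address came out of the node's 192.168.15.64/26 block: the block is the unit of allocation, while the endpoint itself is a single host address behind its own veth. A quick containment check with the standard library, purely for illustration:

    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        block := netip.MustParsePrefix("192.168.15.64/26")    // node's affine block
        endpoint := netip.MustParsePrefix("192.168.15.70/32") // what the WEP records

        fmt.Println(endpoint.IsSingleIP())           // true: one host address per endpoint
        fmt.Println(block.Contains(endpoint.Addr())) // true: it was carved from this block
    }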
May 17 00:29:10.955643 containerd[1620]: 2025-05-17 00:29:10.937 [INFO][4864] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="00802f22242d0b6835f3c4d3ea09c626019189767b275913db895be51a551da5" Namespace="calico-system" Pod="calico-kube-controllers-6b98696cc-8lr55" WorkloadEndpoint="ci--4081--3--3--n--556bea0d1e-k8s-calico--kube--controllers--6b98696cc--8lr55-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--556bea0d1e-k8s-calico--kube--controllers--6b98696cc--8lr55-eth0", GenerateName:"calico-kube-controllers-6b98696cc-", Namespace:"calico-system", SelfLink:"", UID:"0cc79bb9-7a42-43e7-a121-7181de14309d", ResourceVersion:"938", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 28, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6b98696cc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-556bea0d1e", ContainerID:"00802f22242d0b6835f3c4d3ea09c626019189767b275913db895be51a551da5", Pod:"calico-kube-controllers-6b98696cc-8lr55", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.15.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid6fb3df06e9", MAC:"ca:09:f5:40:2a:51", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:29:10.955643 containerd[1620]: 2025-05-17 00:29:10.951 [INFO][4864] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="00802f22242d0b6835f3c4d3ea09c626019189767b275913db895be51a551da5" Namespace="calico-system" Pod="calico-kube-controllers-6b98696cc-8lr55" WorkloadEndpoint="ci--4081--3--3--n--556bea0d1e-k8s-calico--kube--controllers--6b98696cc--8lr55-eth0" May 17 00:29:10.960894 containerd[1620]: time="2025-05-17T00:29:10.960391651Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65b5bd8c4b-82zlb,Uid:0af51042-daa9-4020-979a-c14dc1a38805,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"6d0ed38d28b1c5e90843b8d74415a0cd25936fa24e711be030722178a77ad4a4\"" May 17 00:29:10.962438 containerd[1620]: time="2025-05-17T00:29:10.962423719Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\"" May 17 00:29:10.978576 containerd[1620]: time="2025-05-17T00:29:10.978432960Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:29:10.979102 containerd[1620]: time="2025-05-17T00:29:10.979009473Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:29:10.979268 containerd[1620]: time="2025-05-17T00:29:10.979087289Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:29:10.980168 containerd[1620]: time="2025-05-17T00:29:10.979372224Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:29:11.027288 containerd[1620]: time="2025-05-17T00:29:11.027201035Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6b98696cc-8lr55,Uid:0cc79bb9-7a42-43e7-a121-7181de14309d,Namespace:calico-system,Attempt:1,} returns sandbox id \"00802f22242d0b6835f3c4d3ea09c626019189767b275913db895be51a551da5\"" May 17 00:29:11.362803 containerd[1620]: time="2025-05-17T00:29:11.362756831Z" level=info msg="StopPodSandbox for \"ed52c75a5f2f17f0b412f707461e80af7ef123c353e646a573b930f578a757ee\"" May 17 00:29:11.436558 containerd[1620]: 2025-05-17 00:29:11.401 [INFO][5001] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ed52c75a5f2f17f0b412f707461e80af7ef123c353e646a573b930f578a757ee" May 17 00:29:11.436558 containerd[1620]: 2025-05-17 00:29:11.401 [INFO][5001] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ed52c75a5f2f17f0b412f707461e80af7ef123c353e646a573b930f578a757ee" iface="eth0" netns="/var/run/netns/cni-d5c1bbeb-2251-0a60-89a9-c6776b57a82d" May 17 00:29:11.436558 containerd[1620]: 2025-05-17 00:29:11.402 [INFO][5001] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ed52c75a5f2f17f0b412f707461e80af7ef123c353e646a573b930f578a757ee" iface="eth0" netns="/var/run/netns/cni-d5c1bbeb-2251-0a60-89a9-c6776b57a82d" May 17 00:29:11.436558 containerd[1620]: 2025-05-17 00:29:11.403 [INFO][5001] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ed52c75a5f2f17f0b412f707461e80af7ef123c353e646a573b930f578a757ee" iface="eth0" netns="/var/run/netns/cni-d5c1bbeb-2251-0a60-89a9-c6776b57a82d" May 17 00:29:11.436558 containerd[1620]: 2025-05-17 00:29:11.403 [INFO][5001] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ed52c75a5f2f17f0b412f707461e80af7ef123c353e646a573b930f578a757ee" May 17 00:29:11.436558 containerd[1620]: 2025-05-17 00:29:11.403 [INFO][5001] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ed52c75a5f2f17f0b412f707461e80af7ef123c353e646a573b930f578a757ee" May 17 00:29:11.436558 containerd[1620]: 2025-05-17 00:29:11.421 [INFO][5008] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ed52c75a5f2f17f0b412f707461e80af7ef123c353e646a573b930f578a757ee" HandleID="k8s-pod-network.ed52c75a5f2f17f0b412f707461e80af7ef123c353e646a573b930f578a757ee" Workload="ci--4081--3--3--n--556bea0d1e-k8s-csi--node--driver--jv7dj-eth0" May 17 00:29:11.436558 containerd[1620]: 2025-05-17 00:29:11.422 [INFO][5008] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:29:11.436558 containerd[1620]: 2025-05-17 00:29:11.422 [INFO][5008] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:29:11.436558 containerd[1620]: 2025-05-17 00:29:11.427 [WARNING][5008] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ed52c75a5f2f17f0b412f707461e80af7ef123c353e646a573b930f578a757ee" HandleID="k8s-pod-network.ed52c75a5f2f17f0b412f707461e80af7ef123c353e646a573b930f578a757ee" Workload="ci--4081--3--3--n--556bea0d1e-k8s-csi--node--driver--jv7dj-eth0" May 17 00:29:11.436558 containerd[1620]: 2025-05-17 00:29:11.428 [INFO][5008] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ed52c75a5f2f17f0b412f707461e80af7ef123c353e646a573b930f578a757ee" HandleID="k8s-pod-network.ed52c75a5f2f17f0b412f707461e80af7ef123c353e646a573b930f578a757ee" Workload="ci--4081--3--3--n--556bea0d1e-k8s-csi--node--driver--jv7dj-eth0" May 17 00:29:11.436558 containerd[1620]: 2025-05-17 00:29:11.429 [INFO][5008] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:29:11.436558 containerd[1620]: 2025-05-17 00:29:11.432 [INFO][5001] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ed52c75a5f2f17f0b412f707461e80af7ef123c353e646a573b930f578a757ee" May 17 00:29:11.436558 containerd[1620]: time="2025-05-17T00:29:11.436499587Z" level=info msg="TearDown network for sandbox \"ed52c75a5f2f17f0b412f707461e80af7ef123c353e646a573b930f578a757ee\" successfully" May 17 00:29:11.436558 containerd[1620]: time="2025-05-17T00:29:11.436523422Z" level=info msg="StopPodSandbox for \"ed52c75a5f2f17f0b412f707461e80af7ef123c353e646a573b930f578a757ee\" returns successfully" May 17 00:29:11.437815 containerd[1620]: time="2025-05-17T00:29:11.437496650Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jv7dj,Uid:9def7614-7d88-42d1-ba99-91a4539e16ec,Namespace:calico-system,Attempt:1,}" May 17 00:29:11.477064 systemd[1]: run-netns-cni\x2dd5c1bbeb\x2d2251\x2d0a60\x2d89a9\x2dc6776b57a82d.mount: Deactivated successfully. May 17 00:29:11.583196 systemd-networkd[1251]: cali722291eb0f7: Gained IPv6LL May 17 00:29:11.635017 systemd-networkd[1251]: calif116710b53b: Link UP May 17 00:29:11.636747 systemd-networkd[1251]: calif116710b53b: Gained carrier May 17 00:29:11.653870 containerd[1620]: 2025-05-17 00:29:11.505 [INFO][5015] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--n--556bea0d1e-k8s-csi--node--driver--jv7dj-eth0 csi-node-driver- calico-system 9def7614-7d88-42d1-ba99-91a4539e16ec 962 0 2025-05-17 00:28:48 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:68bf44dd5 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-3-3-n-556bea0d1e csi-node-driver-jv7dj eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calif116710b53b [] [] }} ContainerID="5bcf70d4f950c80a6cd90f3dc99d75d569def02e4915496254e95f5335e22214" Namespace="calico-system" Pod="csi-node-driver-jv7dj" WorkloadEndpoint="ci--4081--3--3--n--556bea0d1e-k8s-csi--node--driver--jv7dj-" May 17 00:29:11.653870 containerd[1620]: 2025-05-17 00:29:11.505 [INFO][5015] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5bcf70d4f950c80a6cd90f3dc99d75d569def02e4915496254e95f5335e22214" Namespace="calico-system" Pod="csi-node-driver-jv7dj" WorkloadEndpoint="ci--4081--3--3--n--556bea0d1e-k8s-csi--node--driver--jv7dj-eth0" May 17 00:29:11.653870 containerd[1620]: 2025-05-17 00:29:11.559 [INFO][5027] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="5bcf70d4f950c80a6cd90f3dc99d75d569def02e4915496254e95f5335e22214" HandleID="k8s-pod-network.5bcf70d4f950c80a6cd90f3dc99d75d569def02e4915496254e95f5335e22214" Workload="ci--4081--3--3--n--556bea0d1e-k8s-csi--node--driver--jv7dj-eth0" May 17 00:29:11.653870 containerd[1620]: 2025-05-17 00:29:11.562 [INFO][5027] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5bcf70d4f950c80a6cd90f3dc99d75d569def02e4915496254e95f5335e22214" HandleID="k8s-pod-network.5bcf70d4f950c80a6cd90f3dc99d75d569def02e4915496254e95f5335e22214" Workload="ci--4081--3--3--n--556bea0d1e-k8s-csi--node--driver--jv7dj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003aec50), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-3-n-556bea0d1e", "pod":"csi-node-driver-jv7dj", "timestamp":"2025-05-17 00:29:11.559639599 +0000 UTC"}, Hostname:"ci-4081-3-3-n-556bea0d1e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:29:11.653870 containerd[1620]: 2025-05-17 00:29:11.562 [INFO][5027] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:29:11.653870 containerd[1620]: 2025-05-17 00:29:11.562 [INFO][5027] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:29:11.653870 containerd[1620]: 2025-05-17 00:29:11.562 [INFO][5027] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-n-556bea0d1e' May 17 00:29:11.653870 containerd[1620]: 2025-05-17 00:29:11.579 [INFO][5027] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5bcf70d4f950c80a6cd90f3dc99d75d569def02e4915496254e95f5335e22214" host="ci-4081-3-3-n-556bea0d1e" May 17 00:29:11.653870 containerd[1620]: 2025-05-17 00:29:11.603 [INFO][5027] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-3-n-556bea0d1e" May 17 00:29:11.653870 containerd[1620]: 2025-05-17 00:29:11.610 [INFO][5027] ipam/ipam.go 511: Trying affinity for 192.168.15.64/26 host="ci-4081-3-3-n-556bea0d1e" May 17 00:29:11.653870 containerd[1620]: 2025-05-17 00:29:11.612 [INFO][5027] ipam/ipam.go 158: Attempting to load block cidr=192.168.15.64/26 host="ci-4081-3-3-n-556bea0d1e" May 17 00:29:11.653870 containerd[1620]: 2025-05-17 00:29:11.615 [INFO][5027] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.15.64/26 host="ci-4081-3-3-n-556bea0d1e" May 17 00:29:11.653870 containerd[1620]: 2025-05-17 00:29:11.615 [INFO][5027] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.15.64/26 handle="k8s-pod-network.5bcf70d4f950c80a6cd90f3dc99d75d569def02e4915496254e95f5335e22214" host="ci-4081-3-3-n-556bea0d1e" May 17 00:29:11.653870 containerd[1620]: 2025-05-17 00:29:11.616 [INFO][5027] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.5bcf70d4f950c80a6cd90f3dc99d75d569def02e4915496254e95f5335e22214 May 17 00:29:11.653870 containerd[1620]: 2025-05-17 00:29:11.622 [INFO][5027] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.15.64/26 handle="k8s-pod-network.5bcf70d4f950c80a6cd90f3dc99d75d569def02e4915496254e95f5335e22214" host="ci-4081-3-3-n-556bea0d1e" May 17 00:29:11.653870 containerd[1620]: 2025-05-17 00:29:11.628 [INFO][5027] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.15.71/26] block=192.168.15.64/26 handle="k8s-pod-network.5bcf70d4f950c80a6cd90f3dc99d75d569def02e4915496254e95f5335e22214" 
host="ci-4081-3-3-n-556bea0d1e" May 17 00:29:11.653870 containerd[1620]: 2025-05-17 00:29:11.628 [INFO][5027] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.15.71/26] handle="k8s-pod-network.5bcf70d4f950c80a6cd90f3dc99d75d569def02e4915496254e95f5335e22214" host="ci-4081-3-3-n-556bea0d1e" May 17 00:29:11.653870 containerd[1620]: 2025-05-17 00:29:11.628 [INFO][5027] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:29:11.653870 containerd[1620]: 2025-05-17 00:29:11.628 [INFO][5027] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.15.71/26] IPv6=[] ContainerID="5bcf70d4f950c80a6cd90f3dc99d75d569def02e4915496254e95f5335e22214" HandleID="k8s-pod-network.5bcf70d4f950c80a6cd90f3dc99d75d569def02e4915496254e95f5335e22214" Workload="ci--4081--3--3--n--556bea0d1e-k8s-csi--node--driver--jv7dj-eth0" May 17 00:29:11.655278 containerd[1620]: 2025-05-17 00:29:11.631 [INFO][5015] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5bcf70d4f950c80a6cd90f3dc99d75d569def02e4915496254e95f5335e22214" Namespace="calico-system" Pod="csi-node-driver-jv7dj" WorkloadEndpoint="ci--4081--3--3--n--556bea0d1e-k8s-csi--node--driver--jv7dj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--556bea0d1e-k8s-csi--node--driver--jv7dj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9def7614-7d88-42d1-ba99-91a4539e16ec", ResourceVersion:"962", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 28, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"68bf44dd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-556bea0d1e", ContainerID:"", Pod:"csi-node-driver-jv7dj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.15.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif116710b53b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:29:11.655278 containerd[1620]: 2025-05-17 00:29:11.632 [INFO][5015] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.15.71/32] ContainerID="5bcf70d4f950c80a6cd90f3dc99d75d569def02e4915496254e95f5335e22214" Namespace="calico-system" Pod="csi-node-driver-jv7dj" WorkloadEndpoint="ci--4081--3--3--n--556bea0d1e-k8s-csi--node--driver--jv7dj-eth0" May 17 00:29:11.655278 containerd[1620]: 2025-05-17 00:29:11.632 [INFO][5015] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif116710b53b ContainerID="5bcf70d4f950c80a6cd90f3dc99d75d569def02e4915496254e95f5335e22214" Namespace="calico-system" Pod="csi-node-driver-jv7dj" WorkloadEndpoint="ci--4081--3--3--n--556bea0d1e-k8s-csi--node--driver--jv7dj-eth0" May 17 00:29:11.655278 containerd[1620]: 2025-05-17 00:29:11.637 [INFO][5015] cni-plugin/dataplane_linux.go 508: 
Disabling IPv4 forwarding ContainerID="5bcf70d4f950c80a6cd90f3dc99d75d569def02e4915496254e95f5335e22214" Namespace="calico-system" Pod="csi-node-driver-jv7dj" WorkloadEndpoint="ci--4081--3--3--n--556bea0d1e-k8s-csi--node--driver--jv7dj-eth0" May 17 00:29:11.655278 containerd[1620]: 2025-05-17 00:29:11.637 [INFO][5015] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5bcf70d4f950c80a6cd90f3dc99d75d569def02e4915496254e95f5335e22214" Namespace="calico-system" Pod="csi-node-driver-jv7dj" WorkloadEndpoint="ci--4081--3--3--n--556bea0d1e-k8s-csi--node--driver--jv7dj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--556bea0d1e-k8s-csi--node--driver--jv7dj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9def7614-7d88-42d1-ba99-91a4539e16ec", ResourceVersion:"962", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 28, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"68bf44dd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-556bea0d1e", ContainerID:"5bcf70d4f950c80a6cd90f3dc99d75d569def02e4915496254e95f5335e22214", Pod:"csi-node-driver-jv7dj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.15.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif116710b53b", MAC:"3e:b4:23:de:c9:b2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:29:11.655278 containerd[1620]: 2025-05-17 00:29:11.649 [INFO][5015] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5bcf70d4f950c80a6cd90f3dc99d75d569def02e4915496254e95f5335e22214" Namespace="calico-system" Pod="csi-node-driver-jv7dj" WorkloadEndpoint="ci--4081--3--3--n--556bea0d1e-k8s-csi--node--driver--jv7dj-eth0" May 17 00:29:11.674438 kubelet[2929]: E0517 00:29:11.673783 2929 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-7b9h8" podUID="fbe11bdf-a1f6-4b6d-b48f-fb37ea5d6cef" May 17 00:29:11.692839 containerd[1620]: time="2025-05-17T00:29:11.692465144Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:29:11.692839 containerd[1620]: time="2025-05-17T00:29:11.692523383Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:29:11.692839 containerd[1620]: time="2025-05-17T00:29:11.692552768Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:29:11.693838 containerd[1620]: time="2025-05-17T00:29:11.693387156Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:29:11.711204 systemd-networkd[1251]: califf3e24f3903: Gained IPv6LL May 17 00:29:11.742529 containerd[1620]: time="2025-05-17T00:29:11.742493295Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jv7dj,Uid:9def7614-7d88-42d1-ba99-91a4539e16ec,Namespace:calico-system,Attempt:1,} returns sandbox id \"5bcf70d4f950c80a6cd90f3dc99d75d569def02e4915496254e95f5335e22214\"" May 17 00:29:12.363298 containerd[1620]: time="2025-05-17T00:29:12.362784584Z" level=info msg="StopPodSandbox for \"469fb90e5cdf48ce4c934dd8995efe7535d98268c6582305f5b9cc1bbb6b542f\"" May 17 00:29:12.419932 systemd-networkd[1251]: calieb2b7c53c50: Gained IPv6LL May 17 00:29:12.474449 containerd[1620]: 2025-05-17 00:29:12.437 [INFO][5096] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="469fb90e5cdf48ce4c934dd8995efe7535d98268c6582305f5b9cc1bbb6b542f" May 17 00:29:12.474449 containerd[1620]: 2025-05-17 00:29:12.437 [INFO][5096] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="469fb90e5cdf48ce4c934dd8995efe7535d98268c6582305f5b9cc1bbb6b542f" iface="eth0" netns="/var/run/netns/cni-f01dec3c-1916-ddd4-307c-defd5e2fac50" May 17 00:29:12.474449 containerd[1620]: 2025-05-17 00:29:12.437 [INFO][5096] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="469fb90e5cdf48ce4c934dd8995efe7535d98268c6582305f5b9cc1bbb6b542f" iface="eth0" netns="/var/run/netns/cni-f01dec3c-1916-ddd4-307c-defd5e2fac50" May 17 00:29:12.474449 containerd[1620]: 2025-05-17 00:29:12.438 [INFO][5096] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="469fb90e5cdf48ce4c934dd8995efe7535d98268c6582305f5b9cc1bbb6b542f" iface="eth0" netns="/var/run/netns/cni-f01dec3c-1916-ddd4-307c-defd5e2fac50" May 17 00:29:12.474449 containerd[1620]: 2025-05-17 00:29:12.438 [INFO][5096] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="469fb90e5cdf48ce4c934dd8995efe7535d98268c6582305f5b9cc1bbb6b542f" May 17 00:29:12.474449 containerd[1620]: 2025-05-17 00:29:12.438 [INFO][5096] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="469fb90e5cdf48ce4c934dd8995efe7535d98268c6582305f5b9cc1bbb6b542f" May 17 00:29:12.474449 containerd[1620]: 2025-05-17 00:29:12.462 [INFO][5104] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="469fb90e5cdf48ce4c934dd8995efe7535d98268c6582305f5b9cc1bbb6b542f" HandleID="k8s-pod-network.469fb90e5cdf48ce4c934dd8995efe7535d98268c6582305f5b9cc1bbb6b542f" Workload="ci--4081--3--3--n--556bea0d1e-k8s-calico--apiserver--65b5bd8c4b--9dd9g-eth0" May 17 00:29:12.474449 containerd[1620]: 2025-05-17 00:29:12.462 [INFO][5104] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:29:12.474449 containerd[1620]: 2025-05-17 00:29:12.462 [INFO][5104] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:29:12.474449 containerd[1620]: 2025-05-17 00:29:12.470 [WARNING][5104] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="469fb90e5cdf48ce4c934dd8995efe7535d98268c6582305f5b9cc1bbb6b542f" HandleID="k8s-pod-network.469fb90e5cdf48ce4c934dd8995efe7535d98268c6582305f5b9cc1bbb6b542f" Workload="ci--4081--3--3--n--556bea0d1e-k8s-calico--apiserver--65b5bd8c4b--9dd9g-eth0" May 17 00:29:12.474449 containerd[1620]: 2025-05-17 00:29:12.470 [INFO][5104] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="469fb90e5cdf48ce4c934dd8995efe7535d98268c6582305f5b9cc1bbb6b542f" HandleID="k8s-pod-network.469fb90e5cdf48ce4c934dd8995efe7535d98268c6582305f5b9cc1bbb6b542f" Workload="ci--4081--3--3--n--556bea0d1e-k8s-calico--apiserver--65b5bd8c4b--9dd9g-eth0" May 17 00:29:12.474449 containerd[1620]: 2025-05-17 00:29:12.471 [INFO][5104] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:29:12.474449 containerd[1620]: 2025-05-17 00:29:12.472 [INFO][5096] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="469fb90e5cdf48ce4c934dd8995efe7535d98268c6582305f5b9cc1bbb6b542f" May 17 00:29:12.477945 containerd[1620]: time="2025-05-17T00:29:12.475124425Z" level=info msg="TearDown network for sandbox \"469fb90e5cdf48ce4c934dd8995efe7535d98268c6582305f5b9cc1bbb6b542f\" successfully" May 17 00:29:12.477945 containerd[1620]: time="2025-05-17T00:29:12.475154011Z" level=info msg="StopPodSandbox for \"469fb90e5cdf48ce4c934dd8995efe7535d98268c6582305f5b9cc1bbb6b542f\" returns successfully" May 17 00:29:12.477945 containerd[1620]: time="2025-05-17T00:29:12.475673616Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65b5bd8c4b-9dd9g,Uid:62172615-a35b-40f8-8043-de4d70d023f5,Namespace:calico-apiserver,Attempt:1,}" May 17 00:29:12.480357 systemd[1]: run-netns-cni\x2df01dec3c\x2d1916\x2dddd4\x2d307c\x2ddefd5e2fac50.mount: Deactivated successfully. 
May 17 00:29:12.543251 systemd-networkd[1251]: calid6fb3df06e9: Gained IPv6LL May 17 00:29:12.588813 systemd-networkd[1251]: calic7051b56b4b: Link UP May 17 00:29:12.588956 systemd-networkd[1251]: calic7051b56b4b: Gained carrier May 17 00:29:12.602514 containerd[1620]: 2025-05-17 00:29:12.519 [INFO][5111] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--n--556bea0d1e-k8s-calico--apiserver--65b5bd8c4b--9dd9g-eth0 calico-apiserver-65b5bd8c4b- calico-apiserver 62172615-a35b-40f8-8043-de4d70d023f5 974 0 2025-05-17 00:28:45 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:65b5bd8c4b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-3-n-556bea0d1e calico-apiserver-65b5bd8c4b-9dd9g eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calic7051b56b4b [] [] }} ContainerID="1e86c98fe4cd492a333dd0020c1156979727c796f710d412414d111e189b21a6" Namespace="calico-apiserver" Pod="calico-apiserver-65b5bd8c4b-9dd9g" WorkloadEndpoint="ci--4081--3--3--n--556bea0d1e-k8s-calico--apiserver--65b5bd8c4b--9dd9g-" May 17 00:29:12.602514 containerd[1620]: 2025-05-17 00:29:12.520 [INFO][5111] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1e86c98fe4cd492a333dd0020c1156979727c796f710d412414d111e189b21a6" Namespace="calico-apiserver" Pod="calico-apiserver-65b5bd8c4b-9dd9g" WorkloadEndpoint="ci--4081--3--3--n--556bea0d1e-k8s-calico--apiserver--65b5bd8c4b--9dd9g-eth0" May 17 00:29:12.602514 containerd[1620]: 2025-05-17 00:29:12.547 [INFO][5123] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1e86c98fe4cd492a333dd0020c1156979727c796f710d412414d111e189b21a6" HandleID="k8s-pod-network.1e86c98fe4cd492a333dd0020c1156979727c796f710d412414d111e189b21a6" Workload="ci--4081--3--3--n--556bea0d1e-k8s-calico--apiserver--65b5bd8c4b--9dd9g-eth0" May 17 00:29:12.602514 containerd[1620]: 2025-05-17 00:29:12.548 [INFO][5123] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1e86c98fe4cd492a333dd0020c1156979727c796f710d412414d111e189b21a6" HandleID="k8s-pod-network.1e86c98fe4cd492a333dd0020c1156979727c796f710d412414d111e189b21a6" Workload="ci--4081--3--3--n--556bea0d1e-k8s-calico--apiserver--65b5bd8c4b--9dd9g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d9750), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-3-n-556bea0d1e", "pod":"calico-apiserver-65b5bd8c4b-9dd9g", "timestamp":"2025-05-17 00:29:12.547819278 +0000 UTC"}, Hostname:"ci-4081-3-3-n-556bea0d1e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:29:12.602514 containerd[1620]: 2025-05-17 00:29:12.548 [INFO][5123] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:29:12.602514 containerd[1620]: 2025-05-17 00:29:12.548 [INFO][5123] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
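Four separate CNI invocations in this window ([4877], [4886], [5027] and [5123]) each log the same acquire/release pair around the host-wide IPAM lock, which is why the pods end up with consecutive addresses (.69 through .72) instead of racing for the same free slot. Reduced to its shape, that is just a mutex around the assignment step; the types below are invented for illustration.

    package main

    import (
        "fmt"
        "sync"
    )

    // hostIPAM serializes all assignments on one node, mirroring the
    // "host-wide IPAM lock" messages in the log.
    type hostIPAM struct {
        mu   sync.Mutex
        next int // next free low octet in a pretend 192.168.15.64/26 block
    }

    func (h *hostIPAM) assign() string {
        h.mu.Lock() // "Acquired host-wide IPAM lock."
        defer h.mu.Unlock()
        ip := fmt.Sprintf("192.168.15.%d/26", h.next)
        h.next++
        return ip // lock released on return: "Released host-wide IPAM lock."
    }

    func main() {
        ipam := &hostIPAM{next: 69}
        var wg sync.WaitGroup
        for _, pod := range []string{"82zlb", "8lr55", "jv7dj", "9dd9g"} {
            wg.Add(1)
            go func(p string) {
                defer wg.Done()
                fmt.Println(p, "->", ipam.assign()) // order varies, addresses never collide
            }(pod)
        }
        wg.Wait()
    }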
May 17 00:29:12.602514 containerd[1620]: 2025-05-17 00:29:12.548 [INFO][5123] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-n-556bea0d1e' May 17 00:29:12.602514 containerd[1620]: 2025-05-17 00:29:12.554 [INFO][5123] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1e86c98fe4cd492a333dd0020c1156979727c796f710d412414d111e189b21a6" host="ci-4081-3-3-n-556bea0d1e" May 17 00:29:12.602514 containerd[1620]: 2025-05-17 00:29:12.559 [INFO][5123] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-3-n-556bea0d1e" May 17 00:29:12.602514 containerd[1620]: 2025-05-17 00:29:12.563 [INFO][5123] ipam/ipam.go 511: Trying affinity for 192.168.15.64/26 host="ci-4081-3-3-n-556bea0d1e" May 17 00:29:12.602514 containerd[1620]: 2025-05-17 00:29:12.565 [INFO][5123] ipam/ipam.go 158: Attempting to load block cidr=192.168.15.64/26 host="ci-4081-3-3-n-556bea0d1e" May 17 00:29:12.602514 containerd[1620]: 2025-05-17 00:29:12.567 [INFO][5123] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.15.64/26 host="ci-4081-3-3-n-556bea0d1e" May 17 00:29:12.602514 containerd[1620]: 2025-05-17 00:29:12.567 [INFO][5123] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.15.64/26 handle="k8s-pod-network.1e86c98fe4cd492a333dd0020c1156979727c796f710d412414d111e189b21a6" host="ci-4081-3-3-n-556bea0d1e" May 17 00:29:12.602514 containerd[1620]: 2025-05-17 00:29:12.569 [INFO][5123] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.1e86c98fe4cd492a333dd0020c1156979727c796f710d412414d111e189b21a6 May 17 00:29:12.602514 containerd[1620]: 2025-05-17 00:29:12.572 [INFO][5123] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.15.64/26 handle="k8s-pod-network.1e86c98fe4cd492a333dd0020c1156979727c796f710d412414d111e189b21a6" host="ci-4081-3-3-n-556bea0d1e" May 17 00:29:12.602514 containerd[1620]: 2025-05-17 00:29:12.580 [INFO][5123] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.15.72/26] block=192.168.15.64/26 handle="k8s-pod-network.1e86c98fe4cd492a333dd0020c1156979727c796f710d412414d111e189b21a6" host="ci-4081-3-3-n-556bea0d1e" May 17 00:29:12.602514 containerd[1620]: 2025-05-17 00:29:12.580 [INFO][5123] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.15.72/26] handle="k8s-pod-network.1e86c98fe4cd492a333dd0020c1156979727c796f710d412414d111e189b21a6" host="ci-4081-3-3-n-556bea0d1e" May 17 00:29:12.602514 containerd[1620]: 2025-05-17 00:29:12.580 [INFO][5123] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 17 00:29:12.602514 containerd[1620]: 2025-05-17 00:29:12.580 [INFO][5123] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.15.72/26] IPv6=[] ContainerID="1e86c98fe4cd492a333dd0020c1156979727c796f710d412414d111e189b21a6" HandleID="k8s-pod-network.1e86c98fe4cd492a333dd0020c1156979727c796f710d412414d111e189b21a6" Workload="ci--4081--3--3--n--556bea0d1e-k8s-calico--apiserver--65b5bd8c4b--9dd9g-eth0" May 17 00:29:12.603475 containerd[1620]: 2025-05-17 00:29:12.585 [INFO][5111] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1e86c98fe4cd492a333dd0020c1156979727c796f710d412414d111e189b21a6" Namespace="calico-apiserver" Pod="calico-apiserver-65b5bd8c4b-9dd9g" WorkloadEndpoint="ci--4081--3--3--n--556bea0d1e-k8s-calico--apiserver--65b5bd8c4b--9dd9g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--556bea0d1e-k8s-calico--apiserver--65b5bd8c4b--9dd9g-eth0", GenerateName:"calico-apiserver-65b5bd8c4b-", Namespace:"calico-apiserver", SelfLink:"", UID:"62172615-a35b-40f8-8043-de4d70d023f5", ResourceVersion:"974", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 28, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"65b5bd8c4b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-556bea0d1e", ContainerID:"", Pod:"calico-apiserver-65b5bd8c4b-9dd9g", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.15.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic7051b56b4b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:29:12.603475 containerd[1620]: 2025-05-17 00:29:12.585 [INFO][5111] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.15.72/32] ContainerID="1e86c98fe4cd492a333dd0020c1156979727c796f710d412414d111e189b21a6" Namespace="calico-apiserver" Pod="calico-apiserver-65b5bd8c4b-9dd9g" WorkloadEndpoint="ci--4081--3--3--n--556bea0d1e-k8s-calico--apiserver--65b5bd8c4b--9dd9g-eth0" May 17 00:29:12.603475 containerd[1620]: 2025-05-17 00:29:12.585 [INFO][5111] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic7051b56b4b ContainerID="1e86c98fe4cd492a333dd0020c1156979727c796f710d412414d111e189b21a6" Namespace="calico-apiserver" Pod="calico-apiserver-65b5bd8c4b-9dd9g" WorkloadEndpoint="ci--4081--3--3--n--556bea0d1e-k8s-calico--apiserver--65b5bd8c4b--9dd9g-eth0" May 17 00:29:12.603475 containerd[1620]: 2025-05-17 00:29:12.588 [INFO][5111] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1e86c98fe4cd492a333dd0020c1156979727c796f710d412414d111e189b21a6" Namespace="calico-apiserver" Pod="calico-apiserver-65b5bd8c4b-9dd9g" WorkloadEndpoint="ci--4081--3--3--n--556bea0d1e-k8s-calico--apiserver--65b5bd8c4b--9dd9g-eth0" May 17 00:29:12.603475 containerd[1620]: 2025-05-17 00:29:12.589 
[INFO][5111] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1e86c98fe4cd492a333dd0020c1156979727c796f710d412414d111e189b21a6" Namespace="calico-apiserver" Pod="calico-apiserver-65b5bd8c4b-9dd9g" WorkloadEndpoint="ci--4081--3--3--n--556bea0d1e-k8s-calico--apiserver--65b5bd8c4b--9dd9g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--556bea0d1e-k8s-calico--apiserver--65b5bd8c4b--9dd9g-eth0", GenerateName:"calico-apiserver-65b5bd8c4b-", Namespace:"calico-apiserver", SelfLink:"", UID:"62172615-a35b-40f8-8043-de4d70d023f5", ResourceVersion:"974", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 28, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"65b5bd8c4b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-556bea0d1e", ContainerID:"1e86c98fe4cd492a333dd0020c1156979727c796f710d412414d111e189b21a6", Pod:"calico-apiserver-65b5bd8c4b-9dd9g", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.15.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic7051b56b4b", MAC:"82:64:f1:da:f7:1a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:29:12.603475 containerd[1620]: 2025-05-17 00:29:12.598 [INFO][5111] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1e86c98fe4cd492a333dd0020c1156979727c796f710d412414d111e189b21a6" Namespace="calico-apiserver" Pod="calico-apiserver-65b5bd8c4b-9dd9g" WorkloadEndpoint="ci--4081--3--3--n--556bea0d1e-k8s-calico--apiserver--65b5bd8c4b--9dd9g-eth0" May 17 00:29:12.619872 containerd[1620]: time="2025-05-17T00:29:12.619712708Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:29:12.619992 containerd[1620]: time="2025-05-17T00:29:12.619913325Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:29:12.619992 containerd[1620]: time="2025-05-17T00:29:12.619952619Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:29:12.620135 containerd[1620]: time="2025-05-17T00:29:12.620092051Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:29:12.644347 systemd[1]: run-containerd-runc-k8s.io-1e86c98fe4cd492a333dd0020c1156979727c796f710d412414d111e189b21a6-runc.eEDPIz.mount: Deactivated successfully. 
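The blocks of loading plugin "io.containerd..." messages recur around every sandbox start in this log and, like the other containerd lines here, carry key=value fields (time, level, msg, runtime, type). A crude extractor for level and msg, written against nothing more than the format visible above:

    package main

    import (
        "fmt"
        "regexp"
    )

    // Pulls level and msg out of containerd's key=value lines, allowing
    // for the escaped quotes inside msg ("loading plugin \"...\"...").
    var fieldRe = regexp.MustCompile(`level=(\w+) msg="((?:[^"\\]|\\.)*)"`)

    func main() {
        line := `time="2025-05-17T00:29:12.619952619Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1`
        if m := fieldRe.FindStringSubmatch(line); m != nil {
            fmt.Println("level:", m[1])
            fmt.Println("msg:  ", m[2])
        }
    }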
May 17 00:29:12.686866 containerd[1620]: time="2025-05-17T00:29:12.686708318Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65b5bd8c4b-9dd9g,Uid:62172615-a35b-40f8-8043-de4d70d023f5,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"1e86c98fe4cd492a333dd0020c1156979727c796f710d412414d111e189b21a6\"" May 17 00:29:13.503685 systemd-networkd[1251]: calif116710b53b: Gained IPv6LL May 17 00:29:14.326724 containerd[1620]: time="2025-05-17T00:29:14.326671369Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:29:14.334567 containerd[1620]: time="2025-05-17T00:29:14.334524970Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.0: active requests=0, bytes read=47252431" May 17 00:29:14.334951 containerd[1620]: time="2025-05-17T00:29:14.334613126Z" level=info msg="ImageCreate event name:\"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:29:14.337598 containerd[1620]: time="2025-05-17T00:29:14.336699855Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:29:14.337598 containerd[1620]: time="2025-05-17T00:29:14.337276468Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" with image id \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\", size \"48745150\" in 3.374766639s" May 17 00:29:14.337598 containerd[1620]: time="2025-05-17T00:29:14.337299031Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" returns image reference \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\"" May 17 00:29:14.340683 containerd[1620]: time="2025-05-17T00:29:14.340403741Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\"" May 17 00:29:14.343190 containerd[1620]: time="2025-05-17T00:29:14.343124072Z" level=info msg="CreateContainer within sandbox \"6d0ed38d28b1c5e90843b8d74415a0cd25936fa24e711be030722178a77ad4a4\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 17 00:29:14.366040 containerd[1620]: time="2025-05-17T00:29:14.364016623Z" level=info msg="CreateContainer within sandbox \"6d0ed38d28b1c5e90843b8d74415a0cd25936fa24e711be030722178a77ad4a4\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"918bc2f06d6456374d9aea986914e4587446659825302592cf134c022691d08b\"" May 17 00:29:14.369058 containerd[1620]: time="2025-05-17T00:29:14.367468727Z" level=info msg="StartContainer for \"918bc2f06d6456374d9aea986914e4587446659825302592cf134c022691d08b\"" May 17 00:29:14.432978 containerd[1620]: time="2025-05-17T00:29:14.432953613Z" level=info msg="StartContainer for \"918bc2f06d6456374d9aea986914e4587446659825302592cf134c022691d08b\" returns successfully" May 17 00:29:14.463270 systemd-networkd[1251]: calic7051b56b4b: Gained IPv6LL May 17 00:29:15.684891 kubelet[2929]: I0517 00:29:15.684856 2929 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 00:29:18.280293 containerd[1620]: time="2025-05-17T00:29:18.280252072Z" level=info 
msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:29:18.281574 containerd[1620]: time="2025-05-17T00:29:18.281533349Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.0: active requests=0, bytes read=51178512" May 17 00:29:18.282497 containerd[1620]: time="2025-05-17T00:29:18.282374679Z" level=info msg="ImageCreate event name:\"sha256:094053209304a3d20e6561c18d37ac2dc4c7fbb68c1579d9864c303edebffa50\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:29:18.290255 containerd[1620]: time="2025-05-17T00:29:18.290082875Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:eb5bc5c9e7a71f1d8ea69bbcc8e54b84fb7ec1e32d919c8b148f80b770f20182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:29:18.291252 containerd[1620]: time="2025-05-17T00:29:18.291082052Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" with image id \"sha256:094053209304a3d20e6561c18d37ac2dc4c7fbb68c1579d9864c303edebffa50\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:eb5bc5c9e7a71f1d8ea69bbcc8e54b84fb7ec1e32d919c8b148f80b770f20182\", size \"52671183\" in 3.950653713s" May 17 00:29:18.291252 containerd[1620]: time="2025-05-17T00:29:18.291105396Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" returns image reference \"sha256:094053209304a3d20e6561c18d37ac2dc4c7fbb68c1579d9864c303edebffa50\"" May 17 00:29:18.292229 containerd[1620]: time="2025-05-17T00:29:18.292207496Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.0\"" May 17 00:29:18.308515 containerd[1620]: time="2025-05-17T00:29:18.308462324Z" level=info msg="CreateContainer within sandbox \"00802f22242d0b6835f3c4d3ea09c626019189767b275913db895be51a551da5\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 17 00:29:18.321607 containerd[1620]: time="2025-05-17T00:29:18.321581382Z" level=info msg="CreateContainer within sandbox \"00802f22242d0b6835f3c4d3ea09c626019189767b275913db895be51a551da5\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"b637eb454efe074115eb2909aed94dc9ef2050573134b633ec5acab58f62e73d\"" May 17 00:29:18.323260 containerd[1620]: time="2025-05-17T00:29:18.323239557Z" level=info msg="StartContainer for \"b637eb454efe074115eb2909aed94dc9ef2050573134b633ec5acab58f62e73d\"" May 17 00:29:18.399268 kubelet[2929]: I0517 00:29:18.384483 2929 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-65b5bd8c4b-82zlb" podStartSLOduration=30.008038519 podStartE2EDuration="33.384468342s" podCreationTimestamp="2025-05-17 00:28:45 +0000 UTC" firstStartedPulling="2025-05-17 00:29:10.96178044 +0000 UTC m=+40.684210755" lastFinishedPulling="2025-05-17 00:29:14.338210262 +0000 UTC m=+44.060640578" observedRunningTime="2025-05-17 00:29:14.715597139 +0000 UTC m=+44.438027454" watchObservedRunningTime="2025-05-17 00:29:18.384468342 +0000 UTC m=+48.106898657" May 17 00:29:18.441341 containerd[1620]: time="2025-05-17T00:29:18.441306681Z" level=info msg="StartContainer for \"b637eb454efe074115eb2909aed94dc9ef2050573134b633ec5acab58f62e73d\" returns successfully" May 17 00:29:18.810039 kubelet[2929]: I0517 00:29:18.809942 2929 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="calico-system/calico-kube-controllers-6b98696cc-8lr55" podStartSLOduration=23.550669442 podStartE2EDuration="30.80992573s" podCreationTimestamp="2025-05-17 00:28:48 +0000 UTC" firstStartedPulling="2025-05-17 00:29:11.032474601 +0000 UTC m=+40.754904917" lastFinishedPulling="2025-05-17 00:29:18.291730891 +0000 UTC m=+48.014161205" observedRunningTime="2025-05-17 00:29:18.754585414 +0000 UTC m=+48.477015739" watchObservedRunningTime="2025-05-17 00:29:18.80992573 +0000 UTC m=+48.532356045" May 17 00:29:19.935629 containerd[1620]: time="2025-05-17T00:29:19.935548206Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:29:19.937579 containerd[1620]: time="2025-05-17T00:29:19.937476217Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.0: active requests=0, bytes read=8758390" May 17 00:29:19.949326 containerd[1620]: time="2025-05-17T00:29:19.948756001Z" level=info msg="ImageCreate event name:\"sha256:d5b08093b7928c0ac1122e59edf69b2e58c6d10ecc8b9e5cffeb809a956dc48e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:29:19.951107 containerd[1620]: time="2025-05-17T00:29:19.951088051Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:27883a4104876fe239311dd93ce6efd0c4a87de7163d57a4c8d96bd65a287ffd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:29:19.951425 containerd[1620]: time="2025-05-17T00:29:19.951392913Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.0\" with image id \"sha256:d5b08093b7928c0ac1122e59edf69b2e58c6d10ecc8b9e5cffeb809a956dc48e\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:27883a4104876fe239311dd93ce6efd0c4a87de7163d57a4c8d96bd65a287ffd\", size \"10251093\" in 1.65915997s" May 17 00:29:19.951822 containerd[1620]: time="2025-05-17T00:29:19.951423230Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.0\" returns image reference \"sha256:d5b08093b7928c0ac1122e59edf69b2e58c6d10ecc8b9e5cffeb809a956dc48e\"" May 17 00:29:19.952820 containerd[1620]: time="2025-05-17T00:29:19.952796459Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\"" May 17 00:29:19.978084 containerd[1620]: time="2025-05-17T00:29:19.978053094Z" level=info msg="CreateContainer within sandbox \"5bcf70d4f950c80a6cd90f3dc99d75d569def02e4915496254e95f5335e22214\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 17 00:29:20.026505 containerd[1620]: time="2025-05-17T00:29:20.026414288Z" level=info msg="CreateContainer within sandbox \"5bcf70d4f950c80a6cd90f3dc99d75d569def02e4915496254e95f5335e22214\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"058a23bb53925809624b6e75138490ec92592c378a5e8c84c02cd3f35e7887a6\"" May 17 00:29:20.029286 containerd[1620]: time="2025-05-17T00:29:20.028575528Z" level=info msg="StartContainer for \"058a23bb53925809624b6e75138490ec92592c378a5e8c84c02cd3f35e7887a6\"" May 17 00:29:20.113367 containerd[1620]: time="2025-05-17T00:29:20.113327005Z" level=info msg="StartContainer for \"058a23bb53925809624b6e75138490ec92592c378a5e8c84c02cd3f35e7887a6\" returns successfully" May 17 00:29:20.438934 containerd[1620]: time="2025-05-17T00:29:20.438886452Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:29:20.440391 containerd[1620]: 
time="2025-05-17T00:29:20.440330755Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.0: active requests=0, bytes read=77" May 17 00:29:20.442370 containerd[1620]: time="2025-05-17T00:29:20.442339186Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" with image id \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\", size \"48745150\" in 489.255057ms" May 17 00:29:20.442735 containerd[1620]: time="2025-05-17T00:29:20.442369664Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" returns image reference \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\"" May 17 00:29:20.443276 containerd[1620]: time="2025-05-17T00:29:20.443152775Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 17 00:29:20.445018 containerd[1620]: time="2025-05-17T00:29:20.444939361Z" level=info msg="CreateContainer within sandbox \"1e86c98fe4cd492a333dd0020c1156979727c796f710d412414d111e189b21a6\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 17 00:29:20.460006 containerd[1620]: time="2025-05-17T00:29:20.459964888Z" level=info msg="CreateContainer within sandbox \"1e86c98fe4cd492a333dd0020c1156979727c796f710d412414d111e189b21a6\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"8e0c6858543a1cc18d1adaadc025e994e7a8bf07df7e243adcdf97760d31eb39\"" May 17 00:29:20.461179 containerd[1620]: time="2025-05-17T00:29:20.461141118Z" level=info msg="StartContainer for \"8e0c6858543a1cc18d1adaadc025e994e7a8bf07df7e243adcdf97760d31eb39\"" May 17 00:29:20.529409 containerd[1620]: time="2025-05-17T00:29:20.528610820Z" level=info msg="StartContainer for \"8e0c6858543a1cc18d1adaadc025e994e7a8bf07df7e243adcdf97760d31eb39\" returns successfully" May 17 00:29:20.777087 containerd[1620]: time="2025-05-17T00:29:20.776487774Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:29:20.779310 containerd[1620]: time="2025-05-17T00:29:20.778171286Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:29:20.779310 containerd[1620]: time="2025-05-17T00:29:20.778223545Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 17 00:29:20.781769 kubelet[2929]: I0517 00:29:20.781155 2929 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-65b5bd8c4b-9dd9g" podStartSLOduration=28.026867164 podStartE2EDuration="35.781143458s" podCreationTimestamp="2025-05-17 00:28:45 +0000 UTC" firstStartedPulling="2025-05-17 00:29:12.688719725 +0000 UTC m=+42.411150041" lastFinishedPulling="2025-05-17 00:29:20.44299602 +0000 UTC m=+50.165426335" observedRunningTime="2025-05-17 00:29:20.779257657 +0000 
UTC m=+50.501687972" watchObservedRunningTime="2025-05-17 00:29:20.781143458 +0000 UTC m=+50.503573774" May 17 00:29:20.788074 kubelet[2929]: E0517 00:29:20.786708 2929 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:29:20.791316 kubelet[2929]: E0517 00:29:20.791294 2929 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:29:20.792921 containerd[1620]: time="2025-05-17T00:29:20.792885160Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\"" May 17 00:29:20.795560 kubelet[2929]: E0517 00:29:20.795529 2929 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:c907cd8c04324328b30a0d9a2949a437,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-67frm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5f94896dd9-9mn8s_calico-system(edf2dfa8-9e01-4421-aebb-92beec01d94f): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:29:21.769790 kubelet[2929]: I0517 00:29:21.769747 2929 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 00:29:22.582930 
containerd[1620]: time="2025-05-17T00:29:22.582877934Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:29:22.583627 containerd[1620]: time="2025-05-17T00:29:22.583496796Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0: active requests=0, bytes read=14705639" May 17 00:29:22.584440 containerd[1620]: time="2025-05-17T00:29:22.584413679Z" level=info msg="ImageCreate event name:\"sha256:45c8692ffc029387ee93ba83da8ad26da9749cf2ba6ed03981f8f9933ed5a5b0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:29:22.587265 containerd[1620]: time="2025-05-17T00:29:22.586421800Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:dca5c16181edde2e860463615523ce457cd9dcfca85b7cfdcd6f3ea7de6f2ac8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:29:22.587265 containerd[1620]: time="2025-05-17T00:29:22.586973255Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" with image id \"sha256:45c8692ffc029387ee93ba83da8ad26da9749cf2ba6ed03981f8f9933ed5a5b0\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:dca5c16181edde2e860463615523ce457cd9dcfca85b7cfdcd6f3ea7de6f2ac8\", size \"16198294\" in 1.793531791s" May 17 00:29:22.587265 containerd[1620]: time="2025-05-17T00:29:22.586995136Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" returns image reference \"sha256:45c8692ffc029387ee93ba83da8ad26da9749cf2ba6ed03981f8f9933ed5a5b0\"" May 17 00:29:22.588598 containerd[1620]: time="2025-05-17T00:29:22.588582939Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 17 00:29:22.590574 containerd[1620]: time="2025-05-17T00:29:22.590549241Z" level=info msg="CreateContainer within sandbox \"5bcf70d4f950c80a6cd90f3dc99d75d569def02e4915496254e95f5335e22214\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" May 17 00:29:22.613053 containerd[1620]: time="2025-05-17T00:29:22.612982961Z" level=info msg="CreateContainer within sandbox \"5bcf70d4f950c80a6cd90f3dc99d75d569def02e4915496254e95f5335e22214\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"fb018a0a07370731b3003d6cf3daa182938a7079092ed1eb1a5d07d9b555cd78\"" May 17 00:29:22.614208 containerd[1620]: time="2025-05-17T00:29:22.614190980Z" level=info msg="StartContainer for \"fb018a0a07370731b3003d6cf3daa182938a7079092ed1eb1a5d07d9b555cd78\"" May 17 00:29:22.719772 containerd[1620]: time="2025-05-17T00:29:22.719743066Z" level=info msg="StartContainer for \"fb018a0a07370731b3003d6cf3daa182938a7079092ed1eb1a5d07d9b555cd78\" returns successfully" May 17 00:29:22.791962 kubelet[2929]: I0517 00:29:22.791904 2929 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-jv7dj" podStartSLOduration=23.947502562 podStartE2EDuration="34.791887313s" podCreationTimestamp="2025-05-17 00:28:48 +0000 UTC" firstStartedPulling="2025-05-17 00:29:11.743923201 +0000 UTC m=+41.466353516" lastFinishedPulling="2025-05-17 00:29:22.588307952 +0000 UTC m=+52.310738267" observedRunningTime="2025-05-17 00:29:22.787396719 +0000 UTC m=+52.509827035" watchObservedRunningTime="2025-05-17 00:29:22.791887313 +0000 UTC m=+52.514317628" May 17 00:29:22.915729 containerd[1620]: 
time="2025-05-17T00:29:22.915057094Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:29:22.916372 containerd[1620]: time="2025-05-17T00:29:22.916323442Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:29:22.916924 containerd[1620]: time="2025-05-17T00:29:22.916886531Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 17 00:29:22.917279 kubelet[2929]: E0517 00:29:22.917012 2929 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:29:22.917279 kubelet[2929]: E0517 00:29:22.917066 2929 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:29:22.917279 kubelet[2929]: E0517 00:29:22.917161 2929 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-67frm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5f94896dd9-9mn8s_calico-system(edf2dfa8-9e01-4421-aebb-92beec01d94f): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:29:22.930732 kubelet[2929]: E0517 00:29:22.930655 2929 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-5f94896dd9-9mn8s" podUID="edf2dfa8-9e01-4421-aebb-92beec01d94f" May 17 00:29:23.689148 kubelet[2929]: I0517 00:29:23.685605 2929 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: 
csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 May 17 00:29:23.691014 kubelet[2929]: I0517 00:29:23.690968 2929 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock May 17 00:29:26.379011 containerd[1620]: time="2025-05-17T00:29:26.378818779Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 17 00:29:26.687477 containerd[1620]: time="2025-05-17T00:29:26.687348844Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:29:26.688629 containerd[1620]: time="2025-05-17T00:29:26.688579195Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:29:26.688816 containerd[1620]: time="2025-05-17T00:29:26.688698428Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 17 00:29:26.688848 kubelet[2929]: E0517 00:29:26.688806 2929 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:29:26.689199 kubelet[2929]: E0517 00:29:26.688853 2929 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:29:26.689199 kubelet[2929]: E0517 00:29:26.688961 2929 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4r67z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-8f77d7b6c-7b9h8_calico-system(fbe11bdf-a1f6-4b6d-b48f-fb37ea5d6cef): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:29:26.690567 kubelet[2929]: E0517 00:29:26.690391 2929 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-8f77d7b6c-7b9h8" podUID="fbe11bdf-a1f6-4b6d-b48f-fb37ea5d6cef" May 17 00:29:30.551216 containerd[1620]: time="2025-05-17T00:29:30.550926899Z" level=info msg="StopPodSandbox for \"ec732d11f2bf14b409433f41dbbf61dbba0fe798526cc79e12a921e888b50f06\"" May 17 00:29:31.126812 containerd[1620]: 2025-05-17 00:29:30.860 [WARNING][5454] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="ec732d11f2bf14b409433f41dbbf61dbba0fe798526cc79e12a921e888b50f06" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--556bea0d1e-k8s-calico--kube--controllers--6b98696cc--8lr55-eth0", GenerateName:"calico-kube-controllers-6b98696cc-", Namespace:"calico-system", SelfLink:"", UID:"0cc79bb9-7a42-43e7-a121-7181de14309d", ResourceVersion:"1004", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 28, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6b98696cc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-556bea0d1e", ContainerID:"00802f22242d0b6835f3c4d3ea09c626019189767b275913db895be51a551da5", Pod:"calico-kube-controllers-6b98696cc-8lr55", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.15.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid6fb3df06e9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:29:31.126812 containerd[1620]: 2025-05-17 00:29:30.865 [INFO][5454] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ec732d11f2bf14b409433f41dbbf61dbba0fe798526cc79e12a921e888b50f06" May 17 00:29:31.126812 containerd[1620]: 2025-05-17 00:29:30.865 [INFO][5454] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="ec732d11f2bf14b409433f41dbbf61dbba0fe798526cc79e12a921e888b50f06" iface="eth0" netns="" May 17 00:29:31.126812 containerd[1620]: 2025-05-17 00:29:30.865 [INFO][5454] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ec732d11f2bf14b409433f41dbbf61dbba0fe798526cc79e12a921e888b50f06" May 17 00:29:31.126812 containerd[1620]: 2025-05-17 00:29:30.866 [INFO][5454] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ec732d11f2bf14b409433f41dbbf61dbba0fe798526cc79e12a921e888b50f06" May 17 00:29:31.126812 containerd[1620]: 2025-05-17 00:29:31.084 [INFO][5461] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ec732d11f2bf14b409433f41dbbf61dbba0fe798526cc79e12a921e888b50f06" HandleID="k8s-pod-network.ec732d11f2bf14b409433f41dbbf61dbba0fe798526cc79e12a921e888b50f06" Workload="ci--4081--3--3--n--556bea0d1e-k8s-calico--kube--controllers--6b98696cc--8lr55-eth0" May 17 00:29:31.126812 containerd[1620]: 2025-05-17 00:29:31.090 [INFO][5461] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:29:31.126812 containerd[1620]: 2025-05-17 00:29:31.090 [INFO][5461] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:29:31.126812 containerd[1620]: 2025-05-17 00:29:31.111 [WARNING][5461] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ec732d11f2bf14b409433f41dbbf61dbba0fe798526cc79e12a921e888b50f06" HandleID="k8s-pod-network.ec732d11f2bf14b409433f41dbbf61dbba0fe798526cc79e12a921e888b50f06" Workload="ci--4081--3--3--n--556bea0d1e-k8s-calico--kube--controllers--6b98696cc--8lr55-eth0" May 17 00:29:31.126812 containerd[1620]: 2025-05-17 00:29:31.111 [INFO][5461] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ec732d11f2bf14b409433f41dbbf61dbba0fe798526cc79e12a921e888b50f06" HandleID="k8s-pod-network.ec732d11f2bf14b409433f41dbbf61dbba0fe798526cc79e12a921e888b50f06" Workload="ci--4081--3--3--n--556bea0d1e-k8s-calico--kube--controllers--6b98696cc--8lr55-eth0" May 17 00:29:31.126812 containerd[1620]: 2025-05-17 00:29:31.113 [INFO][5461] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:29:31.126812 containerd[1620]: 2025-05-17 00:29:31.119 [INFO][5454] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ec732d11f2bf14b409433f41dbbf61dbba0fe798526cc79e12a921e888b50f06" May 17 00:29:31.135936 containerd[1620]: time="2025-05-17T00:29:31.126992158Z" level=info msg="TearDown network for sandbox \"ec732d11f2bf14b409433f41dbbf61dbba0fe798526cc79e12a921e888b50f06\" successfully" May 17 00:29:31.135936 containerd[1620]: time="2025-05-17T00:29:31.127013768Z" level=info msg="StopPodSandbox for \"ec732d11f2bf14b409433f41dbbf61dbba0fe798526cc79e12a921e888b50f06\" returns successfully" May 17 00:29:31.172647 containerd[1620]: time="2025-05-17T00:29:31.172608996Z" level=info msg="RemovePodSandbox for \"ec732d11f2bf14b409433f41dbbf61dbba0fe798526cc79e12a921e888b50f06\"" May 17 00:29:31.175392 containerd[1620]: time="2025-05-17T00:29:31.175366142Z" level=info msg="Forcibly stopping sandbox \"ec732d11f2bf14b409433f41dbbf61dbba0fe798526cc79e12a921e888b50f06\"" May 17 00:29:31.251872 containerd[1620]: 2025-05-17 00:29:31.215 [WARNING][5475] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ec732d11f2bf14b409433f41dbbf61dbba0fe798526cc79e12a921e888b50f06" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--556bea0d1e-k8s-calico--kube--controllers--6b98696cc--8lr55-eth0", GenerateName:"calico-kube-controllers-6b98696cc-", Namespace:"calico-system", SelfLink:"", UID:"0cc79bb9-7a42-43e7-a121-7181de14309d", ResourceVersion:"1004", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 28, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6b98696cc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-556bea0d1e", ContainerID:"00802f22242d0b6835f3c4d3ea09c626019189767b275913db895be51a551da5", Pod:"calico-kube-controllers-6b98696cc-8lr55", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.15.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid6fb3df06e9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:29:31.251872 containerd[1620]: 2025-05-17 00:29:31.216 [INFO][5475] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ec732d11f2bf14b409433f41dbbf61dbba0fe798526cc79e12a921e888b50f06" May 17 00:29:31.251872 containerd[1620]: 2025-05-17 00:29:31.216 [INFO][5475] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ec732d11f2bf14b409433f41dbbf61dbba0fe798526cc79e12a921e888b50f06" iface="eth0" netns="" May 17 00:29:31.251872 containerd[1620]: 2025-05-17 00:29:31.216 [INFO][5475] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ec732d11f2bf14b409433f41dbbf61dbba0fe798526cc79e12a921e888b50f06" May 17 00:29:31.251872 containerd[1620]: 2025-05-17 00:29:31.216 [INFO][5475] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ec732d11f2bf14b409433f41dbbf61dbba0fe798526cc79e12a921e888b50f06" May 17 00:29:31.251872 containerd[1620]: 2025-05-17 00:29:31.236 [INFO][5483] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ec732d11f2bf14b409433f41dbbf61dbba0fe798526cc79e12a921e888b50f06" HandleID="k8s-pod-network.ec732d11f2bf14b409433f41dbbf61dbba0fe798526cc79e12a921e888b50f06" Workload="ci--4081--3--3--n--556bea0d1e-k8s-calico--kube--controllers--6b98696cc--8lr55-eth0" May 17 00:29:31.251872 containerd[1620]: 2025-05-17 00:29:31.236 [INFO][5483] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:29:31.251872 containerd[1620]: 2025-05-17 00:29:31.236 [INFO][5483] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:29:31.251872 containerd[1620]: 2025-05-17 00:29:31.241 [WARNING][5483] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ec732d11f2bf14b409433f41dbbf61dbba0fe798526cc79e12a921e888b50f06" HandleID="k8s-pod-network.ec732d11f2bf14b409433f41dbbf61dbba0fe798526cc79e12a921e888b50f06" Workload="ci--4081--3--3--n--556bea0d1e-k8s-calico--kube--controllers--6b98696cc--8lr55-eth0" May 17 00:29:31.251872 containerd[1620]: 2025-05-17 00:29:31.241 [INFO][5483] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ec732d11f2bf14b409433f41dbbf61dbba0fe798526cc79e12a921e888b50f06" HandleID="k8s-pod-network.ec732d11f2bf14b409433f41dbbf61dbba0fe798526cc79e12a921e888b50f06" Workload="ci--4081--3--3--n--556bea0d1e-k8s-calico--kube--controllers--6b98696cc--8lr55-eth0" May 17 00:29:31.251872 containerd[1620]: 2025-05-17 00:29:31.243 [INFO][5483] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:29:31.251872 containerd[1620]: 2025-05-17 00:29:31.246 [INFO][5475] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ec732d11f2bf14b409433f41dbbf61dbba0fe798526cc79e12a921e888b50f06" May 17 00:29:31.255193 containerd[1620]: time="2025-05-17T00:29:31.251907430Z" level=info msg="TearDown network for sandbox \"ec732d11f2bf14b409433f41dbbf61dbba0fe798526cc79e12a921e888b50f06\" successfully" May 17 00:29:31.266450 containerd[1620]: time="2025-05-17T00:29:31.266410303Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ec732d11f2bf14b409433f41dbbf61dbba0fe798526cc79e12a921e888b50f06\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:29:31.272014 containerd[1620]: time="2025-05-17T00:29:31.271985763Z" level=info msg="RemovePodSandbox \"ec732d11f2bf14b409433f41dbbf61dbba0fe798526cc79e12a921e888b50f06\" returns successfully" May 17 00:29:31.282189 containerd[1620]: time="2025-05-17T00:29:31.282161410Z" level=info msg="StopPodSandbox for \"70ec868d271dd1be74cf28e30e6cd353963f0ab338e242d0f7dcd58ec1993bf9\"" May 17 00:29:31.301636 systemd-journald[1178]: Under memory pressure, flushing caches. May 17 00:29:31.301339 systemd-resolved[1509]: Under memory pressure, flushing caches. May 17 00:29:31.301371 systemd-resolved[1509]: Flushed all caches. May 17 00:29:31.341058 containerd[1620]: 2025-05-17 00:29:31.312 [WARNING][5497] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="70ec868d271dd1be74cf28e30e6cd353963f0ab338e242d0f7dcd58ec1993bf9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--556bea0d1e-k8s-calico--apiserver--65b5bd8c4b--82zlb-eth0", GenerateName:"calico-apiserver-65b5bd8c4b-", Namespace:"calico-apiserver", SelfLink:"", UID:"0af51042-daa9-4020-979a-c14dc1a38805", ResourceVersion:"987", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 28, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"65b5bd8c4b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-556bea0d1e", ContainerID:"6d0ed38d28b1c5e90843b8d74415a0cd25936fa24e711be030722178a77ad4a4", Pod:"calico-apiserver-65b5bd8c4b-82zlb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.15.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calieb2b7c53c50", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:29:31.341058 containerd[1620]: 2025-05-17 00:29:31.312 [INFO][5497] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="70ec868d271dd1be74cf28e30e6cd353963f0ab338e242d0f7dcd58ec1993bf9" May 17 00:29:31.341058 containerd[1620]: 2025-05-17 00:29:31.312 [INFO][5497] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="70ec868d271dd1be74cf28e30e6cd353963f0ab338e242d0f7dcd58ec1993bf9" iface="eth0" netns="" May 17 00:29:31.341058 containerd[1620]: 2025-05-17 00:29:31.312 [INFO][5497] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="70ec868d271dd1be74cf28e30e6cd353963f0ab338e242d0f7dcd58ec1993bf9" May 17 00:29:31.341058 containerd[1620]: 2025-05-17 00:29:31.312 [INFO][5497] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="70ec868d271dd1be74cf28e30e6cd353963f0ab338e242d0f7dcd58ec1993bf9" May 17 00:29:31.341058 containerd[1620]: 2025-05-17 00:29:31.331 [INFO][5504] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="70ec868d271dd1be74cf28e30e6cd353963f0ab338e242d0f7dcd58ec1993bf9" HandleID="k8s-pod-network.70ec868d271dd1be74cf28e30e6cd353963f0ab338e242d0f7dcd58ec1993bf9" Workload="ci--4081--3--3--n--556bea0d1e-k8s-calico--apiserver--65b5bd8c4b--82zlb-eth0" May 17 00:29:31.341058 containerd[1620]: 2025-05-17 00:29:31.331 [INFO][5504] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:29:31.341058 containerd[1620]: 2025-05-17 00:29:31.331 [INFO][5504] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:29:31.341058 containerd[1620]: 2025-05-17 00:29:31.336 [WARNING][5504] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="70ec868d271dd1be74cf28e30e6cd353963f0ab338e242d0f7dcd58ec1993bf9" HandleID="k8s-pod-network.70ec868d271dd1be74cf28e30e6cd353963f0ab338e242d0f7dcd58ec1993bf9" Workload="ci--4081--3--3--n--556bea0d1e-k8s-calico--apiserver--65b5bd8c4b--82zlb-eth0" May 17 00:29:31.341058 containerd[1620]: 2025-05-17 00:29:31.336 [INFO][5504] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="70ec868d271dd1be74cf28e30e6cd353963f0ab338e242d0f7dcd58ec1993bf9" HandleID="k8s-pod-network.70ec868d271dd1be74cf28e30e6cd353963f0ab338e242d0f7dcd58ec1993bf9" Workload="ci--4081--3--3--n--556bea0d1e-k8s-calico--apiserver--65b5bd8c4b--82zlb-eth0" May 17 00:29:31.341058 containerd[1620]: 2025-05-17 00:29:31.337 [INFO][5504] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:29:31.341058 containerd[1620]: 2025-05-17 00:29:31.339 [INFO][5497] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="70ec868d271dd1be74cf28e30e6cd353963f0ab338e242d0f7dcd58ec1993bf9" May 17 00:29:31.342238 containerd[1620]: time="2025-05-17T00:29:31.341332931Z" level=info msg="TearDown network for sandbox \"70ec868d271dd1be74cf28e30e6cd353963f0ab338e242d0f7dcd58ec1993bf9\" successfully" May 17 00:29:31.342238 containerd[1620]: time="2025-05-17T00:29:31.341355884Z" level=info msg="StopPodSandbox for \"70ec868d271dd1be74cf28e30e6cd353963f0ab338e242d0f7dcd58ec1993bf9\" returns successfully" May 17 00:29:31.342238 containerd[1620]: time="2025-05-17T00:29:31.341767917Z" level=info msg="RemovePodSandbox for \"70ec868d271dd1be74cf28e30e6cd353963f0ab338e242d0f7dcd58ec1993bf9\"" May 17 00:29:31.342238 containerd[1620]: time="2025-05-17T00:29:31.341790038Z" level=info msg="Forcibly stopping sandbox \"70ec868d271dd1be74cf28e30e6cd353963f0ab338e242d0f7dcd58ec1993bf9\"" May 17 00:29:31.393515 containerd[1620]: 2025-05-17 00:29:31.366 [WARNING][5518] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="70ec868d271dd1be74cf28e30e6cd353963f0ab338e242d0f7dcd58ec1993bf9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--556bea0d1e-k8s-calico--apiserver--65b5bd8c4b--82zlb-eth0", GenerateName:"calico-apiserver-65b5bd8c4b-", Namespace:"calico-apiserver", SelfLink:"", UID:"0af51042-daa9-4020-979a-c14dc1a38805", ResourceVersion:"987", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 28, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"65b5bd8c4b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-556bea0d1e", ContainerID:"6d0ed38d28b1c5e90843b8d74415a0cd25936fa24e711be030722178a77ad4a4", Pod:"calico-apiserver-65b5bd8c4b-82zlb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.15.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calieb2b7c53c50", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:29:31.393515 containerd[1620]: 2025-05-17 00:29:31.367 [INFO][5518] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="70ec868d271dd1be74cf28e30e6cd353963f0ab338e242d0f7dcd58ec1993bf9" May 17 00:29:31.393515 containerd[1620]: 2025-05-17 00:29:31.367 [INFO][5518] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="70ec868d271dd1be74cf28e30e6cd353963f0ab338e242d0f7dcd58ec1993bf9" iface="eth0" netns="" May 17 00:29:31.393515 containerd[1620]: 2025-05-17 00:29:31.367 [INFO][5518] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="70ec868d271dd1be74cf28e30e6cd353963f0ab338e242d0f7dcd58ec1993bf9" May 17 00:29:31.393515 containerd[1620]: 2025-05-17 00:29:31.367 [INFO][5518] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="70ec868d271dd1be74cf28e30e6cd353963f0ab338e242d0f7dcd58ec1993bf9" May 17 00:29:31.393515 containerd[1620]: 2025-05-17 00:29:31.384 [INFO][5525] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="70ec868d271dd1be74cf28e30e6cd353963f0ab338e242d0f7dcd58ec1993bf9" HandleID="k8s-pod-network.70ec868d271dd1be74cf28e30e6cd353963f0ab338e242d0f7dcd58ec1993bf9" Workload="ci--4081--3--3--n--556bea0d1e-k8s-calico--apiserver--65b5bd8c4b--82zlb-eth0" May 17 00:29:31.393515 containerd[1620]: 2025-05-17 00:29:31.384 [INFO][5525] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:29:31.393515 containerd[1620]: 2025-05-17 00:29:31.384 [INFO][5525] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:29:31.393515 containerd[1620]: 2025-05-17 00:29:31.389 [WARNING][5525] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="70ec868d271dd1be74cf28e30e6cd353963f0ab338e242d0f7dcd58ec1993bf9" HandleID="k8s-pod-network.70ec868d271dd1be74cf28e30e6cd353963f0ab338e242d0f7dcd58ec1993bf9" Workload="ci--4081--3--3--n--556bea0d1e-k8s-calico--apiserver--65b5bd8c4b--82zlb-eth0" May 17 00:29:31.393515 containerd[1620]: 2025-05-17 00:29:31.389 [INFO][5525] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="70ec868d271dd1be74cf28e30e6cd353963f0ab338e242d0f7dcd58ec1993bf9" HandleID="k8s-pod-network.70ec868d271dd1be74cf28e30e6cd353963f0ab338e242d0f7dcd58ec1993bf9" Workload="ci--4081--3--3--n--556bea0d1e-k8s-calico--apiserver--65b5bd8c4b--82zlb-eth0" May 17 00:29:31.393515 containerd[1620]: 2025-05-17 00:29:31.390 [INFO][5525] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:29:31.393515 containerd[1620]: 2025-05-17 00:29:31.391 [INFO][5518] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="70ec868d271dd1be74cf28e30e6cd353963f0ab338e242d0f7dcd58ec1993bf9" May 17 00:29:31.393515 containerd[1620]: time="2025-05-17T00:29:31.393453461Z" level=info msg="TearDown network for sandbox \"70ec868d271dd1be74cf28e30e6cd353963f0ab338e242d0f7dcd58ec1993bf9\" successfully" May 17 00:29:31.397177 containerd[1620]: time="2025-05-17T00:29:31.397148260Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"70ec868d271dd1be74cf28e30e6cd353963f0ab338e242d0f7dcd58ec1993bf9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:29:31.397256 containerd[1620]: time="2025-05-17T00:29:31.397208042Z" level=info msg="RemovePodSandbox \"70ec868d271dd1be74cf28e30e6cd353963f0ab338e242d0f7dcd58ec1993bf9\" returns successfully" May 17 00:29:31.397625 containerd[1620]: time="2025-05-17T00:29:31.397608233Z" level=info msg="StopPodSandbox for \"ed52c75a5f2f17f0b412f707461e80af7ef123c353e646a573b930f578a757ee\"" May 17 00:29:31.456004 containerd[1620]: 2025-05-17 00:29:31.423 [WARNING][5540] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ed52c75a5f2f17f0b412f707461e80af7ef123c353e646a573b930f578a757ee" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--556bea0d1e-k8s-csi--node--driver--jv7dj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9def7614-7d88-42d1-ba99-91a4539e16ec", ResourceVersion:"1036", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 28, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"68bf44dd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-556bea0d1e", ContainerID:"5bcf70d4f950c80a6cd90f3dc99d75d569def02e4915496254e95f5335e22214", Pod:"csi-node-driver-jv7dj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.15.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif116710b53b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:29:31.456004 containerd[1620]: 2025-05-17 00:29:31.424 [INFO][5540] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ed52c75a5f2f17f0b412f707461e80af7ef123c353e646a573b930f578a757ee" May 17 00:29:31.456004 containerd[1620]: 2025-05-17 00:29:31.424 [INFO][5540] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ed52c75a5f2f17f0b412f707461e80af7ef123c353e646a573b930f578a757ee" iface="eth0" netns="" May 17 00:29:31.456004 containerd[1620]: 2025-05-17 00:29:31.424 [INFO][5540] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ed52c75a5f2f17f0b412f707461e80af7ef123c353e646a573b930f578a757ee" May 17 00:29:31.456004 containerd[1620]: 2025-05-17 00:29:31.424 [INFO][5540] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ed52c75a5f2f17f0b412f707461e80af7ef123c353e646a573b930f578a757ee" May 17 00:29:31.456004 containerd[1620]: 2025-05-17 00:29:31.443 [INFO][5548] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ed52c75a5f2f17f0b412f707461e80af7ef123c353e646a573b930f578a757ee" HandleID="k8s-pod-network.ed52c75a5f2f17f0b412f707461e80af7ef123c353e646a573b930f578a757ee" Workload="ci--4081--3--3--n--556bea0d1e-k8s-csi--node--driver--jv7dj-eth0" May 17 00:29:31.456004 containerd[1620]: 2025-05-17 00:29:31.443 [INFO][5548] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:29:31.456004 containerd[1620]: 2025-05-17 00:29:31.443 [INFO][5548] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:29:31.456004 containerd[1620]: 2025-05-17 00:29:31.450 [WARNING][5548] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ed52c75a5f2f17f0b412f707461e80af7ef123c353e646a573b930f578a757ee" HandleID="k8s-pod-network.ed52c75a5f2f17f0b412f707461e80af7ef123c353e646a573b930f578a757ee" Workload="ci--4081--3--3--n--556bea0d1e-k8s-csi--node--driver--jv7dj-eth0" May 17 00:29:31.456004 containerd[1620]: 2025-05-17 00:29:31.450 [INFO][5548] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ed52c75a5f2f17f0b412f707461e80af7ef123c353e646a573b930f578a757ee" HandleID="k8s-pod-network.ed52c75a5f2f17f0b412f707461e80af7ef123c353e646a573b930f578a757ee" Workload="ci--4081--3--3--n--556bea0d1e-k8s-csi--node--driver--jv7dj-eth0" May 17 00:29:31.456004 containerd[1620]: 2025-05-17 00:29:31.451 [INFO][5548] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:29:31.456004 containerd[1620]: 2025-05-17 00:29:31.453 [INFO][5540] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ed52c75a5f2f17f0b412f707461e80af7ef123c353e646a573b930f578a757ee" May 17 00:29:31.456444 containerd[1620]: time="2025-05-17T00:29:31.456079698Z" level=info msg="TearDown network for sandbox \"ed52c75a5f2f17f0b412f707461e80af7ef123c353e646a573b930f578a757ee\" successfully" May 17 00:29:31.456444 containerd[1620]: time="2025-05-17T00:29:31.456100638Z" level=info msg="StopPodSandbox for \"ed52c75a5f2f17f0b412f707461e80af7ef123c353e646a573b930f578a757ee\" returns successfully" May 17 00:29:31.456931 containerd[1620]: time="2025-05-17T00:29:31.456880753Z" level=info msg="RemovePodSandbox for \"ed52c75a5f2f17f0b412f707461e80af7ef123c353e646a573b930f578a757ee\"" May 17 00:29:31.456979 containerd[1620]: time="2025-05-17T00:29:31.456904177Z" level=info msg="Forcibly stopping sandbox \"ed52c75a5f2f17f0b412f707461e80af7ef123c353e646a573b930f578a757ee\"" May 17 00:29:31.511549 containerd[1620]: 2025-05-17 00:29:31.483 [WARNING][5562] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ed52c75a5f2f17f0b412f707461e80af7ef123c353e646a573b930f578a757ee" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--556bea0d1e-k8s-csi--node--driver--jv7dj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9def7614-7d88-42d1-ba99-91a4539e16ec", ResourceVersion:"1036", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 28, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"68bf44dd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-556bea0d1e", ContainerID:"5bcf70d4f950c80a6cd90f3dc99d75d569def02e4915496254e95f5335e22214", Pod:"csi-node-driver-jv7dj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.15.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif116710b53b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:29:31.511549 containerd[1620]: 2025-05-17 00:29:31.483 [INFO][5562] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ed52c75a5f2f17f0b412f707461e80af7ef123c353e646a573b930f578a757ee" May 17 00:29:31.511549 containerd[1620]: 2025-05-17 00:29:31.483 [INFO][5562] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ed52c75a5f2f17f0b412f707461e80af7ef123c353e646a573b930f578a757ee" iface="eth0" netns="" May 17 00:29:31.511549 containerd[1620]: 2025-05-17 00:29:31.484 [INFO][5562] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ed52c75a5f2f17f0b412f707461e80af7ef123c353e646a573b930f578a757ee" May 17 00:29:31.511549 containerd[1620]: 2025-05-17 00:29:31.484 [INFO][5562] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ed52c75a5f2f17f0b412f707461e80af7ef123c353e646a573b930f578a757ee" May 17 00:29:31.511549 containerd[1620]: 2025-05-17 00:29:31.502 [INFO][5570] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ed52c75a5f2f17f0b412f707461e80af7ef123c353e646a573b930f578a757ee" HandleID="k8s-pod-network.ed52c75a5f2f17f0b412f707461e80af7ef123c353e646a573b930f578a757ee" Workload="ci--4081--3--3--n--556bea0d1e-k8s-csi--node--driver--jv7dj-eth0" May 17 00:29:31.511549 containerd[1620]: 2025-05-17 00:29:31.502 [INFO][5570] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:29:31.511549 containerd[1620]: 2025-05-17 00:29:31.502 [INFO][5570] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:29:31.511549 containerd[1620]: 2025-05-17 00:29:31.506 [WARNING][5570] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ed52c75a5f2f17f0b412f707461e80af7ef123c353e646a573b930f578a757ee" HandleID="k8s-pod-network.ed52c75a5f2f17f0b412f707461e80af7ef123c353e646a573b930f578a757ee" Workload="ci--4081--3--3--n--556bea0d1e-k8s-csi--node--driver--jv7dj-eth0" May 17 00:29:31.511549 containerd[1620]: 2025-05-17 00:29:31.506 [INFO][5570] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ed52c75a5f2f17f0b412f707461e80af7ef123c353e646a573b930f578a757ee" HandleID="k8s-pod-network.ed52c75a5f2f17f0b412f707461e80af7ef123c353e646a573b930f578a757ee" Workload="ci--4081--3--3--n--556bea0d1e-k8s-csi--node--driver--jv7dj-eth0" May 17 00:29:31.511549 containerd[1620]: 2025-05-17 00:29:31.508 [INFO][5570] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:29:31.511549 containerd[1620]: 2025-05-17 00:29:31.509 [INFO][5562] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ed52c75a5f2f17f0b412f707461e80af7ef123c353e646a573b930f578a757ee" May 17 00:29:31.512001 containerd[1620]: time="2025-05-17T00:29:31.511561171Z" level=info msg="TearDown network for sandbox \"ed52c75a5f2f17f0b412f707461e80af7ef123c353e646a573b930f578a757ee\" successfully" May 17 00:29:31.514826 containerd[1620]: time="2025-05-17T00:29:31.514800242Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ed52c75a5f2f17f0b412f707461e80af7ef123c353e646a573b930f578a757ee\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:29:31.514880 containerd[1620]: time="2025-05-17T00:29:31.514845397Z" level=info msg="RemovePodSandbox \"ed52c75a5f2f17f0b412f707461e80af7ef123c353e646a573b930f578a757ee\" returns successfully" May 17 00:29:31.515325 containerd[1620]: time="2025-05-17T00:29:31.515299370Z" level=info msg="StopPodSandbox for \"6d5b6a71462253a75e687c6f7e1707e9bc9a8b91c3f016007cd47ed84dea64b4\"" May 17 00:29:31.567903 containerd[1620]: 2025-05-17 00:29:31.541 [WARNING][5584] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6d5b6a71462253a75e687c6f7e1707e9bc9a8b91c3f016007cd47ed84dea64b4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--556bea0d1e-k8s-coredns--7c65d6cfc9--ppcxd-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"2834e46b-4ada-426a-b1e7-b513f359ad04", ResourceVersion:"947", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 28, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-556bea0d1e", ContainerID:"8902633197f01b25657dbd90992e4ba79dfb64a7b45cf717991eb6044632ae15", Pod:"coredns-7c65d6cfc9-ppcxd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.15.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califf3e24f3903", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:29:31.567903 containerd[1620]: 2025-05-17 00:29:31.541 [INFO][5584] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6d5b6a71462253a75e687c6f7e1707e9bc9a8b91c3f016007cd47ed84dea64b4" May 17 00:29:31.567903 containerd[1620]: 2025-05-17 00:29:31.541 [INFO][5584] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6d5b6a71462253a75e687c6f7e1707e9bc9a8b91c3f016007cd47ed84dea64b4" iface="eth0" netns="" May 17 00:29:31.567903 containerd[1620]: 2025-05-17 00:29:31.541 [INFO][5584] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6d5b6a71462253a75e687c6f7e1707e9bc9a8b91c3f016007cd47ed84dea64b4" May 17 00:29:31.567903 containerd[1620]: 2025-05-17 00:29:31.541 [INFO][5584] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6d5b6a71462253a75e687c6f7e1707e9bc9a8b91c3f016007cd47ed84dea64b4" May 17 00:29:31.567903 containerd[1620]: 2025-05-17 00:29:31.558 [INFO][5592] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6d5b6a71462253a75e687c6f7e1707e9bc9a8b91c3f016007cd47ed84dea64b4" HandleID="k8s-pod-network.6d5b6a71462253a75e687c6f7e1707e9bc9a8b91c3f016007cd47ed84dea64b4" Workload="ci--4081--3--3--n--556bea0d1e-k8s-coredns--7c65d6cfc9--ppcxd-eth0" May 17 00:29:31.567903 containerd[1620]: 2025-05-17 00:29:31.559 [INFO][5592] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:29:31.567903 containerd[1620]: 2025-05-17 00:29:31.559 [INFO][5592] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:29:31.567903 containerd[1620]: 2025-05-17 00:29:31.563 [WARNING][5592] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6d5b6a71462253a75e687c6f7e1707e9bc9a8b91c3f016007cd47ed84dea64b4" HandleID="k8s-pod-network.6d5b6a71462253a75e687c6f7e1707e9bc9a8b91c3f016007cd47ed84dea64b4" Workload="ci--4081--3--3--n--556bea0d1e-k8s-coredns--7c65d6cfc9--ppcxd-eth0" May 17 00:29:31.567903 containerd[1620]: 2025-05-17 00:29:31.563 [INFO][5592] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6d5b6a71462253a75e687c6f7e1707e9bc9a8b91c3f016007cd47ed84dea64b4" HandleID="k8s-pod-network.6d5b6a71462253a75e687c6f7e1707e9bc9a8b91c3f016007cd47ed84dea64b4" Workload="ci--4081--3--3--n--556bea0d1e-k8s-coredns--7c65d6cfc9--ppcxd-eth0" May 17 00:29:31.567903 containerd[1620]: 2025-05-17 00:29:31.564 [INFO][5592] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:29:31.567903 containerd[1620]: 2025-05-17 00:29:31.566 [INFO][5584] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6d5b6a71462253a75e687c6f7e1707e9bc9a8b91c3f016007cd47ed84dea64b4" May 17 00:29:31.569225 containerd[1620]: time="2025-05-17T00:29:31.567951358Z" level=info msg="TearDown network for sandbox \"6d5b6a71462253a75e687c6f7e1707e9bc9a8b91c3f016007cd47ed84dea64b4\" successfully" May 17 00:29:31.569225 containerd[1620]: time="2025-05-17T00:29:31.567972918Z" level=info msg="StopPodSandbox for \"6d5b6a71462253a75e687c6f7e1707e9bc9a8b91c3f016007cd47ed84dea64b4\" returns successfully" May 17 00:29:31.569225 containerd[1620]: time="2025-05-17T00:29:31.568491994Z" level=info msg="RemovePodSandbox for \"6d5b6a71462253a75e687c6f7e1707e9bc9a8b91c3f016007cd47ed84dea64b4\"" May 17 00:29:31.569225 containerd[1620]: time="2025-05-17T00:29:31.568514105Z" level=info msg="Forcibly stopping sandbox \"6d5b6a71462253a75e687c6f7e1707e9bc9a8b91c3f016007cd47ed84dea64b4\"" May 17 00:29:31.627539 containerd[1620]: 2025-05-17 00:29:31.597 [WARNING][5607] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6d5b6a71462253a75e687c6f7e1707e9bc9a8b91c3f016007cd47ed84dea64b4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--556bea0d1e-k8s-coredns--7c65d6cfc9--ppcxd-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"2834e46b-4ada-426a-b1e7-b513f359ad04", ResourceVersion:"947", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 28, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-556bea0d1e", ContainerID:"8902633197f01b25657dbd90992e4ba79dfb64a7b45cf717991eb6044632ae15", Pod:"coredns-7c65d6cfc9-ppcxd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.15.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califf3e24f3903", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:29:31.627539 containerd[1620]: 2025-05-17 00:29:31.597 [INFO][5607] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6d5b6a71462253a75e687c6f7e1707e9bc9a8b91c3f016007cd47ed84dea64b4" May 17 00:29:31.627539 containerd[1620]: 2025-05-17 00:29:31.597 [INFO][5607] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6d5b6a71462253a75e687c6f7e1707e9bc9a8b91c3f016007cd47ed84dea64b4" iface="eth0" netns="" May 17 00:29:31.627539 containerd[1620]: 2025-05-17 00:29:31.597 [INFO][5607] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6d5b6a71462253a75e687c6f7e1707e9bc9a8b91c3f016007cd47ed84dea64b4" May 17 00:29:31.627539 containerd[1620]: 2025-05-17 00:29:31.598 [INFO][5607] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6d5b6a71462253a75e687c6f7e1707e9bc9a8b91c3f016007cd47ed84dea64b4" May 17 00:29:31.627539 containerd[1620]: 2025-05-17 00:29:31.616 [INFO][5614] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6d5b6a71462253a75e687c6f7e1707e9bc9a8b91c3f016007cd47ed84dea64b4" HandleID="k8s-pod-network.6d5b6a71462253a75e687c6f7e1707e9bc9a8b91c3f016007cd47ed84dea64b4" Workload="ci--4081--3--3--n--556bea0d1e-k8s-coredns--7c65d6cfc9--ppcxd-eth0" May 17 00:29:31.627539 containerd[1620]: 2025-05-17 00:29:31.616 [INFO][5614] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:29:31.627539 containerd[1620]: 2025-05-17 00:29:31.616 [INFO][5614] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:29:31.627539 containerd[1620]: 2025-05-17 00:29:31.621 [WARNING][5614] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6d5b6a71462253a75e687c6f7e1707e9bc9a8b91c3f016007cd47ed84dea64b4" HandleID="k8s-pod-network.6d5b6a71462253a75e687c6f7e1707e9bc9a8b91c3f016007cd47ed84dea64b4" Workload="ci--4081--3--3--n--556bea0d1e-k8s-coredns--7c65d6cfc9--ppcxd-eth0" May 17 00:29:31.627539 containerd[1620]: 2025-05-17 00:29:31.621 [INFO][5614] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6d5b6a71462253a75e687c6f7e1707e9bc9a8b91c3f016007cd47ed84dea64b4" HandleID="k8s-pod-network.6d5b6a71462253a75e687c6f7e1707e9bc9a8b91c3f016007cd47ed84dea64b4" Workload="ci--4081--3--3--n--556bea0d1e-k8s-coredns--7c65d6cfc9--ppcxd-eth0" May 17 00:29:31.627539 containerd[1620]: 2025-05-17 00:29:31.623 [INFO][5614] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:29:31.627539 containerd[1620]: 2025-05-17 00:29:31.625 [INFO][5607] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6d5b6a71462253a75e687c6f7e1707e9bc9a8b91c3f016007cd47ed84dea64b4" May 17 00:29:31.629644 containerd[1620]: time="2025-05-17T00:29:31.627930195Z" level=info msg="TearDown network for sandbox \"6d5b6a71462253a75e687c6f7e1707e9bc9a8b91c3f016007cd47ed84dea64b4\" successfully" May 17 00:29:31.632071 containerd[1620]: time="2025-05-17T00:29:31.632031236Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6d5b6a71462253a75e687c6f7e1707e9bc9a8b91c3f016007cd47ed84dea64b4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:29:31.632139 containerd[1620]: time="2025-05-17T00:29:31.632098381Z" level=info msg="RemovePodSandbox \"6d5b6a71462253a75e687c6f7e1707e9bc9a8b91c3f016007cd47ed84dea64b4\" returns successfully" May 17 00:29:31.632621 containerd[1620]: time="2025-05-17T00:29:31.632592339Z" level=info msg="StopPodSandbox for \"28b1d2f0511a38a050bed534b4b457408e1e18f400f08c7a5e18726479603536\"" May 17 00:29:31.694172 containerd[1620]: 2025-05-17 00:29:31.664 [WARNING][5629] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="28b1d2f0511a38a050bed534b4b457408e1e18f400f08c7a5e18726479603536" WorkloadEndpoint="ci--4081--3--3--n--556bea0d1e-k8s-whisker--5c9dc4c697--5tjgw-eth0" May 17 00:29:31.694172 containerd[1620]: 2025-05-17 00:29:31.664 [INFO][5629] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="28b1d2f0511a38a050bed534b4b457408e1e18f400f08c7a5e18726479603536" May 17 00:29:31.694172 containerd[1620]: 2025-05-17 00:29:31.664 [INFO][5629] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="28b1d2f0511a38a050bed534b4b457408e1e18f400f08c7a5e18726479603536" iface="eth0" netns="" May 17 00:29:31.694172 containerd[1620]: 2025-05-17 00:29:31.664 [INFO][5629] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="28b1d2f0511a38a050bed534b4b457408e1e18f400f08c7a5e18726479603536" May 17 00:29:31.694172 containerd[1620]: 2025-05-17 00:29:31.664 [INFO][5629] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="28b1d2f0511a38a050bed534b4b457408e1e18f400f08c7a5e18726479603536" May 17 00:29:31.694172 containerd[1620]: 2025-05-17 00:29:31.684 [INFO][5636] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="28b1d2f0511a38a050bed534b4b457408e1e18f400f08c7a5e18726479603536" HandleID="k8s-pod-network.28b1d2f0511a38a050bed534b4b457408e1e18f400f08c7a5e18726479603536" Workload="ci--4081--3--3--n--556bea0d1e-k8s-whisker--5c9dc4c697--5tjgw-eth0" May 17 00:29:31.694172 containerd[1620]: 2025-05-17 00:29:31.684 [INFO][5636] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:29:31.694172 containerd[1620]: 2025-05-17 00:29:31.685 [INFO][5636] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:29:31.694172 containerd[1620]: 2025-05-17 00:29:31.689 [WARNING][5636] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="28b1d2f0511a38a050bed534b4b457408e1e18f400f08c7a5e18726479603536" HandleID="k8s-pod-network.28b1d2f0511a38a050bed534b4b457408e1e18f400f08c7a5e18726479603536" Workload="ci--4081--3--3--n--556bea0d1e-k8s-whisker--5c9dc4c697--5tjgw-eth0" May 17 00:29:31.694172 containerd[1620]: 2025-05-17 00:29:31.689 [INFO][5636] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="28b1d2f0511a38a050bed534b4b457408e1e18f400f08c7a5e18726479603536" HandleID="k8s-pod-network.28b1d2f0511a38a050bed534b4b457408e1e18f400f08c7a5e18726479603536" Workload="ci--4081--3--3--n--556bea0d1e-k8s-whisker--5c9dc4c697--5tjgw-eth0" May 17 00:29:31.694172 containerd[1620]: 2025-05-17 00:29:31.690 [INFO][5636] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:29:31.694172 containerd[1620]: 2025-05-17 00:29:31.692 [INFO][5629] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="28b1d2f0511a38a050bed534b4b457408e1e18f400f08c7a5e18726479603536" May 17 00:29:31.694172 containerd[1620]: time="2025-05-17T00:29:31.694018904Z" level=info msg="TearDown network for sandbox \"28b1d2f0511a38a050bed534b4b457408e1e18f400f08c7a5e18726479603536\" successfully" May 17 00:29:31.694172 containerd[1620]: time="2025-05-17T00:29:31.694057317Z" level=info msg="StopPodSandbox for \"28b1d2f0511a38a050bed534b4b457408e1e18f400f08c7a5e18726479603536\" returns successfully" May 17 00:29:31.695706 containerd[1620]: time="2025-05-17T00:29:31.694455625Z" level=info msg="RemovePodSandbox for \"28b1d2f0511a38a050bed534b4b457408e1e18f400f08c7a5e18726479603536\"" May 17 00:29:31.695706 containerd[1620]: time="2025-05-17T00:29:31.694477305Z" level=info msg="Forcibly stopping sandbox \"28b1d2f0511a38a050bed534b4b457408e1e18f400f08c7a5e18726479603536\"" May 17 00:29:31.758732 containerd[1620]: 2025-05-17 00:29:31.720 [WARNING][5650] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="28b1d2f0511a38a050bed534b4b457408e1e18f400f08c7a5e18726479603536" WorkloadEndpoint="ci--4081--3--3--n--556bea0d1e-k8s-whisker--5c9dc4c697--5tjgw-eth0" May 17 00:29:31.758732 containerd[1620]: 2025-05-17 00:29:31.721 [INFO][5650] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="28b1d2f0511a38a050bed534b4b457408e1e18f400f08c7a5e18726479603536" May 17 00:29:31.758732 containerd[1620]: 2025-05-17 00:29:31.721 [INFO][5650] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="28b1d2f0511a38a050bed534b4b457408e1e18f400f08c7a5e18726479603536" iface="eth0" netns="" May 17 00:29:31.758732 containerd[1620]: 2025-05-17 00:29:31.721 [INFO][5650] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="28b1d2f0511a38a050bed534b4b457408e1e18f400f08c7a5e18726479603536" May 17 00:29:31.758732 containerd[1620]: 2025-05-17 00:29:31.721 [INFO][5650] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="28b1d2f0511a38a050bed534b4b457408e1e18f400f08c7a5e18726479603536" May 17 00:29:31.758732 containerd[1620]: 2025-05-17 00:29:31.747 [INFO][5657] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="28b1d2f0511a38a050bed534b4b457408e1e18f400f08c7a5e18726479603536" HandleID="k8s-pod-network.28b1d2f0511a38a050bed534b4b457408e1e18f400f08c7a5e18726479603536" Workload="ci--4081--3--3--n--556bea0d1e-k8s-whisker--5c9dc4c697--5tjgw-eth0" May 17 00:29:31.758732 containerd[1620]: 2025-05-17 00:29:31.747 [INFO][5657] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:29:31.758732 containerd[1620]: 2025-05-17 00:29:31.747 [INFO][5657] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:29:31.758732 containerd[1620]: 2025-05-17 00:29:31.752 [WARNING][5657] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="28b1d2f0511a38a050bed534b4b457408e1e18f400f08c7a5e18726479603536" HandleID="k8s-pod-network.28b1d2f0511a38a050bed534b4b457408e1e18f400f08c7a5e18726479603536" Workload="ci--4081--3--3--n--556bea0d1e-k8s-whisker--5c9dc4c697--5tjgw-eth0" May 17 00:29:31.758732 containerd[1620]: 2025-05-17 00:29:31.752 [INFO][5657] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="28b1d2f0511a38a050bed534b4b457408e1e18f400f08c7a5e18726479603536" HandleID="k8s-pod-network.28b1d2f0511a38a050bed534b4b457408e1e18f400f08c7a5e18726479603536" Workload="ci--4081--3--3--n--556bea0d1e-k8s-whisker--5c9dc4c697--5tjgw-eth0" May 17 00:29:31.758732 containerd[1620]: 2025-05-17 00:29:31.755 [INFO][5657] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:29:31.758732 containerd[1620]: 2025-05-17 00:29:31.757 [INFO][5650] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="28b1d2f0511a38a050bed534b4b457408e1e18f400f08c7a5e18726479603536" May 17 00:29:31.758732 containerd[1620]: time="2025-05-17T00:29:31.758665605Z" level=info msg="TearDown network for sandbox \"28b1d2f0511a38a050bed534b4b457408e1e18f400f08c7a5e18726479603536\" successfully" May 17 00:29:31.762418 containerd[1620]: time="2025-05-17T00:29:31.762224799Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"28b1d2f0511a38a050bed534b4b457408e1e18f400f08c7a5e18726479603536\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:29:31.762418 containerd[1620]: time="2025-05-17T00:29:31.762274582Z" level=info msg="RemovePodSandbox \"28b1d2f0511a38a050bed534b4b457408e1e18f400f08c7a5e18726479603536\" returns successfully" May 17 00:29:31.763535 containerd[1620]: time="2025-05-17T00:29:31.763354501Z" level=info msg="StopPodSandbox for \"fe17dae483774da4ab6cf9076c438f0a7b25327c06e1fc021de4c38b2ea436fc\"" May 17 00:29:31.823773 containerd[1620]: 2025-05-17 00:29:31.797 [WARNING][5672] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fe17dae483774da4ab6cf9076c438f0a7b25327c06e1fc021de4c38b2ea436fc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--556bea0d1e-k8s-goldmane--8f77d7b6c--7b9h8-eth0", GenerateName:"goldmane-8f77d7b6c-", Namespace:"calico-system", SelfLink:"", UID:"fbe11bdf-a1f6-4b6d-b48f-fb37ea5d6cef", ResourceVersion:"970", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 28, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"8f77d7b6c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-556bea0d1e", ContainerID:"64af4c07793a9d80ce13c4ae3eb0255c34e1379d164d8fb60a0b476865d766f9", Pod:"goldmane-8f77d7b6c-7b9h8", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.15.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali722291eb0f7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:29:31.823773 containerd[1620]: 2025-05-17 00:29:31.798 [INFO][5672] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fe17dae483774da4ab6cf9076c438f0a7b25327c06e1fc021de4c38b2ea436fc" May 17 00:29:31.823773 containerd[1620]: 2025-05-17 00:29:31.798 [INFO][5672] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fe17dae483774da4ab6cf9076c438f0a7b25327c06e1fc021de4c38b2ea436fc" iface="eth0" netns="" May 17 00:29:31.823773 containerd[1620]: 2025-05-17 00:29:31.798 [INFO][5672] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fe17dae483774da4ab6cf9076c438f0a7b25327c06e1fc021de4c38b2ea436fc" May 17 00:29:31.823773 containerd[1620]: 2025-05-17 00:29:31.798 [INFO][5672] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fe17dae483774da4ab6cf9076c438f0a7b25327c06e1fc021de4c38b2ea436fc" May 17 00:29:31.823773 containerd[1620]: 2025-05-17 00:29:31.813 [INFO][5679] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fe17dae483774da4ab6cf9076c438f0a7b25327c06e1fc021de4c38b2ea436fc" HandleID="k8s-pod-network.fe17dae483774da4ab6cf9076c438f0a7b25327c06e1fc021de4c38b2ea436fc" Workload="ci--4081--3--3--n--556bea0d1e-k8s-goldmane--8f77d7b6c--7b9h8-eth0" May 17 00:29:31.823773 containerd[1620]: 2025-05-17 00:29:31.813 [INFO][5679] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:29:31.823773 containerd[1620]: 2025-05-17 00:29:31.813 [INFO][5679] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:29:31.823773 containerd[1620]: 2025-05-17 00:29:31.818 [WARNING][5679] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fe17dae483774da4ab6cf9076c438f0a7b25327c06e1fc021de4c38b2ea436fc" HandleID="k8s-pod-network.fe17dae483774da4ab6cf9076c438f0a7b25327c06e1fc021de4c38b2ea436fc" Workload="ci--4081--3--3--n--556bea0d1e-k8s-goldmane--8f77d7b6c--7b9h8-eth0" May 17 00:29:31.823773 containerd[1620]: 2025-05-17 00:29:31.818 [INFO][5679] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fe17dae483774da4ab6cf9076c438f0a7b25327c06e1fc021de4c38b2ea436fc" HandleID="k8s-pod-network.fe17dae483774da4ab6cf9076c438f0a7b25327c06e1fc021de4c38b2ea436fc" Workload="ci--4081--3--3--n--556bea0d1e-k8s-goldmane--8f77d7b6c--7b9h8-eth0" May 17 00:29:31.823773 containerd[1620]: 2025-05-17 00:29:31.819 [INFO][5679] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:29:31.823773 containerd[1620]: 2025-05-17 00:29:31.821 [INFO][5672] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fe17dae483774da4ab6cf9076c438f0a7b25327c06e1fc021de4c38b2ea436fc" May 17 00:29:31.824293 containerd[1620]: time="2025-05-17T00:29:31.823852442Z" level=info msg="TearDown network for sandbox \"fe17dae483774da4ab6cf9076c438f0a7b25327c06e1fc021de4c38b2ea436fc\" successfully" May 17 00:29:31.824293 containerd[1620]: time="2025-05-17T00:29:31.823876838Z" level=info msg="StopPodSandbox for \"fe17dae483774da4ab6cf9076c438f0a7b25327c06e1fc021de4c38b2ea436fc\" returns successfully" May 17 00:29:31.825205 containerd[1620]: time="2025-05-17T00:29:31.825139278Z" level=info msg="RemovePodSandbox for \"fe17dae483774da4ab6cf9076c438f0a7b25327c06e1fc021de4c38b2ea436fc\"" May 17 00:29:31.825205 containerd[1620]: time="2025-05-17T00:29:31.825164486Z" level=info msg="Forcibly stopping sandbox \"fe17dae483774da4ab6cf9076c438f0a7b25327c06e1fc021de4c38b2ea436fc\"" May 17 00:29:31.954819 containerd[1620]: 2025-05-17 00:29:31.893 [WARNING][5693] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fe17dae483774da4ab6cf9076c438f0a7b25327c06e1fc021de4c38b2ea436fc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--556bea0d1e-k8s-goldmane--8f77d7b6c--7b9h8-eth0", GenerateName:"goldmane-8f77d7b6c-", Namespace:"calico-system", SelfLink:"", UID:"fbe11bdf-a1f6-4b6d-b48f-fb37ea5d6cef", ResourceVersion:"970", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 28, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"8f77d7b6c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-556bea0d1e", ContainerID:"64af4c07793a9d80ce13c4ae3eb0255c34e1379d164d8fb60a0b476865d766f9", Pod:"goldmane-8f77d7b6c-7b9h8", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.15.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali722291eb0f7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:29:31.954819 containerd[1620]: 2025-05-17 00:29:31.895 [INFO][5693] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fe17dae483774da4ab6cf9076c438f0a7b25327c06e1fc021de4c38b2ea436fc" May 17 00:29:31.954819 containerd[1620]: 2025-05-17 00:29:31.895 [INFO][5693] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fe17dae483774da4ab6cf9076c438f0a7b25327c06e1fc021de4c38b2ea436fc" iface="eth0" netns="" May 17 00:29:31.954819 containerd[1620]: 2025-05-17 00:29:31.895 [INFO][5693] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fe17dae483774da4ab6cf9076c438f0a7b25327c06e1fc021de4c38b2ea436fc" May 17 00:29:31.954819 containerd[1620]: 2025-05-17 00:29:31.895 [INFO][5693] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fe17dae483774da4ab6cf9076c438f0a7b25327c06e1fc021de4c38b2ea436fc" May 17 00:29:31.954819 containerd[1620]: 2025-05-17 00:29:31.943 [INFO][5700] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fe17dae483774da4ab6cf9076c438f0a7b25327c06e1fc021de4c38b2ea436fc" HandleID="k8s-pod-network.fe17dae483774da4ab6cf9076c438f0a7b25327c06e1fc021de4c38b2ea436fc" Workload="ci--4081--3--3--n--556bea0d1e-k8s-goldmane--8f77d7b6c--7b9h8-eth0" May 17 00:29:31.954819 containerd[1620]: 2025-05-17 00:29:31.943 [INFO][5700] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:29:31.954819 containerd[1620]: 2025-05-17 00:29:31.943 [INFO][5700] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:29:31.954819 containerd[1620]: 2025-05-17 00:29:31.948 [WARNING][5700] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fe17dae483774da4ab6cf9076c438f0a7b25327c06e1fc021de4c38b2ea436fc" HandleID="k8s-pod-network.fe17dae483774da4ab6cf9076c438f0a7b25327c06e1fc021de4c38b2ea436fc" Workload="ci--4081--3--3--n--556bea0d1e-k8s-goldmane--8f77d7b6c--7b9h8-eth0" May 17 00:29:31.954819 containerd[1620]: 2025-05-17 00:29:31.949 [INFO][5700] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fe17dae483774da4ab6cf9076c438f0a7b25327c06e1fc021de4c38b2ea436fc" HandleID="k8s-pod-network.fe17dae483774da4ab6cf9076c438f0a7b25327c06e1fc021de4c38b2ea436fc" Workload="ci--4081--3--3--n--556bea0d1e-k8s-goldmane--8f77d7b6c--7b9h8-eth0" May 17 00:29:31.954819 containerd[1620]: 2025-05-17 00:29:31.950 [INFO][5700] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:29:31.954819 containerd[1620]: 2025-05-17 00:29:31.953 [INFO][5693] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fe17dae483774da4ab6cf9076c438f0a7b25327c06e1fc021de4c38b2ea436fc" May 17 00:29:31.956039 containerd[1620]: time="2025-05-17T00:29:31.955370893Z" level=info msg="TearDown network for sandbox \"fe17dae483774da4ab6cf9076c438f0a7b25327c06e1fc021de4c38b2ea436fc\" successfully" May 17 00:29:31.960688 containerd[1620]: time="2025-05-17T00:29:31.960479967Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fe17dae483774da4ab6cf9076c438f0a7b25327c06e1fc021de4c38b2ea436fc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:29:31.960688 containerd[1620]: time="2025-05-17T00:29:31.960525051Z" level=info msg="RemovePodSandbox \"fe17dae483774da4ab6cf9076c438f0a7b25327c06e1fc021de4c38b2ea436fc\" returns successfully" May 17 00:29:31.961301 containerd[1620]: time="2025-05-17T00:29:31.961112004Z" level=info msg="StopPodSandbox for \"469fb90e5cdf48ce4c934dd8995efe7535d98268c6582305f5b9cc1bbb6b542f\"" May 17 00:29:32.020424 containerd[1620]: 2025-05-17 00:29:31.992 [WARNING][5714] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="469fb90e5cdf48ce4c934dd8995efe7535d98268c6582305f5b9cc1bbb6b542f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--556bea0d1e-k8s-calico--apiserver--65b5bd8c4b--9dd9g-eth0", GenerateName:"calico-apiserver-65b5bd8c4b-", Namespace:"calico-apiserver", SelfLink:"", UID:"62172615-a35b-40f8-8043-de4d70d023f5", ResourceVersion:"1022", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 28, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"65b5bd8c4b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-556bea0d1e", ContainerID:"1e86c98fe4cd492a333dd0020c1156979727c796f710d412414d111e189b21a6", Pod:"calico-apiserver-65b5bd8c4b-9dd9g", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.15.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic7051b56b4b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:29:32.020424 containerd[1620]: 2025-05-17 00:29:31.992 [INFO][5714] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="469fb90e5cdf48ce4c934dd8995efe7535d98268c6582305f5b9cc1bbb6b542f" May 17 00:29:32.020424 containerd[1620]: 2025-05-17 00:29:31.992 [INFO][5714] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="469fb90e5cdf48ce4c934dd8995efe7535d98268c6582305f5b9cc1bbb6b542f" iface="eth0" netns="" May 17 00:29:32.020424 containerd[1620]: 2025-05-17 00:29:31.992 [INFO][5714] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="469fb90e5cdf48ce4c934dd8995efe7535d98268c6582305f5b9cc1bbb6b542f" May 17 00:29:32.020424 containerd[1620]: 2025-05-17 00:29:31.992 [INFO][5714] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="469fb90e5cdf48ce4c934dd8995efe7535d98268c6582305f5b9cc1bbb6b542f" May 17 00:29:32.020424 containerd[1620]: 2025-05-17 00:29:32.010 [INFO][5721] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="469fb90e5cdf48ce4c934dd8995efe7535d98268c6582305f5b9cc1bbb6b542f" HandleID="k8s-pod-network.469fb90e5cdf48ce4c934dd8995efe7535d98268c6582305f5b9cc1bbb6b542f" Workload="ci--4081--3--3--n--556bea0d1e-k8s-calico--apiserver--65b5bd8c4b--9dd9g-eth0" May 17 00:29:32.020424 containerd[1620]: 2025-05-17 00:29:32.010 [INFO][5721] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:29:32.020424 containerd[1620]: 2025-05-17 00:29:32.010 [INFO][5721] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:29:32.020424 containerd[1620]: 2025-05-17 00:29:32.015 [WARNING][5721] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="469fb90e5cdf48ce4c934dd8995efe7535d98268c6582305f5b9cc1bbb6b542f" HandleID="k8s-pod-network.469fb90e5cdf48ce4c934dd8995efe7535d98268c6582305f5b9cc1bbb6b542f" Workload="ci--4081--3--3--n--556bea0d1e-k8s-calico--apiserver--65b5bd8c4b--9dd9g-eth0" May 17 00:29:32.020424 containerd[1620]: 2025-05-17 00:29:32.015 [INFO][5721] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="469fb90e5cdf48ce4c934dd8995efe7535d98268c6582305f5b9cc1bbb6b542f" HandleID="k8s-pod-network.469fb90e5cdf48ce4c934dd8995efe7535d98268c6582305f5b9cc1bbb6b542f" Workload="ci--4081--3--3--n--556bea0d1e-k8s-calico--apiserver--65b5bd8c4b--9dd9g-eth0" May 17 00:29:32.020424 containerd[1620]: 2025-05-17 00:29:32.016 [INFO][5721] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:29:32.020424 containerd[1620]: 2025-05-17 00:29:32.018 [INFO][5714] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="469fb90e5cdf48ce4c934dd8995efe7535d98268c6582305f5b9cc1bbb6b542f" May 17 00:29:32.022761 containerd[1620]: time="2025-05-17T00:29:32.020465746Z" level=info msg="TearDown network for sandbox \"469fb90e5cdf48ce4c934dd8995efe7535d98268c6582305f5b9cc1bbb6b542f\" successfully" May 17 00:29:32.022761 containerd[1620]: time="2025-05-17T00:29:32.020488208Z" level=info msg="StopPodSandbox for \"469fb90e5cdf48ce4c934dd8995efe7535d98268c6582305f5b9cc1bbb6b542f\" returns successfully" May 17 00:29:32.022761 containerd[1620]: time="2025-05-17T00:29:32.020927583Z" level=info msg="RemovePodSandbox for \"469fb90e5cdf48ce4c934dd8995efe7535d98268c6582305f5b9cc1bbb6b542f\"" May 17 00:29:32.022761 containerd[1620]: time="2025-05-17T00:29:32.020948172Z" level=info msg="Forcibly stopping sandbox \"469fb90e5cdf48ce4c934dd8995efe7535d98268c6582305f5b9cc1bbb6b542f\"" May 17 00:29:32.083778 containerd[1620]: 2025-05-17 00:29:32.051 [WARNING][5735] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="469fb90e5cdf48ce4c934dd8995efe7535d98268c6582305f5b9cc1bbb6b542f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--556bea0d1e-k8s-calico--apiserver--65b5bd8c4b--9dd9g-eth0", GenerateName:"calico-apiserver-65b5bd8c4b-", Namespace:"calico-apiserver", SelfLink:"", UID:"62172615-a35b-40f8-8043-de4d70d023f5", ResourceVersion:"1022", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 28, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"65b5bd8c4b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-556bea0d1e", ContainerID:"1e86c98fe4cd492a333dd0020c1156979727c796f710d412414d111e189b21a6", Pod:"calico-apiserver-65b5bd8c4b-9dd9g", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.15.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic7051b56b4b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:29:32.083778 containerd[1620]: 2025-05-17 00:29:32.051 [INFO][5735] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="469fb90e5cdf48ce4c934dd8995efe7535d98268c6582305f5b9cc1bbb6b542f" May 17 00:29:32.083778 containerd[1620]: 2025-05-17 00:29:32.051 [INFO][5735] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="469fb90e5cdf48ce4c934dd8995efe7535d98268c6582305f5b9cc1bbb6b542f" iface="eth0" netns="" May 17 00:29:32.083778 containerd[1620]: 2025-05-17 00:29:32.051 [INFO][5735] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="469fb90e5cdf48ce4c934dd8995efe7535d98268c6582305f5b9cc1bbb6b542f" May 17 00:29:32.083778 containerd[1620]: 2025-05-17 00:29:32.051 [INFO][5735] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="469fb90e5cdf48ce4c934dd8995efe7535d98268c6582305f5b9cc1bbb6b542f" May 17 00:29:32.083778 containerd[1620]: 2025-05-17 00:29:32.069 [INFO][5742] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="469fb90e5cdf48ce4c934dd8995efe7535d98268c6582305f5b9cc1bbb6b542f" HandleID="k8s-pod-network.469fb90e5cdf48ce4c934dd8995efe7535d98268c6582305f5b9cc1bbb6b542f" Workload="ci--4081--3--3--n--556bea0d1e-k8s-calico--apiserver--65b5bd8c4b--9dd9g-eth0" May 17 00:29:32.083778 containerd[1620]: 2025-05-17 00:29:32.069 [INFO][5742] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:29:32.083778 containerd[1620]: 2025-05-17 00:29:32.069 [INFO][5742] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:29:32.083778 containerd[1620]: 2025-05-17 00:29:32.078 [WARNING][5742] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="469fb90e5cdf48ce4c934dd8995efe7535d98268c6582305f5b9cc1bbb6b542f" HandleID="k8s-pod-network.469fb90e5cdf48ce4c934dd8995efe7535d98268c6582305f5b9cc1bbb6b542f" Workload="ci--4081--3--3--n--556bea0d1e-k8s-calico--apiserver--65b5bd8c4b--9dd9g-eth0" May 17 00:29:32.083778 containerd[1620]: 2025-05-17 00:29:32.078 [INFO][5742] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="469fb90e5cdf48ce4c934dd8995efe7535d98268c6582305f5b9cc1bbb6b542f" HandleID="k8s-pod-network.469fb90e5cdf48ce4c934dd8995efe7535d98268c6582305f5b9cc1bbb6b542f" Workload="ci--4081--3--3--n--556bea0d1e-k8s-calico--apiserver--65b5bd8c4b--9dd9g-eth0" May 17 00:29:32.083778 containerd[1620]: 2025-05-17 00:29:32.080 [INFO][5742] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:29:32.083778 containerd[1620]: 2025-05-17 00:29:32.082 [INFO][5735] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="469fb90e5cdf48ce4c934dd8995efe7535d98268c6582305f5b9cc1bbb6b542f" May 17 00:29:32.085121 containerd[1620]: time="2025-05-17T00:29:32.083806183Z" level=info msg="TearDown network for sandbox \"469fb90e5cdf48ce4c934dd8995efe7535d98268c6582305f5b9cc1bbb6b542f\" successfully" May 17 00:29:32.086812 containerd[1620]: time="2025-05-17T00:29:32.086622743Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"469fb90e5cdf48ce4c934dd8995efe7535d98268c6582305f5b9cc1bbb6b542f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:29:32.086812 containerd[1620]: time="2025-05-17T00:29:32.086667417Z" level=info msg="RemovePodSandbox \"469fb90e5cdf48ce4c934dd8995efe7535d98268c6582305f5b9cc1bbb6b542f\" returns successfully" May 17 00:29:32.087253 containerd[1620]: time="2025-05-17T00:29:32.087161153Z" level=info msg="StopPodSandbox for \"78637948a433285e6fa8a58ed70f2d3570b7dffe22db8a4b90a18330de73c023\"" May 17 00:29:32.144762 containerd[1620]: 2025-05-17 00:29:32.115 [WARNING][5756] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="78637948a433285e6fa8a58ed70f2d3570b7dffe22db8a4b90a18330de73c023" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--556bea0d1e-k8s-coredns--7c65d6cfc9--2xxwm-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"cea38a36-7e2f-400d-bcde-dfc6cb61506d", ResourceVersion:"917", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 28, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-556bea0d1e", ContainerID:"246d2c806ff3ec5e1b65e8a39d58555db65965d6d1989ef05c90c4668d248707", Pod:"coredns-7c65d6cfc9-2xxwm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.15.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali38d828d66ed", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:29:32.144762 containerd[1620]: 2025-05-17 00:29:32.116 [INFO][5756] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="78637948a433285e6fa8a58ed70f2d3570b7dffe22db8a4b90a18330de73c023" May 17 00:29:32.144762 containerd[1620]: 2025-05-17 00:29:32.116 [INFO][5756] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="78637948a433285e6fa8a58ed70f2d3570b7dffe22db8a4b90a18330de73c023" iface="eth0" netns="" May 17 00:29:32.144762 containerd[1620]: 2025-05-17 00:29:32.116 [INFO][5756] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="78637948a433285e6fa8a58ed70f2d3570b7dffe22db8a4b90a18330de73c023" May 17 00:29:32.144762 containerd[1620]: 2025-05-17 00:29:32.116 [INFO][5756] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="78637948a433285e6fa8a58ed70f2d3570b7dffe22db8a4b90a18330de73c023" May 17 00:29:32.144762 containerd[1620]: 2025-05-17 00:29:32.135 [INFO][5764] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="78637948a433285e6fa8a58ed70f2d3570b7dffe22db8a4b90a18330de73c023" HandleID="k8s-pod-network.78637948a433285e6fa8a58ed70f2d3570b7dffe22db8a4b90a18330de73c023" Workload="ci--4081--3--3--n--556bea0d1e-k8s-coredns--7c65d6cfc9--2xxwm-eth0" May 17 00:29:32.144762 containerd[1620]: 2025-05-17 00:29:32.135 [INFO][5764] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:29:32.144762 containerd[1620]: 2025-05-17 00:29:32.135 [INFO][5764] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:29:32.144762 containerd[1620]: 2025-05-17 00:29:32.140 [WARNING][5764] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="78637948a433285e6fa8a58ed70f2d3570b7dffe22db8a4b90a18330de73c023" HandleID="k8s-pod-network.78637948a433285e6fa8a58ed70f2d3570b7dffe22db8a4b90a18330de73c023" Workload="ci--4081--3--3--n--556bea0d1e-k8s-coredns--7c65d6cfc9--2xxwm-eth0" May 17 00:29:32.144762 containerd[1620]: 2025-05-17 00:29:32.140 [INFO][5764] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="78637948a433285e6fa8a58ed70f2d3570b7dffe22db8a4b90a18330de73c023" HandleID="k8s-pod-network.78637948a433285e6fa8a58ed70f2d3570b7dffe22db8a4b90a18330de73c023" Workload="ci--4081--3--3--n--556bea0d1e-k8s-coredns--7c65d6cfc9--2xxwm-eth0" May 17 00:29:32.144762 containerd[1620]: 2025-05-17 00:29:32.141 [INFO][5764] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:29:32.144762 containerd[1620]: 2025-05-17 00:29:32.143 [INFO][5756] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="78637948a433285e6fa8a58ed70f2d3570b7dffe22db8a4b90a18330de73c023" May 17 00:29:32.145431 containerd[1620]: time="2025-05-17T00:29:32.144909711Z" level=info msg="TearDown network for sandbox \"78637948a433285e6fa8a58ed70f2d3570b7dffe22db8a4b90a18330de73c023\" successfully" May 17 00:29:32.145431 containerd[1620]: time="2025-05-17T00:29:32.144932494Z" level=info msg="StopPodSandbox for \"78637948a433285e6fa8a58ed70f2d3570b7dffe22db8a4b90a18330de73c023\" returns successfully" May 17 00:29:32.145822 containerd[1620]: time="2025-05-17T00:29:32.145577275Z" level=info msg="RemovePodSandbox for \"78637948a433285e6fa8a58ed70f2d3570b7dffe22db8a4b90a18330de73c023\"" May 17 00:29:32.145822 containerd[1620]: time="2025-05-17T00:29:32.145599376Z" level=info msg="Forcibly stopping sandbox \"78637948a433285e6fa8a58ed70f2d3570b7dffe22db8a4b90a18330de73c023\"" May 17 00:29:32.206854 containerd[1620]: 2025-05-17 00:29:32.175 [WARNING][5778] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="78637948a433285e6fa8a58ed70f2d3570b7dffe22db8a4b90a18330de73c023" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--556bea0d1e-k8s-coredns--7c65d6cfc9--2xxwm-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"cea38a36-7e2f-400d-bcde-dfc6cb61506d", ResourceVersion:"917", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 28, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-556bea0d1e", ContainerID:"246d2c806ff3ec5e1b65e8a39d58555db65965d6d1989ef05c90c4668d248707", Pod:"coredns-7c65d6cfc9-2xxwm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.15.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali38d828d66ed", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:29:32.206854 containerd[1620]: 2025-05-17 00:29:32.176 [INFO][5778] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="78637948a433285e6fa8a58ed70f2d3570b7dffe22db8a4b90a18330de73c023" May 17 00:29:32.206854 containerd[1620]: 2025-05-17 00:29:32.176 [INFO][5778] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="78637948a433285e6fa8a58ed70f2d3570b7dffe22db8a4b90a18330de73c023" iface="eth0" netns="" May 17 00:29:32.206854 containerd[1620]: 2025-05-17 00:29:32.176 [INFO][5778] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="78637948a433285e6fa8a58ed70f2d3570b7dffe22db8a4b90a18330de73c023" May 17 00:29:32.206854 containerd[1620]: 2025-05-17 00:29:32.176 [INFO][5778] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="78637948a433285e6fa8a58ed70f2d3570b7dffe22db8a4b90a18330de73c023" May 17 00:29:32.206854 containerd[1620]: 2025-05-17 00:29:32.195 [INFO][5786] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="78637948a433285e6fa8a58ed70f2d3570b7dffe22db8a4b90a18330de73c023" HandleID="k8s-pod-network.78637948a433285e6fa8a58ed70f2d3570b7dffe22db8a4b90a18330de73c023" Workload="ci--4081--3--3--n--556bea0d1e-k8s-coredns--7c65d6cfc9--2xxwm-eth0" May 17 00:29:32.206854 containerd[1620]: 2025-05-17 00:29:32.195 [INFO][5786] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:29:32.206854 containerd[1620]: 2025-05-17 00:29:32.195 [INFO][5786] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:29:32.206854 containerd[1620]: 2025-05-17 00:29:32.199 [WARNING][5786] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="78637948a433285e6fa8a58ed70f2d3570b7dffe22db8a4b90a18330de73c023" HandleID="k8s-pod-network.78637948a433285e6fa8a58ed70f2d3570b7dffe22db8a4b90a18330de73c023" Workload="ci--4081--3--3--n--556bea0d1e-k8s-coredns--7c65d6cfc9--2xxwm-eth0" May 17 00:29:32.206854 containerd[1620]: 2025-05-17 00:29:32.201 [INFO][5786] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="78637948a433285e6fa8a58ed70f2d3570b7dffe22db8a4b90a18330de73c023" HandleID="k8s-pod-network.78637948a433285e6fa8a58ed70f2d3570b7dffe22db8a4b90a18330de73c023" Workload="ci--4081--3--3--n--556bea0d1e-k8s-coredns--7c65d6cfc9--2xxwm-eth0" May 17 00:29:32.206854 containerd[1620]: 2025-05-17 00:29:32.202 [INFO][5786] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:29:32.206854 containerd[1620]: 2025-05-17 00:29:32.204 [INFO][5778] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="78637948a433285e6fa8a58ed70f2d3570b7dffe22db8a4b90a18330de73c023" May 17 00:29:32.206854 containerd[1620]: time="2025-05-17T00:29:32.205761726Z" level=info msg="TearDown network for sandbox \"78637948a433285e6fa8a58ed70f2d3570b7dffe22db8a4b90a18330de73c023\" successfully" May 17 00:29:32.209043 containerd[1620]: time="2025-05-17T00:29:32.208758894Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"78637948a433285e6fa8a58ed70f2d3570b7dffe22db8a4b90a18330de73c023\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:29:32.209043 containerd[1620]: time="2025-05-17T00:29:32.208806743Z" level=info msg="RemovePodSandbox \"78637948a433285e6fa8a58ed70f2d3570b7dffe22db8a4b90a18330de73c023\" returns successfully" May 17 00:29:34.384015 kubelet[2929]: E0517 00:29:34.382980 2929 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-5f94896dd9-9mn8s" podUID="edf2dfa8-9e01-4421-aebb-92beec01d94f" May 17 00:29:34.660626 kubelet[2929]: I0517 00:29:34.660049 2929 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 00:29:38.435746 kubelet[2929]: E0517 00:29:38.435618 2929 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-7b9h8" podUID="fbe11bdf-a1f6-4b6d-b48f-fb37ea5d6cef" May 17 00:29:39.236451 systemd-journald[1178]: Under memory pressure, flushing caches. May 17 00:29:39.233804 systemd-resolved[1509]: Under memory pressure, flushing caches. May 17 00:29:39.233815 systemd-resolved[1509]: Flushed all caches. May 17 00:29:41.285674 systemd-journald[1178]: Under memory pressure, flushing caches. May 17 00:29:41.285383 systemd-resolved[1509]: Under memory pressure, flushing caches. May 17 00:29:41.285394 systemd-resolved[1509]: Flushed all caches. 
May 17 00:29:46.378834 containerd[1620]: time="2025-05-17T00:29:46.377963813Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 17 00:29:46.677296 containerd[1620]: time="2025-05-17T00:29:46.677164007Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:29:46.678474 containerd[1620]: time="2025-05-17T00:29:46.678426627Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:29:46.684541 containerd[1620]: time="2025-05-17T00:29:46.684494059Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 17 00:29:46.685080 kubelet[2929]: E0517 00:29:46.685034 2929 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:29:46.687877 kubelet[2929]: E0517 00:29:46.687844 2929 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:29:46.730308 kubelet[2929]: E0517 00:29:46.730246 2929 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:c907cd8c04324328b30a0d9a2949a437,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-67frm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5f94896dd9-9mn8s_calico-system(edf2dfa8-9e01-4421-aebb-92beec01d94f): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:29:46.733222 containerd[1620]: time="2025-05-17T00:29:46.733191800Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 17 00:29:47.059617 containerd[1620]: time="2025-05-17T00:29:47.059564620Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:29:47.062524 containerd[1620]: time="2025-05-17T00:29:47.062489851Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:29:47.062603 containerd[1620]: time="2025-05-17T00:29:47.062567818Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 17 00:29:47.063288 kubelet[2929]: E0517 00:29:47.062713 2929 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected 
status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:29:47.063288 kubelet[2929]: E0517 00:29:47.062758 2929 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:29:47.063288 kubelet[2929]: E0517 00:29:47.062846 2929 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-67frm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5f94896dd9-9mn8s_calico-system(edf2dfa8-9e01-4421-aebb-92beec01d94f): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:29:47.065082 kubelet[2929]: E0517 00:29:47.065048 2929 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to 
fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-5f94896dd9-9mn8s" podUID="edf2dfa8-9e01-4421-aebb-92beec01d94f" May 17 00:29:50.589067 kubelet[2929]: I0517 00:29:50.588574 2929 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 00:29:52.362743 containerd[1620]: time="2025-05-17T00:29:52.362586075Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 17 00:29:52.698907 containerd[1620]: time="2025-05-17T00:29:52.698775372Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:29:52.699765 containerd[1620]: time="2025-05-17T00:29:52.699717551Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:29:52.699835 containerd[1620]: time="2025-05-17T00:29:52.699794315Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 17 00:29:52.699981 kubelet[2929]: E0517 00:29:52.699936 2929 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:29:52.700393 kubelet[2929]: E0517 00:29:52.699989 2929 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:29:52.700393 kubelet[2929]: E0517 00:29:52.700140 2929 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4r67z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-8f77d7b6c-7b9h8_calico-system(fbe11bdf-a1f6-4b6d-b48f-fb37ea5d6cef): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:29:52.701683 kubelet[2929]: E0517 00:29:52.701639 2929 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-8f77d7b6c-7b9h8" podUID="fbe11bdf-a1f6-4b6d-b48f-fb37ea5d6cef" May 17 00:29:59.385309 kubelet[2929]: E0517 00:29:59.385265 2929 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-5f94896dd9-9mn8s" podUID="edf2dfa8-9e01-4421-aebb-92beec01d94f" May 17 00:30:06.466278 systemd[1]: run-containerd-runc-k8s.io-b637eb454efe074115eb2909aed94dc9ef2050573134b633ec5acab58f62e73d-runc.5nIIdq.mount: Deactivated successfully. May 17 00:30:06.534801 systemd[1]: run-containerd-runc-k8s.io-b637eb454efe074115eb2909aed94dc9ef2050573134b633ec5acab58f62e73d-runc.Ow1fE5.mount: Deactivated successfully. May 17 00:30:07.364509 kubelet[2929]: E0517 00:30:07.363500 2929 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-7b9h8" podUID="fbe11bdf-a1f6-4b6d-b48f-fb37ea5d6cef" May 17 00:30:12.368180 kubelet[2929]: E0517 00:30:12.368099 2929 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-5f94896dd9-9mn8s" podUID="edf2dfa8-9e01-4421-aebb-92beec01d94f" May 17 00:30:21.368077 kubelet[2929]: E0517 00:30:21.365959 2929 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-7b9h8" podUID="fbe11bdf-a1f6-4b6d-b48f-fb37ea5d6cef" May 17 00:30:24.367651 kubelet[2929]: E0517 00:30:24.367606 2929 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-5f94896dd9-9mn8s" podUID="edf2dfa8-9e01-4421-aebb-92beec01d94f" May 17 00:30:33.369496 containerd[1620]: time="2025-05-17T00:30:33.362940074Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 17 00:30:33.682877 containerd[1620]: time="2025-05-17T00:30:33.682747043Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:30:33.683713 containerd[1620]: time="2025-05-17T00:30:33.683665826Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve 
reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:30:33.683779 containerd[1620]: time="2025-05-17T00:30:33.683748573Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 17 00:30:33.683918 kubelet[2929]: E0517 00:30:33.683884 2929 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:30:33.684282 kubelet[2929]: E0517 00:30:33.683932 2929 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:30:33.684282 kubelet[2929]: E0517 00:30:33.684071 2929 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4r67z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-8f77d7b6c-7b9h8_calico-system(fbe11bdf-a1f6-4b6d-b48f-fb37ea5d6cef): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:30:33.685638 kubelet[2929]: E0517 00:30:33.685592 2929 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-8f77d7b6c-7b9h8" podUID="fbe11bdf-a1f6-4b6d-b48f-fb37ea5d6cef" May 17 00:30:39.362014 containerd[1620]: time="2025-05-17T00:30:39.361981096Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 17 00:30:39.671972 containerd[1620]: time="2025-05-17T00:30:39.671837420Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:30:39.672957 containerd[1620]: time="2025-05-17T00:30:39.672881100Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:30:39.672957 containerd[1620]: time="2025-05-17T00:30:39.672918860Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 17 00:30:39.673335 kubelet[2929]: E0517 00:30:39.673073 2929 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:30:39.673335 kubelet[2929]: E0517 00:30:39.673120 2929 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:30:39.673335 kubelet[2929]: E0517 00:30:39.673218 2929 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:c907cd8c04324328b30a0d9a2949a437,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-67frm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5f94896dd9-9mn8s_calico-system(edf2dfa8-9e01-4421-aebb-92beec01d94f): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:30:39.675241 containerd[1620]: time="2025-05-17T00:30:39.675208651Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 17 00:30:39.972575 containerd[1620]: time="2025-05-17T00:30:39.972450220Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:30:39.973502 containerd[1620]: time="2025-05-17T00:30:39.973465555Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:30:39.974006 containerd[1620]: time="2025-05-17T00:30:39.973548072Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 17 00:30:39.974071 kubelet[2929]: E0517 00:30:39.973683 2929 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:30:39.974071 kubelet[2929]: E0517 00:30:39.973730 2929 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:30:39.974071 kubelet[2929]: E0517 00:30:39.973845 2929 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-67frm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
whisker-5f94896dd9-9mn8s_calico-system(edf2dfa8-9e01-4421-aebb-92beec01d94f): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:30:39.975226 kubelet[2929]: E0517 00:30:39.975173 2929 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-5f94896dd9-9mn8s" podUID="edf2dfa8-9e01-4421-aebb-92beec01d94f" May 17 00:30:48.376304 kubelet[2929]: E0517 00:30:48.376220 2929 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-7b9h8" podUID="fbe11bdf-a1f6-4b6d-b48f-fb37ea5d6cef" May 17 00:30:52.363171 kubelet[2929]: E0517 00:30:52.362899 2929 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-5f94896dd9-9mn8s" podUID="edf2dfa8-9e01-4421-aebb-92beec01d94f" May 17 00:31:01.362915 kubelet[2929]: E0517 00:31:01.362676 2929 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-7b9h8" podUID="fbe11bdf-a1f6-4b6d-b48f-fb37ea5d6cef" May 17 00:31:06.366011 kubelet[2929]: E0517 00:31:06.365945 2929 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-5f94896dd9-9mn8s" podUID="edf2dfa8-9e01-4421-aebb-92beec01d94f" May 17 00:31:06.444378 systemd[1]: run-containerd-runc-k8s.io-b637eb454efe074115eb2909aed94dc9ef2050573134b633ec5acab58f62e73d-runc.waVP1R.mount: Deactivated successfully. 
May 17 00:31:07.383233 systemd[1]: run-containerd-runc-k8s.io-3f1761130c0f3181aef79309d949df69457b0f59c43febe1a845132d6dd72f10-runc.VtYWBS.mount: Deactivated successfully. May 17 00:31:13.362134 kubelet[2929]: E0517 00:31:13.362082 2929 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-7b9h8" podUID="fbe11bdf-a1f6-4b6d-b48f-fb37ea5d6cef" May 17 00:31:20.363544 kubelet[2929]: E0517 00:31:20.363480 2929 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-5f94896dd9-9mn8s" podUID="edf2dfa8-9e01-4421-aebb-92beec01d94f" May 17 00:31:27.362056 kubelet[2929]: E0517 00:31:27.361977 2929 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-7b9h8" podUID="fbe11bdf-a1f6-4b6d-b48f-fb37ea5d6cef" May 17 00:31:35.362366 kubelet[2929]: E0517 00:31:35.362261 2929 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-5f94896dd9-9mn8s" podUID="edf2dfa8-9e01-4421-aebb-92beec01d94f" May 17 00:31:38.364239 kubelet[2929]: E0517 00:31:38.364186 2929 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-7b9h8" podUID="fbe11bdf-a1f6-4b6d-b48f-fb37ea5d6cef" May 17 00:31:48.362865 kubelet[2929]: E0517 00:31:48.362696 2929 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-5f94896dd9-9mn8s" podUID="edf2dfa8-9e01-4421-aebb-92beec01d94f" May 17 00:31:52.362865 kubelet[2929]: E0517 00:31:52.362797 2929 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-7b9h8" podUID="fbe11bdf-a1f6-4b6d-b48f-fb37ea5d6cef" May 17 00:31:59.362306 kubelet[2929]: E0517 00:31:59.362246 2929 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-5f94896dd9-9mn8s" podUID="edf2dfa8-9e01-4421-aebb-92beec01d94f" May 17 00:32:05.362686 containerd[1620]: time="2025-05-17T00:32:05.362624882Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 17 00:32:05.676624 containerd[1620]: time="2025-05-17T00:32:05.676420436Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:32:05.678497 containerd[1620]: time="2025-05-17T00:32:05.678359540Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:32:05.678497 containerd[1620]: time="2025-05-17T00:32:05.678394054Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 17 00:32:05.678705 kubelet[2929]: E0517 00:32:05.678645 2929 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:32:05.679233 kubelet[2929]: E0517 00:32:05.678712 2929 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:32:05.679233 kubelet[2929]: E0517 00:32:05.678886 2929 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4r67z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-8f77d7b6c-7b9h8_calico-system(fbe11bdf-a1f6-4b6d-b48f-fb37ea5d6cef): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:32:05.680460 kubelet[2929]: E0517 00:32:05.680402 2929 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-8f77d7b6c-7b9h8" podUID="fbe11bdf-a1f6-4b6d-b48f-fb37ea5d6cef" May 17 00:32:06.452563 systemd[1]: run-containerd-runc-k8s.io-b637eb454efe074115eb2909aed94dc9ef2050573134b633ec5acab58f62e73d-runc.DHnEpw.mount: Deactivated successfully. May 17 00:32:12.363230 containerd[1620]: time="2025-05-17T00:32:12.362747356Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 17 00:32:12.674493 containerd[1620]: time="2025-05-17T00:32:12.674199906Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:32:12.675781 containerd[1620]: time="2025-05-17T00:32:12.675725771Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:32:12.676162 containerd[1620]: time="2025-05-17T00:32:12.675767019Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 17 00:32:12.676243 kubelet[2929]: E0517 00:32:12.676084 2929 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:32:12.676243 kubelet[2929]: E0517 00:32:12.676156 2929 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:32:12.676974 kubelet[2929]: E0517 00:32:12.676298 2929 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:c907cd8c04324328b30a0d9a2949a437,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-67frm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5f94896dd9-9mn8s_calico-system(edf2dfa8-9e01-4421-aebb-92beec01d94f): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:32:12.678995 containerd[1620]: time="2025-05-17T00:32:12.678906454Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 17 00:32:12.983927 containerd[1620]: time="2025-05-17T00:32:12.983706520Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:32:12.985329 containerd[1620]: time="2025-05-17T00:32:12.985091891Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:32:12.985329 containerd[1620]: time="2025-05-17T00:32:12.985190286Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 17 00:32:12.985574 kubelet[2929]: E0517 00:32:12.985398 2929 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected 
status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:32:12.985574 kubelet[2929]: E0517 00:32:12.985468 2929 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:32:12.986187 kubelet[2929]: E0517 00:32:12.985698 2929 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-67frm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5f94896dd9-9mn8s_calico-system(edf2dfa8-9e01-4421-aebb-92beec01d94f): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:32:12.987427 kubelet[2929]: E0517 00:32:12.987365 2929 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to 
fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-5f94896dd9-9mn8s" podUID="edf2dfa8-9e01-4421-aebb-92beec01d94f" May 17 00:32:19.362582 kubelet[2929]: E0517 00:32:19.362408 2929 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-7b9h8" podUID="fbe11bdf-a1f6-4b6d-b48f-fb37ea5d6cef" May 17 00:32:24.363066 kubelet[2929]: E0517 00:32:24.363005 2929 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-5f94896dd9-9mn8s" podUID="edf2dfa8-9e01-4421-aebb-92beec01d94f" May 17 00:32:34.364068 kubelet[2929]: E0517 00:32:34.363592 2929 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-7b9h8" podUID="fbe11bdf-a1f6-4b6d-b48f-fb37ea5d6cef" May 17 00:32:39.363750 kubelet[2929]: E0517 00:32:39.363690 2929 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-5f94896dd9-9mn8s" podUID="edf2dfa8-9e01-4421-aebb-92beec01d94f" May 17 00:32:46.362608 kubelet[2929]: E0517 00:32:46.362486 2929 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-7b9h8" podUID="fbe11bdf-a1f6-4b6d-b48f-fb37ea5d6cef" May 17 00:32:54.380449 kubelet[2929]: E0517 00:32:54.380395 2929 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-5f94896dd9-9mn8s" podUID="edf2dfa8-9e01-4421-aebb-92beec01d94f" May 17 00:32:58.362300 kubelet[2929]: E0517 00:32:58.362258 2929 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image 
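Every ErrImagePull above bottoms out in the same step: containerd's anonymous token request to ghcr.io returns 403 Forbidden, so the references for the whisker, whisker-backend and goldmane images can never be resolved. Because the token URL is quoted verbatim in the error, the request can be replayed outside the kubelet; the following is a minimal sketch (Go standard library only, URL copied from the log above, everything else illustrative):

    package main

    import (
        "fmt"
        "io"
        "net/http"
    )

    func main() {
        // Token endpoint quoted verbatim in the containerd/kubelet errors above.
        url := "https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io"
        resp, err := http.Get(url)
        if err != nil {
            fmt.Println("request failed:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        // A 403 here reproduces the failure containerd reports and points at the
        // registry side (or an intermediary), not at node-local pull configuration.
        fmt.Println("status:", resp.Status)
        fmt.Println(string(body))
    }

If the same request succeeds from another network, the 403 is more plausibly rate limiting or an address-level block than a missing pull secret.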
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-7b9h8" podUID="fbe11bdf-a1f6-4b6d-b48f-fb37ea5d6cef" May 17 00:33:08.362592 kubelet[2929]: E0517 00:33:08.362536 2929 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-5f94896dd9-9mn8s" podUID="edf2dfa8-9e01-4421-aebb-92beec01d94f" May 17 00:33:11.362234 kubelet[2929]: E0517 00:33:11.362180 2929 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-7b9h8" podUID="fbe11bdf-a1f6-4b6d-b48f-fb37ea5d6cef" May 17 00:33:20.789348 systemd[1]: Started sshd@7-135.181.90.190:22-139.178.89.65:53992.service - OpenSSH per-connection server daemon (139.178.89.65:53992). May 17 00:33:21.363610 kubelet[2929]: E0517 00:33:21.363541 2929 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-5f94896dd9-9mn8s" podUID="edf2dfa8-9e01-4421-aebb-92beec01d94f" May 17 00:33:21.780490 sshd[6303]: Accepted publickey for core from 139.178.89.65 port 53992 ssh2: RSA SHA256:kFcxshSye1IppED0G84lz4/lbUrZJ1wq7wf6p1uuNAE May 17 00:33:21.784221 sshd[6303]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:33:21.805080 systemd-logind[1593]: New session 8 of user core. May 17 00:33:21.808543 systemd[1]: Started session-8.scope - Session 8 of User core. May 17 00:33:22.960223 sshd[6303]: pam_unix(sshd:session): session closed for user core May 17 00:33:22.966865 systemd[1]: sshd@7-135.181.90.190:22-139.178.89.65:53992.service: Deactivated successfully. May 17 00:33:22.972172 systemd-logind[1593]: Session 8 logged out. Waiting for processes to exit. May 17 00:33:22.972543 systemd[1]: session-8.scope: Deactivated successfully. May 17 00:33:22.975487 systemd-logind[1593]: Removed session 8. May 17 00:33:23.361875 kubelet[2929]: E0517 00:33:23.361763 2929 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-7b9h8" podUID="fbe11bdf-a1f6-4b6d-b48f-fb37ea5d6cef" May 17 00:33:28.130233 systemd[1]: Started sshd@8-135.181.90.190:22-139.178.89.65:38338.service - OpenSSH per-connection server daemon (139.178.89.65:38338). May 17 00:33:29.132191 sshd[6319]: Accepted publickey for core from 139.178.89.65 port 38338 ssh2: RSA SHA256:kFcxshSye1IppED0G84lz4/lbUrZJ1wq7wf6p1uuNAE May 17 00:33:29.136524 sshd[6319]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:33:29.143948 systemd-logind[1593]: New session 9 of user core. May 17 00:33:29.149123 systemd[1]: Started session-9.scope - Session 9 of User core. 
May 17 00:33:30.214129 sshd[6319]: pam_unix(sshd:session): session closed for user core May 17 00:33:30.216699 systemd[1]: sshd@8-135.181.90.190:22-139.178.89.65:38338.service: Deactivated successfully. May 17 00:33:30.220451 systemd-logind[1593]: Session 9 logged out. Waiting for processes to exit. May 17 00:33:30.221885 systemd[1]: session-9.scope: Deactivated successfully. May 17 00:33:30.223203 systemd-logind[1593]: Removed session 9. May 17 00:33:35.379220 systemd[1]: Started sshd@9-135.181.90.190:22-139.178.89.65:38350.service - OpenSSH per-connection server daemon (139.178.89.65:38350). May 17 00:33:35.380697 kubelet[2929]: E0517 00:33:35.379474 2929 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-7b9h8" podUID="fbe11bdf-a1f6-4b6d-b48f-fb37ea5d6cef" May 17 00:33:35.389118 kubelet[2929]: E0517 00:33:35.389093 2929 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-5f94896dd9-9mn8s" podUID="edf2dfa8-9e01-4421-aebb-92beec01d94f" May 17 00:33:36.342389 sshd[6336]: Accepted publickey for core from 139.178.89.65 port 38350 ssh2: RSA SHA256:kFcxshSye1IppED0G84lz4/lbUrZJ1wq7wf6p1uuNAE May 17 00:33:36.343693 sshd[6336]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:33:36.348109 systemd-logind[1593]: New session 10 of user core. May 17 00:33:36.353222 systemd[1]: Started session-10.scope - Session 10 of User core. May 17 00:33:36.444147 systemd[1]: run-containerd-runc-k8s.io-b637eb454efe074115eb2909aed94dc9ef2050573134b633ec5acab58f62e73d-runc.6KiaDw.mount: Deactivated successfully. May 17 00:33:37.089724 sshd[6336]: pam_unix(sshd:session): session closed for user core May 17 00:33:37.092310 systemd[1]: sshd@9-135.181.90.190:22-139.178.89.65:38350.service: Deactivated successfully. May 17 00:33:37.095456 systemd-logind[1593]: Session 10 logged out. Waiting for processes to exit. May 17 00:33:37.095932 systemd[1]: session-10.scope: Deactivated successfully. May 17 00:33:37.097411 systemd-logind[1593]: Removed session 10. May 17 00:33:37.252285 systemd[1]: Started sshd@10-135.181.90.190:22-139.178.89.65:36446.service - OpenSSH per-connection server daemon (139.178.89.65:36446). May 17 00:33:38.222983 sshd[6370]: Accepted publickey for core from 139.178.89.65 port 36446 ssh2: RSA SHA256:kFcxshSye1IppED0G84lz4/lbUrZJ1wq7wf6p1uuNAE May 17 00:33:38.224289 sshd[6370]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:33:38.228542 systemd-logind[1593]: New session 11 of user core. May 17 00:33:38.237247 systemd[1]: Started session-11.scope - Session 11 of User core. May 17 00:33:39.053682 sshd[6370]: pam_unix(sshd:session): session closed for user core May 17 00:33:39.056242 systemd[1]: sshd@10-135.181.90.190:22-139.178.89.65:36446.service: Deactivated successfully. May 17 00:33:39.059412 systemd-logind[1593]: Session 11 logged out. Waiting for processes to exit. May 17 00:33:39.059773 systemd[1]: session-11.scope: Deactivated successfully. May 17 00:33:39.061429 systemd-logind[1593]: Removed session 11. 
May 17 00:33:39.214390 systemd[1]: Started sshd@11-135.181.90.190:22-139.178.89.65:36462.service - OpenSSH per-connection server daemon (139.178.89.65:36462). May 17 00:33:40.180519 sshd[6407]: Accepted publickey for core from 139.178.89.65 port 36462 ssh2: RSA SHA256:kFcxshSye1IppED0G84lz4/lbUrZJ1wq7wf6p1uuNAE May 17 00:33:40.181734 sshd[6407]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:33:40.186074 systemd-logind[1593]: New session 12 of user core. May 17 00:33:40.193251 systemd[1]: Started session-12.scope - Session 12 of User core. May 17 00:33:40.907306 sshd[6407]: pam_unix(sshd:session): session closed for user core May 17 00:33:40.909965 systemd[1]: sshd@11-135.181.90.190:22-139.178.89.65:36462.service: Deactivated successfully. May 17 00:33:40.913980 systemd-logind[1593]: Session 12 logged out. Waiting for processes to exit. May 17 00:33:40.914494 systemd[1]: session-12.scope: Deactivated successfully. May 17 00:33:40.916793 systemd-logind[1593]: Removed session 12. May 17 00:33:46.071395 systemd[1]: Started sshd@12-135.181.90.190:22-139.178.89.65:36474.service - OpenSSH per-connection server daemon (139.178.89.65:36474). May 17 00:33:47.048376 sshd[6432]: Accepted publickey for core from 139.178.89.65 port 36474 ssh2: RSA SHA256:kFcxshSye1IppED0G84lz4/lbUrZJ1wq7wf6p1uuNAE May 17 00:33:47.049688 sshd[6432]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:33:47.054061 systemd-logind[1593]: New session 13 of user core. May 17 00:33:47.057257 systemd[1]: Started session-13.scope - Session 13 of User core. May 17 00:33:47.803504 sshd[6432]: pam_unix(sshd:session): session closed for user core May 17 00:33:47.807141 systemd[1]: sshd@12-135.181.90.190:22-139.178.89.65:36474.service: Deactivated successfully. May 17 00:33:47.811341 systemd-logind[1593]: Session 13 logged out. Waiting for processes to exit. May 17 00:33:47.811822 systemd[1]: session-13.scope: Deactivated successfully. May 17 00:33:47.814010 systemd-logind[1593]: Removed session 13. May 17 00:33:48.362142 kubelet[2929]: E0517 00:33:48.362084 2929 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-7b9h8" podUID="fbe11bdf-a1f6-4b6d-b48f-fb37ea5d6cef" May 17 00:33:49.363215 kubelet[2929]: E0517 00:33:49.363171 2929 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-5f94896dd9-9mn8s" podUID="edf2dfa8-9e01-4421-aebb-92beec01d94f" May 17 00:33:52.967240 systemd[1]: Started sshd@13-135.181.90.190:22-139.178.89.65:46554.service - OpenSSH per-connection server daemon (139.178.89.65:46554). May 17 00:33:53.931001 sshd[6460]: Accepted publickey for core from 139.178.89.65 port 46554 ssh2: RSA SHA256:kFcxshSye1IppED0G84lz4/lbUrZJ1wq7wf6p1uuNAE May 17 00:33:53.932494 sshd[6460]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:33:53.937119 systemd-logind[1593]: New session 14 of user core. May 17 00:33:53.941304 systemd[1]: Started session-14.scope - Session 14 of User core. 
May 17 00:33:54.685169 sshd[6460]: pam_unix(sshd:session): session closed for user core May 17 00:33:54.688442 systemd[1]: sshd@13-135.181.90.190:22-139.178.89.65:46554.service: Deactivated successfully. May 17 00:33:54.693220 systemd-logind[1593]: Session 14 logged out. Waiting for processes to exit. May 17 00:33:54.695926 systemd[1]: session-14.scope: Deactivated successfully. May 17 00:33:54.697634 systemd-logind[1593]: Removed session 14. May 17 00:33:59.849677 systemd[1]: Started sshd@14-135.181.90.190:22-139.178.89.65:58454.service - OpenSSH per-connection server daemon (139.178.89.65:58454). May 17 00:34:00.362989 kubelet[2929]: E0517 00:34:00.362954 2929 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-5f94896dd9-9mn8s" podUID="edf2dfa8-9e01-4421-aebb-92beec01d94f" May 17 00:34:00.823525 sshd[6474]: Accepted publickey for core from 139.178.89.65 port 58454 ssh2: RSA SHA256:kFcxshSye1IppED0G84lz4/lbUrZJ1wq7wf6p1uuNAE May 17 00:34:00.824863 sshd[6474]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:34:00.829324 systemd-logind[1593]: New session 15 of user core. May 17 00:34:00.835273 systemd[1]: Started session-15.scope - Session 15 of User core. May 17 00:34:01.549187 sshd[6474]: pam_unix(sshd:session): session closed for user core May 17 00:34:01.552492 systemd-logind[1593]: Session 15 logged out. Waiting for processes to exit. May 17 00:34:01.553349 systemd[1]: sshd@14-135.181.90.190:22-139.178.89.65:58454.service: Deactivated successfully. May 17 00:34:01.556421 systemd[1]: session-15.scope: Deactivated successfully. May 17 00:34:01.557434 systemd-logind[1593]: Removed session 15. May 17 00:34:01.711455 systemd[1]: Started sshd@15-135.181.90.190:22-139.178.89.65:58464.service - OpenSSH per-connection server daemon (139.178.89.65:58464). May 17 00:34:02.363221 kubelet[2929]: E0517 00:34:02.363128 2929 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-7b9h8" podUID="fbe11bdf-a1f6-4b6d-b48f-fb37ea5d6cef" May 17 00:34:02.678672 sshd[6488]: Accepted publickey for core from 139.178.89.65 port 58464 ssh2: RSA SHA256:kFcxshSye1IppED0G84lz4/lbUrZJ1wq7wf6p1uuNAE May 17 00:34:02.679917 sshd[6488]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:34:02.684522 systemd-logind[1593]: New session 16 of user core. May 17 00:34:02.688277 systemd[1]: Started session-16.scope - Session 16 of User core. May 17 00:34:03.624739 sshd[6488]: pam_unix(sshd:session): session closed for user core May 17 00:34:03.630564 systemd[1]: sshd@15-135.181.90.190:22-139.178.89.65:58464.service: Deactivated successfully. May 17 00:34:03.636183 systemd[1]: session-16.scope: Deactivated successfully. May 17 00:34:03.637054 systemd-logind[1593]: Session 16 logged out. Waiting for processes to exit. May 17 00:34:03.638812 systemd-logind[1593]: Removed session 16. May 17 00:34:03.788786 systemd[1]: Started sshd@16-135.181.90.190:22-139.178.89.65:58466.service - OpenSSH per-connection server daemon (139.178.89.65:58466). 
May 17 00:34:04.782103 sshd[6500]: Accepted publickey for core from 139.178.89.65 port 58466 ssh2: RSA SHA256:kFcxshSye1IppED0G84lz4/lbUrZJ1wq7wf6p1uuNAE May 17 00:34:04.784452 sshd[6500]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:34:04.791200 systemd-logind[1593]: New session 17 of user core. May 17 00:34:04.797761 systemd[1]: Started session-17.scope - Session 17 of User core. May 17 00:34:07.398145 systemd-journald[1178]: Under memory pressure, flushing caches. May 17 00:34:07.391107 systemd-resolved[1509]: Under memory pressure, flushing caches. May 17 00:34:07.391131 systemd-resolved[1509]: Flushed all caches. May 17 00:34:07.551567 sshd[6500]: pam_unix(sshd:session): session closed for user core May 17 00:34:07.559256 systemd[1]: sshd@16-135.181.90.190:22-139.178.89.65:58466.service: Deactivated successfully. May 17 00:34:07.566379 systemd[1]: session-17.scope: Deactivated successfully. May 17 00:34:07.566923 systemd-logind[1593]: Session 17 logged out. Waiting for processes to exit. May 17 00:34:07.570973 systemd-logind[1593]: Removed session 17. May 17 00:34:07.710270 systemd[1]: Started sshd@17-135.181.90.190:22-139.178.89.65:36692.service - OpenSSH per-connection server daemon (139.178.89.65:36692). May 17 00:34:08.689415 sshd[6593]: Accepted publickey for core from 139.178.89.65 port 36692 ssh2: RSA SHA256:kFcxshSye1IppED0G84lz4/lbUrZJ1wq7wf6p1uuNAE May 17 00:34:08.691868 sshd[6593]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:34:08.696171 systemd-logind[1593]: New session 18 of user core. May 17 00:34:08.702232 systemd[1]: Started session-18.scope - Session 18 of User core. May 17 00:34:10.024586 sshd[6593]: pam_unix(sshd:session): session closed for user core May 17 00:34:10.027406 systemd[1]: sshd@17-135.181.90.190:22-139.178.89.65:36692.service: Deactivated successfully. May 17 00:34:10.032655 systemd[1]: session-18.scope: Deactivated successfully. May 17 00:34:10.033460 systemd-logind[1593]: Session 18 logged out. Waiting for processes to exit. May 17 00:34:10.037430 systemd-logind[1593]: Removed session 18. May 17 00:34:10.189423 systemd[1]: Started sshd@18-135.181.90.190:22-139.178.89.65:36696.service - OpenSSH per-connection server daemon (139.178.89.65:36696). May 17 00:34:11.185601 sshd[6607]: Accepted publickey for core from 139.178.89.65 port 36696 ssh2: RSA SHA256:kFcxshSye1IppED0G84lz4/lbUrZJ1wq7wf6p1uuNAE May 17 00:34:11.187919 sshd[6607]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:34:11.193873 systemd-logind[1593]: New session 19 of user core. May 17 00:34:11.196420 systemd[1]: Started session-19.scope - Session 19 of User core. May 17 00:34:11.991386 sshd[6607]: pam_unix(sshd:session): session closed for user core May 17 00:34:11.994759 systemd[1]: sshd@18-135.181.90.190:22-139.178.89.65:36696.service: Deactivated successfully. May 17 00:34:11.994978 systemd-logind[1593]: Session 19 logged out. Waiting for processes to exit. May 17 00:34:12.001599 systemd[1]: session-19.scope: Deactivated successfully. May 17 00:34:12.004366 systemd-logind[1593]: Removed session 19. 
May 17 00:34:15.417510 kubelet[2929]: E0517 00:34:15.417441 2929 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-5f94896dd9-9mn8s" podUID="edf2dfa8-9e01-4421-aebb-92beec01d94f" May 17 00:34:16.364916 kubelet[2929]: E0517 00:34:16.364266 2929 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-7b9h8" podUID="fbe11bdf-a1f6-4b6d-b48f-fb37ea5d6cef" May 17 00:34:17.154251 systemd[1]: Started sshd@19-135.181.90.190:22-139.178.89.65:47906.service - OpenSSH per-connection server daemon (139.178.89.65:47906). May 17 00:34:18.134906 sshd[6625]: Accepted publickey for core from 139.178.89.65 port 47906 ssh2: RSA SHA256:kFcxshSye1IppED0G84lz4/lbUrZJ1wq7wf6p1uuNAE May 17 00:34:18.137206 sshd[6625]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:34:18.141585 systemd-logind[1593]: New session 20 of user core. May 17 00:34:18.145349 systemd[1]: Started session-20.scope - Session 20 of User core. May 17 00:34:18.875214 sshd[6625]: pam_unix(sshd:session): session closed for user core May 17 00:34:18.877591 systemd[1]: sshd@19-135.181.90.190:22-139.178.89.65:47906.service: Deactivated successfully. May 17 00:34:18.880482 systemd-logind[1593]: Session 20 logged out. Waiting for processes to exit. May 17 00:34:18.881485 systemd[1]: session-20.scope: Deactivated successfully. May 17 00:34:18.882922 systemd-logind[1593]: Removed session 20. May 17 00:34:24.042312 systemd[1]: Started sshd@20-135.181.90.190:22-139.178.89.65:47910.service - OpenSSH per-connection server daemon (139.178.89.65:47910). May 17 00:34:25.017653 sshd[6645]: Accepted publickey for core from 139.178.89.65 port 47910 ssh2: RSA SHA256:kFcxshSye1IppED0G84lz4/lbUrZJ1wq7wf6p1uuNAE May 17 00:34:25.023356 sshd[6645]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:34:25.034179 systemd-logind[1593]: New session 21 of user core. May 17 00:34:25.038631 systemd[1]: Started session-21.scope - Session 21 of User core. May 17 00:34:25.818702 sshd[6645]: pam_unix(sshd:session): session closed for user core May 17 00:34:25.822778 systemd[1]: sshd@20-135.181.90.190:22-139.178.89.65:47910.service: Deactivated successfully. May 17 00:34:25.823443 systemd-logind[1593]: Session 21 logged out. Waiting for processes to exit. May 17 00:34:25.831011 systemd[1]: session-21.scope: Deactivated successfully. May 17 00:34:25.833199 systemd-logind[1593]: Removed session 21. 
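After the initial pull failures, the kubelet keeps reporting "Back-off pulling image" for goldmane-8f77d7b6c-7b9h8 and whisker-5f94896dd9-9mn8s, and the gaps between actual pull attempts grow until they hit a ceiling. The schedule has the shape of a capped exponential back-off; the sketch below assumes the commonly cited kubelet defaults (10s initial delay, doubling per failure, 300s cap), which this log does not state:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Assumed defaults: 10s initial delay, doubled per failed pull, capped at 300s.
        delay := 10 * time.Second
        maxDelay := 300 * time.Second
        for attempt := 1; attempt <= 8; attempt++ {
            fmt.Printf("pull attempt %d scheduled after %v\n", attempt, delay)
            delay *= 2
            if delay > maxDelay {
                delay = maxDelay
            }
        }
    }

The "Error syncing pod, skipping" lines themselves recur on each pod sync while the back-off is pending, which is why they appear far more often than the pull attempts they describe.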
May 17 00:34:30.364882 kubelet[2929]: E0517 00:34:30.364821 2929 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-5f94896dd9-9mn8s" podUID="edf2dfa8-9e01-4421-aebb-92beec01d94f" May 17 00:34:31.362599 kubelet[2929]: E0517 00:34:31.362530 2929 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-7b9h8" podUID="fbe11bdf-a1f6-4b6d-b48f-fb37ea5d6cef" May 17 00:34:40.565519 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-15f521a7362867e160d8ff0e849f40e7dfbe980ac76ead613ed8ea6e39a97f31-rootfs.mount: Deactivated successfully. May 17 00:34:40.634199 containerd[1620]: time="2025-05-17T00:34:40.600004855Z" level=info msg="shim disconnected" id=15f521a7362867e160d8ff0e849f40e7dfbe980ac76ead613ed8ea6e39a97f31 namespace=k8s.io May 17 00:34:40.634199 containerd[1620]: time="2025-05-17T00:34:40.634192712Z" level=warning msg="cleaning up after shim disconnected" id=15f521a7362867e160d8ff0e849f40e7dfbe980ac76ead613ed8ea6e39a97f31 namespace=k8s.io May 17 00:34:40.634199 containerd[1620]: time="2025-05-17T00:34:40.634203933Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 17 00:34:40.741221 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-240b133e79ca4f782053f034b72be079bfc85cc3f595a08a37d0af20fd20d72a-rootfs.mount: Deactivated successfully. May 17 00:34:40.745520 containerd[1620]: time="2025-05-17T00:34:40.743507350Z" level=info msg="shim disconnected" id=240b133e79ca4f782053f034b72be079bfc85cc3f595a08a37d0af20fd20d72a namespace=k8s.io May 17 00:34:40.745520 containerd[1620]: time="2025-05-17T00:34:40.743566431Z" level=warning msg="cleaning up after shim disconnected" id=240b133e79ca4f782053f034b72be079bfc85cc3f595a08a37d0af20fd20d72a namespace=k8s.io May 17 00:34:40.745520 containerd[1620]: time="2025-05-17T00:34:40.743584756Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 17 00:34:40.789164 kubelet[2929]: I0517 00:34:40.789137 2929 scope.go:117] "RemoveContainer" containerID="15f521a7362867e160d8ff0e849f40e7dfbe980ac76ead613ed8ea6e39a97f31" May 17 00:34:40.826240 kubelet[2929]: E0517 00:34:40.823972 2929 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:34708->10.0.0.2:2379: read: connection timed out" May 17 00:34:40.834511 containerd[1620]: time="2025-05-17T00:34:40.834010667Z" level=info msg="shim disconnected" id=7988a39059897c1d9b0ae14562848c2f6a7dde6f96d2fc3e0c78731cf122275a namespace=k8s.io May 17 00:34:40.834511 containerd[1620]: time="2025-05-17T00:34:40.834080228Z" level=warning msg="cleaning up after shim disconnected" id=7988a39059897c1d9b0ae14562848c2f6a7dde6f96d2fc3e0c78731cf122275a namespace=k8s.io May 17 00:34:40.834511 containerd[1620]: time="2025-05-17T00:34:40.834088133Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 17 00:34:40.836819 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7988a39059897c1d9b0ae14562848c2f6a7dde6f96d2fc3e0c78731cf122275a-rootfs.mount: Deactivated successfully. 
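The lease update failure above apparently originates as a read timeout on the connection from 10.0.0.3 to the etcd endpoint 10.0.0.2:2379 (the apiserver's etcd client), surfaced to the kubelet as an Unavailable rpc error, and it lands in the same window as the control-plane container shims disconnecting. A first, very coarse check is whether that port is reachable from the node at all; the sketch below (standard library only, address copied from the log) only proves a TCP connection can be opened and says nothing about etcd health:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Endpoint taken from the lease error above: 10.0.0.2:2379.
        conn, err := net.DialTimeout("tcp", "10.0.0.2:2379", 5*time.Second)
        if err != nil {
            fmt.Println("dial failed:", err)
            return
        }
        defer conn.Close()
        fmt.Println("TCP connection to the etcd endpoint opened")
    }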
May 17 00:34:40.853540 containerd[1620]: time="2025-05-17T00:34:40.853181113Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:34:40Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io May 17 00:34:40.854791 containerd[1620]: time="2025-05-17T00:34:40.853761965Z" level=info msg="CreateContainer within sandbox \"107dee571c785fc1425fe945b6d67f4c0c90325285886ef7bd47e1fd96dcfa13\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" May 17 00:34:40.963799 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3740515010.mount: Deactivated successfully. May 17 00:34:40.985028 containerd[1620]: time="2025-05-17T00:34:40.984985976Z" level=info msg="CreateContainer within sandbox \"107dee571c785fc1425fe945b6d67f4c0c90325285886ef7bd47e1fd96dcfa13\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"9f0f9a256f496808656cdb876034663c9e5815a76cbd5f4c6f6a3c79a03ed797\"" May 17 00:34:40.989517 containerd[1620]: time="2025-05-17T00:34:40.989484960Z" level=info msg="StartContainer for \"9f0f9a256f496808656cdb876034663c9e5815a76cbd5f4c6f6a3c79a03ed797\"" May 17 00:34:41.069845 containerd[1620]: time="2025-05-17T00:34:41.068757113Z" level=info msg="StartContainer for \"9f0f9a256f496808656cdb876034663c9e5815a76cbd5f4c6f6a3c79a03ed797\" returns successfully" May 17 00:34:41.568399 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2541160807.mount: Deactivated successfully. May 17 00:34:41.779296 kubelet[2929]: I0517 00:34:41.779266 2929 scope.go:117] "RemoveContainer" containerID="7988a39059897c1d9b0ae14562848c2f6a7dde6f96d2fc3e0c78731cf122275a" May 17 00:34:41.782032 kubelet[2929]: I0517 00:34:41.781431 2929 scope.go:117] "RemoveContainer" containerID="240b133e79ca4f782053f034b72be079bfc85cc3f595a08a37d0af20fd20d72a" May 17 00:34:41.788037 containerd[1620]: time="2025-05-17T00:34:41.787409098Z" level=info msg="CreateContainer within sandbox \"712b7800a5cb02bd1be3c75fa1291d840fe1488493de5be70579a21ffd39a57f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" May 17 00:34:41.828399 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4030811252.mount: Deactivated successfully. 
May 17 00:34:41.845468 containerd[1620]: time="2025-05-17T00:34:41.845437559Z" level=info msg="CreateContainer within sandbox \"712b7800a5cb02bd1be3c75fa1291d840fe1488493de5be70579a21ffd39a57f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"7cfb5b171eb84d8e77847e884ed092895a7da6e4eada91e52c463a4c2c2ae897\"" May 17 00:34:41.846169 containerd[1620]: time="2025-05-17T00:34:41.846152161Z" level=info msg="StartContainer for \"7cfb5b171eb84d8e77847e884ed092895a7da6e4eada91e52c463a4c2c2ae897\"" May 17 00:34:41.877564 containerd[1620]: time="2025-05-17T00:34:41.877520871Z" level=info msg="CreateContainer within sandbox \"80464ddd3ee7dc3b9cde63a19b216ab7280ea5d7676dd9475dcc2068abc59f0a\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" May 17 00:34:41.893162 containerd[1620]: time="2025-05-17T00:34:41.892797620Z" level=info msg="CreateContainer within sandbox \"80464ddd3ee7dc3b9cde63a19b216ab7280ea5d7676dd9475dcc2068abc59f0a\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"cb0c6d1d8902f0539247a4b41b2195f23f730f77d22164cb0389b026fac1f4a7\"" May 17 00:34:41.893457 containerd[1620]: time="2025-05-17T00:34:41.893441380Z" level=info msg="StartContainer for \"cb0c6d1d8902f0539247a4b41b2195f23f730f77d22164cb0389b026fac1f4a7\"" May 17 00:34:41.947208 containerd[1620]: time="2025-05-17T00:34:41.947179447Z" level=info msg="StartContainer for \"7cfb5b171eb84d8e77847e884ed092895a7da6e4eada91e52c463a4c2c2ae897\" returns successfully" May 17 00:34:41.965518 containerd[1620]: time="2025-05-17T00:34:41.964640360Z" level=info msg="StartContainer for \"cb0c6d1d8902f0539247a4b41b2195f23f730f77d22164cb0389b026fac1f4a7\" returns successfully" May 17 00:34:43.362363 kubelet[2929]: E0517 00:34:43.362292 2929 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-5f94896dd9-9mn8s" podUID="edf2dfa8-9e01-4421-aebb-92beec01d94f" May 17 00:34:44.362161 kubelet[2929]: E0517 00:34:44.362085 2929 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-7b9h8" podUID="fbe11bdf-a1f6-4b6d-b48f-fb37ea5d6cef" May 17 00:34:45.021330 kubelet[2929]: E0517 00:34:44.963624 2929 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:34472->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4081-3-3-n-556bea0d1e.1840294ef1598eb6 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4081-3-3-n-556bea0d1e,UID:bfe7a7960296c153785a2e1f59dc14fe,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4081-3-3-n-556bea0d1e,},FirstTimestamp:2025-05-17 00:34:34.453118646 +0000 UTC m=+364.175548962,LastTimestamp:2025-05-17 00:34:34.453118646 +0000 UTC 
m=+364.175548962,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-3-n-556bea0d1e,}" May 17 00:34:47.426821 kubelet[2929]: E0517 00:34:47.426750 2929 kubelet_node_status.go:535] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"NetworkUnavailable\\\"},{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-05-17T00:34:37Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-05-17T00:34:37Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-05-17T00:34:37Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-05-17T00:34:37Z\\\",\\\"type\\\":\\\"Ready\\\"}]}}\" for node \"ci-4081-3-3-n-556bea0d1e\": Patch \"https://135.181.90.190:6443/api/v1/nodes/ci-4081-3-3-n-556bea0d1e/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" May 17 00:34:47.657672 kubelet[2929]: E0517 00:34:47.657603 2929 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-4081-3-3-n-556bea0d1e\": rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:34604->10.0.0.2:2379: read: connection timed out"
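The rejected event above records a kube-apiserver readiness probe failing with HTTP 500, and the node status patches against https://135.181.90.190:6443 time out as well, consistent with the etcd read timeouts just before. The apiserver's /readyz endpoint exposes the per-check readiness information such a probe typically targets; the sketch below queries it directly. It skips TLS verification purely for illustration and assumes anonymous access to /readyz is still allowed by the default system:public-info-viewer binding:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        // API server address taken from the node status patch error above.
        client := &http.Client{
            Timeout: 10 * time.Second,
            Transport: &http.Transport{
                // Illustration only: do not skip certificate verification in real tooling.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get("https://135.181.90.190:6443/readyz?verbose")
        if err != nil {
            fmt.Println("request failed:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        // With ?verbose the response lists each readiness check (etcd among them),
        // which is where a failure like the probed 500 would show its cause.
        fmt.Println("status:", resp.Status)
        fmt.Println(string(body))
    }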