May 17 00:21:15.854963 kernel: Linux version 6.6.90-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri May 16 22:44:56 -00 2025
May 17 00:21:15.854985 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=6b60288baeea1613a76a6f06a8f0e8edc178eae4857ce00eac42d48e92ed015e
May 17 00:21:15.854993 kernel: BIOS-provided physical RAM map:
May 17 00:21:15.854998 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
May 17 00:21:15.855002 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
May 17 00:21:15.855006 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
May 17 00:21:15.855012 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007cfdbfff] usable
May 17 00:21:15.855017 kernel: BIOS-e820: [mem 0x000000007cfdc000-0x000000007cffffff] reserved
May 17 00:21:15.855023 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
May 17 00:21:15.855028 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
May 17 00:21:15.855033 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
May 17 00:21:15.855037 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
May 17 00:21:15.855042 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
May 17 00:21:15.855046 kernel: NX (Execute Disable) protection: active
May 17 00:21:15.855053 kernel: APIC: Static calls initialized
May 17 00:21:15.855058 kernel: SMBIOS 3.0.0 present.
May 17 00:21:15.855063 kernel: DMI: Hetzner vServer/Standard PC (Q35 + ICH9, 2009), BIOS 20171111 11/11/2017
May 17 00:21:15.855068 kernel: Hypervisor detected: KVM
May 17 00:21:15.855073 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
May 17 00:21:15.855078 kernel: kvm-clock: using sched offset of 2954754326 cycles
May 17 00:21:15.855083 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
May 17 00:21:15.855089 kernel: tsc: Detected 2445.406 MHz processor
May 17 00:21:15.855094 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 17 00:21:15.855101 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 17 00:21:15.855106 kernel: last_pfn = 0x7cfdc max_arch_pfn = 0x400000000
May 17 00:21:15.855121 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
May 17 00:21:15.855126 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 17 00:21:15.855132 kernel: Using GB pages for direct mapping
May 17 00:21:15.855137 kernel: ACPI: Early table checksum verification disabled
May 17 00:21:15.855142 kernel: ACPI: RSDP 0x00000000000F5270 000014 (v00 BOCHS )
May 17 00:21:15.855147 kernel: ACPI: RSDT 0x000000007CFE265D 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 17 00:21:15.855152 kernel: ACPI: FACP 0x000000007CFE244D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 17 00:21:15.855158 kernel: ACPI: DSDT 0x000000007CFE0040 00240D (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 17 00:21:15.855164 kernel: ACPI: FACS 0x000000007CFE0000 000040
May 17 00:21:15.855169 kernel: ACPI: APIC 0x000000007CFE2541 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 17 00:21:15.855174 kernel: ACPI: HPET 0x000000007CFE25C1 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 17 00:21:15.855179 kernel: ACPI: MCFG 0x000000007CFE25F9 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 17 00:21:15.855184 kernel: ACPI: WAET 0x000000007CFE2635 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 17 00:21:15.855189 kernel: ACPI: Reserving FACP table memory at [mem 0x7cfe244d-0x7cfe2540]
May 17 00:21:15.855194 kernel: ACPI: Reserving DSDT table memory at [mem 0x7cfe0040-0x7cfe244c]
May 17 00:21:15.855203 kernel: ACPI: Reserving FACS table memory at [mem 0x7cfe0000-0x7cfe003f]
May 17 00:21:15.855208 kernel: ACPI: Reserving APIC table memory at [mem 0x7cfe2541-0x7cfe25c0]
May 17 00:21:15.855214 kernel: ACPI: Reserving HPET table memory at [mem 0x7cfe25c1-0x7cfe25f8]
May 17 00:21:15.855219 kernel: ACPI: Reserving MCFG table memory at [mem 0x7cfe25f9-0x7cfe2634]
May 17 00:21:15.855224 kernel: ACPI: Reserving WAET table memory at [mem 0x7cfe2635-0x7cfe265c]
May 17 00:21:15.855230 kernel: No NUMA configuration found
May 17 00:21:15.855236 kernel: Faking a node at [mem 0x0000000000000000-0x000000007cfdbfff]
May 17 00:21:15.855241 kernel: NODE_DATA(0) allocated [mem 0x7cfd6000-0x7cfdbfff]
May 17 00:21:15.855247 kernel: Zone ranges:
May 17 00:21:15.855252 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 17 00:21:15.855258 kernel: DMA32 [mem 0x0000000001000000-0x000000007cfdbfff]
May 17 00:21:15.855263 kernel: Normal empty
May 17 00:21:15.855273 kernel: Movable zone start for each node
May 17 00:21:15.855289 kernel: Early memory node ranges
May 17 00:21:15.855304 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
May 17 00:21:15.855319 kernel: node 0: [mem 0x0000000000100000-0x000000007cfdbfff]
May 17 00:21:15.855343 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007cfdbfff]
May 17 00:21:15.855358 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 17 00:21:15.855377 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
May 17 00:21:15.855392 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
May 17 00:21:15.855405 kernel: ACPI: PM-Timer IO Port: 0x608
May 17 00:21:15.855410 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
May 17 00:21:15.855415 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
May 17 00:21:15.855421 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
May 17 00:21:15.855426 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
May 17 00:21:15.855432 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 17 00:21:15.855438 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
May 17 00:21:15.855443 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
May 17 00:21:15.855449 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 17 00:21:15.855454 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
May 17 00:21:15.855459 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
May 17 00:21:15.855465 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
May 17 00:21:15.855470 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
May 17 00:21:15.855475 kernel: Booting paravirtualized kernel on KVM
May 17 00:21:15.855482 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 17 00:21:15.855488 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
May 17 00:21:15.855493 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
May 17 00:21:15.855499 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
May 17 00:21:15.855504 kernel: pcpu-alloc: [0] 0 1
May 17 00:21:15.855509 kernel: kvm-guest: PV spinlocks disabled, no host support
May 17 00:21:15.855515 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=6b60288baeea1613a76a6f06a8f0e8edc178eae4857ce00eac42d48e92ed015e
May 17 00:21:15.855521 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 17 00:21:15.855527 kernel: random: crng init done
May 17 00:21:15.855542 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 17 00:21:15.855555 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
May 17 00:21:15.855561 kernel: Fallback order for Node 0: 0
May 17 00:21:15.855566 kernel: Built 1 zonelists, mobility grouping on. Total pages: 503708
May 17 00:21:15.855571 kernel: Policy zone: DMA32
May 17 00:21:15.855577 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 17 00:21:15.855583 kernel: Memory: 1922052K/2047464K available (12288K kernel code, 2295K rwdata, 22740K rodata, 42872K init, 2320K bss, 125152K reserved, 0K cma-reserved)
May 17 00:21:15.855588 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
May 17 00:21:15.855595 kernel: ftrace: allocating 37948 entries in 149 pages
May 17 00:21:15.855601 kernel: ftrace: allocated 149 pages with 4 groups
May 17 00:21:15.855606 kernel: Dynamic Preempt: voluntary
May 17 00:21:15.855611 kernel: rcu: Preemptible hierarchical RCU implementation.
May 17 00:21:15.855617 kernel: rcu: RCU event tracing is enabled.
May 17 00:21:15.855623 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
May 17 00:21:15.857567 kernel: Trampoline variant of Tasks RCU enabled.
May 17 00:21:15.857575 kernel: Rude variant of Tasks RCU enabled.
May 17 00:21:15.857581 kernel: Tracing variant of Tasks RCU enabled.
May 17 00:21:15.857587 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 17 00:21:15.857597 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
May 17 00:21:15.857602 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
May 17 00:21:15.857612 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 17 00:21:15.857618 kernel: Console: colour VGA+ 80x25
May 17 00:21:15.857624 kernel: printk: console [tty0] enabled
May 17 00:21:15.857651 kernel: printk: console [ttyS0] enabled
May 17 00:21:15.857664 kernel: ACPI: Core revision 20230628
May 17 00:21:15.857675 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
May 17 00:21:15.857685 kernel: APIC: Switch to symmetric I/O mode setup
May 17 00:21:15.857700 kernel: x2apic enabled
May 17 00:21:15.857709 kernel: APIC: Switched APIC routing to: physical x2apic
May 17 00:21:15.857715 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
May 17 00:21:15.857720 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
May 17 00:21:15.857726 kernel: Calibrating delay loop (skipped) preset value.. 4890.81 BogoMIPS (lpj=2445406)
May 17 00:21:15.857731 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
May 17 00:21:15.857737 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
May 17 00:21:15.857742 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
May 17 00:21:15.857755 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 17 00:21:15.857761 kernel: Spectre V2 : Mitigation: Retpolines
May 17 00:21:15.857766 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
May 17 00:21:15.857774 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
May 17 00:21:15.857780 kernel: RETBleed: Mitigation: untrained return thunk
May 17 00:21:15.857785 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
May 17 00:21:15.857791 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
May 17 00:21:15.857797 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
May 17 00:21:15.857803 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
May 17 00:21:15.857810 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
May 17 00:21:15.857827 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
May 17 00:21:15.857833 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
May 17 00:21:15.857839 kernel: Freeing SMP alternatives memory: 32K
May 17 00:21:15.857844 kernel: pid_max: default: 32768 minimum: 301
May 17 00:21:15.857850 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 17 00:21:15.857856 kernel: landlock: Up and running.
May 17 00:21:15.857861 kernel: SELinux: Initializing.
May 17 00:21:15.857868 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
May 17 00:21:15.857874 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
May 17 00:21:15.857880 kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
May 17 00:21:15.857886 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 17 00:21:15.857892 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 17 00:21:15.857897 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 17 00:21:15.857903 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
May 17 00:21:15.857909 kernel: ... version: 0
May 17 00:21:15.857915 kernel: ... bit width: 48
May 17 00:21:15.857922 kernel: ... generic registers: 6
May 17 00:21:15.857927 kernel: ... value mask: 0000ffffffffffff
May 17 00:21:15.857941 kernel: ... max period: 00007fffffffffff
May 17 00:21:15.857947 kernel: ... fixed-purpose events: 0
May 17 00:21:15.857953 kernel: ... event mask: 000000000000003f
May 17 00:21:15.857959 kernel: signal: max sigframe size: 1776
May 17 00:21:15.857964 kernel: rcu: Hierarchical SRCU implementation.
May 17 00:21:15.857970 kernel: rcu: Max phase no-delay instances is 400.
May 17 00:21:15.857976 kernel: smp: Bringing up secondary CPUs ...
May 17 00:21:15.857983 kernel: smpboot: x86: Booting SMP configuration:
May 17 00:21:15.857989 kernel: .... node #0, CPUs: #1
May 17 00:21:15.857994 kernel: smp: Brought up 1 node, 2 CPUs
May 17 00:21:15.858000 kernel: smpboot: Max logical packages: 1
May 17 00:21:15.858005 kernel: smpboot: Total of 2 processors activated (9781.62 BogoMIPS)
May 17 00:21:15.858011 kernel: devtmpfs: initialized
May 17 00:21:15.858016 kernel: x86/mm: Memory block size: 128MB
May 17 00:21:15.858022 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 17 00:21:15.858028 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
May 17 00:21:15.858035 kernel: pinctrl core: initialized pinctrl subsystem
May 17 00:21:15.858041 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 17 00:21:15.858046 kernel: audit: initializing netlink subsys (disabled)
May 17 00:21:15.858052 kernel: audit: type=2000 audit(1747441275.129:1): state=initialized audit_enabled=0 res=1
May 17 00:21:15.858057 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 17 00:21:15.858063 kernel: thermal_sys: Registered thermal governor 'user_space'
May 17 00:21:15.858069 kernel: cpuidle: using governor menu
May 17 00:21:15.858074 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 17 00:21:15.858080 kernel: dca service started, version 1.12.1
May 17 00:21:15.858087 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
May 17 00:21:15.858093 kernel: PCI: Using configuration type 1 for base access
May 17 00:21:15.858099 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 17 00:21:15.858104 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 17 00:21:15.858110 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
May 17 00:21:15.858133 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 17 00:21:15.858138 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
May 17 00:21:15.858144 kernel: ACPI: Added _OSI(Module Device)
May 17 00:21:15.858150 kernel: ACPI: Added _OSI(Processor Device)
May 17 00:21:15.858157 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 17 00:21:15.858163 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 17 00:21:15.858169 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 17 00:21:15.858174 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
May 17 00:21:15.858180 kernel: ACPI: Interpreter enabled
May 17 00:21:15.858185 kernel: ACPI: PM: (supports S0 S5)
May 17 00:21:15.858191 kernel: ACPI: Using IOAPIC for interrupt routing
May 17 00:21:15.858197 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 17 00:21:15.858203 kernel: PCI: Using E820 reservations for host bridge windows
May 17 00:21:15.858210 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
May 17 00:21:15.858215 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 17 00:21:15.858353 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 17 00:21:15.858428 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
May 17 00:21:15.858493 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
May 17 00:21:15.858502 kernel: PCI host bridge to bus 0000:00
May 17 00:21:15.858568 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
May 17 00:21:15.858662 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
May 17 00:21:15.858730 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
May 17 00:21:15.858813 kernel: pci_bus 0000:00: root bus resource [mem 0x7d000000-0xafffffff window]
May 17 00:21:15.859513 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
May 17 00:21:15.859608 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
May 17 00:21:15.859691 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 17 00:21:15.859780 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
May 17 00:21:15.859908 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x030000
May 17 00:21:15.860009 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfb800000-0xfbffffff pref]
May 17 00:21:15.860101 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfd200000-0xfd203fff 64bit pref]
May 17 00:21:15.860189 kernel: pci 0000:00:01.0: reg 0x20: [mem 0xfea10000-0xfea10fff]
May 17 00:21:15.860289 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea00000-0xfea0ffff pref]
May 17 00:21:15.860368 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
May 17 00:21:15.860451 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
May 17 00:21:15.860519 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea11000-0xfea11fff]
May 17 00:21:15.860591 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
May 17 00:21:15.861418 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea12000-0xfea12fff]
May 17 00:21:15.861530 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
May 17 00:21:15.861615 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea13000-0xfea13fff]
May 17 00:21:15.861726 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
May 17 00:21:15.861804 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea14000-0xfea14fff]
May 17 00:21:15.861879 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
May 17 00:21:15.861946 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea15000-0xfea15fff]
May 17 00:21:15.862017 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
May 17 00:21:15.862101 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea16000-0xfea16fff]
May 17 00:21:15.862210 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
May 17 00:21:15.862284 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea17000-0xfea17fff]
May 17 00:21:15.862359 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
May 17 00:21:15.862435 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea18000-0xfea18fff]
May 17 00:21:15.862525 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
May 17 00:21:15.862606 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfea19000-0xfea19fff]
May 17 00:21:15.862695 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
May 17 00:21:15.862783 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
May 17 00:21:15.862855 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
May 17 00:21:15.862918 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc040-0xc05f]
May 17 00:21:15.862980 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea1a000-0xfea1afff]
May 17 00:21:15.863048 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
May 17 00:21:15.863123 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
May 17 00:21:15.863205 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
May 17 00:21:15.863271 kernel: pci 0000:01:00.0: reg 0x14: [mem 0xfe880000-0xfe880fff]
May 17 00:21:15.863337 kernel: pci 0000:01:00.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
May 17 00:21:15.863400 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfe800000-0xfe87ffff pref]
May 17 00:21:15.863465 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
May 17 00:21:15.863528 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff]
May 17 00:21:15.863589 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref]
May 17 00:21:15.865750 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
May 17 00:21:15.865836 kernel: pci 0000:02:00.0: reg 0x10: [mem 0xfe600000-0xfe603fff 64bit]
May 17 00:21:15.865904 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
May 17 00:21:15.865969 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff]
May 17 00:21:15.866031 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
May 17 00:21:15.866102 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
May 17 00:21:15.866194 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfe400000-0xfe400fff]
May 17 00:21:15.866262 kernel: pci 0000:03:00.0: reg 0x20: [mem 0xfcc00000-0xfcc03fff 64bit pref]
May 17 00:21:15.866326 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
May 17 00:21:15.866388 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff]
May 17 00:21:15.866451 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
May 17 00:21:15.866525 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
May 17 00:21:15.866591 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
May 17 00:21:15.866761 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
May 17 00:21:15.866839 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff]
May 17 00:21:15.866903 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
May 17 00:21:15.866976 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
May 17 00:21:15.867042 kernel: pci 0000:05:00.0: reg 0x14: [mem 0xfe000000-0xfe000fff]
May 17 00:21:15.867108 kernel: pci 0000:05:00.0: reg 0x20: [mem 0xfc800000-0xfc803fff 64bit pref]
May 17 00:21:15.867189 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
May 17 00:21:15.867250 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff]
May 17 00:21:15.867316 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
May 17 00:21:15.867388 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
May 17 00:21:15.867453 kernel: pci 0000:06:00.0: reg 0x14: [mem 0xfde00000-0xfde00fff]
May 17 00:21:15.867516 kernel: pci 0000:06:00.0: reg 0x20: [mem 0xfc600000-0xfc603fff 64bit pref]
May 17 00:21:15.867578 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
May 17 00:21:15.868681 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff]
May 17 00:21:15.868777 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
May 17 00:21:15.868786 kernel: acpiphp: Slot [0] registered
May 17 00:21:15.868864 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
May 17 00:21:15.868930 kernel: pci 0000:07:00.0: reg 0x14: [mem 0xfdc80000-0xfdc80fff]
May 17 00:21:15.868995 kernel: pci 0000:07:00.0: reg 0x20: [mem 0xfc400000-0xfc403fff 64bit pref]
May 17 00:21:15.869060 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfdc00000-0xfdc7ffff pref]
May 17 00:21:15.869136 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
May 17 00:21:15.869201 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff]
May 17 00:21:15.869264 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
May 17 00:21:15.869276 kernel: acpiphp: Slot [0-2] registered
May 17 00:21:15.869337 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
May 17 00:21:15.869399 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff]
May 17 00:21:15.869459 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
May 17 00:21:15.869467 kernel: acpiphp: Slot [0-3] registered
May 17 00:21:15.869527 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
May 17 00:21:15.869587 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff]
May 17 00:21:15.870714 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
May 17 00:21:15.870731 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
May 17 00:21:15.870738 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
May 17 00:21:15.870744 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
May 17 00:21:15.870750 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
May 17 00:21:15.870756 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
May 17 00:21:15.870762 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
May 17 00:21:15.870767 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
May 17 00:21:15.870773 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
May 17 00:21:15.870778 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
May 17 00:21:15.870786 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
May 17 00:21:15.870792 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
May 17 00:21:15.870797 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
May 17 00:21:15.870803 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
May 17 00:21:15.870809 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
May 17 00:21:15.870814 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
May 17 00:21:15.870820 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
May 17 00:21:15.870825 kernel: iommu: Default domain type: Translated
May 17 00:21:15.870831 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 17 00:21:15.870838 kernel: PCI: Using ACPI for IRQ routing
May 17 00:21:15.870844 kernel: PCI: pci_cache_line_size set to 64 bytes
May 17 00:21:15.870849 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
May 17 00:21:15.870855 kernel: e820: reserve RAM buffer [mem 0x7cfdc000-0x7fffffff]
May 17 00:21:15.870926 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
May 17 00:21:15.871003 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
May 17 00:21:15.871127 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
May 17 00:21:15.871138 kernel: vgaarb: loaded
May 17 00:21:15.871145 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
May 17 00:21:15.871154 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
May 17 00:21:15.871160 kernel: clocksource: Switched to clocksource kvm-clock
May 17 00:21:15.871166 kernel: VFS: Disk quotas dquot_6.6.0
May 17 00:21:15.871173 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 17 00:21:15.871178 kernel: pnp: PnP ACPI init
May 17 00:21:15.871251 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
May 17 00:21:15.871261 kernel: pnp: PnP ACPI: found 5 devices
May 17 00:21:15.871267 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 17 00:21:15.871275 kernel: NET: Registered PF_INET protocol family
May 17 00:21:15.871281 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 17 00:21:15.871287 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
May 17 00:21:15.871293 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 17 00:21:15.871299 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
May 17 00:21:15.871305 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
May 17 00:21:15.871310 kernel: TCP: Hash tables configured (established 16384 bind 16384)
May 17 00:21:15.871316 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
May 17 00:21:15.871322 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
May 17 00:21:15.871330 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 17 00:21:15.871336 kernel: NET: Registered PF_XDP protocol family
May 17 00:21:15.871397 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
May 17 00:21:15.871461 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
May 17 00:21:15.871523 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
May 17 00:21:15.871584 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x1000-0x1fff]
May 17 00:21:15.872680 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x2000-0x2fff]
May 17 00:21:15.872756 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x3000-0x3fff]
May 17 00:21:15.872819 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
May 17 00:21:15.872880 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff]
May 17 00:21:15.872941 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref]
May 17 00:21:15.873002 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
May 17 00:21:15.873063 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff]
May 17 00:21:15.873138 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
May 17 00:21:15.873201 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
May 17 00:21:15.873267 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff]
May 17 00:21:15.873354 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
May 17 00:21:15.873432 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
May 17 00:21:15.873496 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff]
May 17 00:21:15.873557 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
May 17 00:21:15.873619 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
May 17 00:21:15.874738 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff]
May 17 00:21:15.874807 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
May 17 00:21:15.874879 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
May 17 00:21:15.874943 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff]
May 17 00:21:15.875004 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
May 17 00:21:15.875064 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
May 17 00:21:15.875139 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x1fff]
May 17 00:21:15.875205 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff]
May 17 00:21:15.875269 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
May 17 00:21:15.875331 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
May 17 00:21:15.875391 kernel: pci 0000:00:02.7: bridge window [io 0x2000-0x2fff]
May 17 00:21:15.875468 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff]
May 17 00:21:15.875541 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
May 17 00:21:15.875620 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
May 17 00:21:15.876737 kernel: pci 0000:00:03.0: bridge window [io 0x3000-0x3fff]
May 17 00:21:15.876809 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff]
May 17 00:21:15.876894 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
May 17 00:21:15.876955 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
May 17 00:21:15.877012 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
May 17 00:21:15.877066 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
May 17 00:21:15.877135 kernel: pci_bus 0000:00: resource 7 [mem 0x7d000000-0xafffffff window]
May 17 00:21:15.877193 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
May 17 00:21:15.877252 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
May 17 00:21:15.877362 kernel: pci_bus 0000:01: resource 1 [mem 0xfe800000-0xfe9fffff]
May 17 00:21:15.877444 kernel: pci_bus 0000:01: resource 2 [mem 0xfd000000-0xfd1fffff 64bit pref]
May 17 00:21:15.877512 kernel: pci_bus 0000:02: resource 1 [mem 0xfe600000-0xfe7fffff]
May 17 00:21:15.877570 kernel: pci_bus 0000:02: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
May 17 00:21:15.879667 kernel: pci_bus 0000:03: resource 1 [mem 0xfe400000-0xfe5fffff]
May 17 00:21:15.879744 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
May 17 00:21:15.879822 kernel: pci_bus 0000:04: resource 1 [mem 0xfe200000-0xfe3fffff]
May 17 00:21:15.879881 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
May 17 00:21:15.879944 kernel: pci_bus 0000:05: resource 1 [mem 0xfe000000-0xfe1fffff]
May 17 00:21:15.880001 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
May 17 00:21:15.880063 kernel: pci_bus 0000:06: resource 1 [mem 0xfde00000-0xfdffffff]
May 17 00:21:15.880138 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
May 17 00:21:15.880203 kernel: pci_bus 0000:07: resource 0 [io 0x1000-0x1fff]
May 17 00:21:15.880259 kernel: pci_bus 0000:07: resource 1 [mem 0xfdc00000-0xfddfffff]
May 17 00:21:15.880314 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
May 17 00:21:15.880385 kernel: pci_bus 0000:08: resource 0 [io 0x2000-0x2fff]
May 17 00:21:15.880454 kernel: pci_bus 0000:08: resource 1 [mem 0xfda00000-0xfdbfffff]
May 17 00:21:15.880511 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref]
May 17 00:21:15.880699 kernel: pci_bus 0000:09: resource 0 [io 0x3000-0x3fff]
May 17 00:21:15.880916 kernel: pci_bus 0000:09: resource 1 [mem 0xfd800000-0xfd9fffff]
May 17 00:21:15.881141 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
May 17 00:21:15.881176 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
May 17 00:21:15.881196 kernel: PCI: CLS 0 bytes, default 64
May 17 00:21:15.881217 kernel: Initialise system trusted keyrings
May 17 00:21:15.881237 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
May 17 00:21:15.881254 kernel: Key type asymmetric registered
May 17 00:21:15.881264 kernel: Asymmetric key parser 'x509' registered
May 17 00:21:15.881270 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
May 17 00:21:15.881276 kernel: io scheduler mq-deadline registered
May 17 00:21:15.881283 kernel: io scheduler kyber registered
May 17 00:21:15.881289 kernel: io scheduler bfq registered
May 17 00:21:15.881361 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24
May 17 00:21:15.882774 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24
May 17 00:21:15.882869 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25
May 17 00:21:15.882941 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25
May 17 00:21:15.883023 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26
May 17 00:21:15.883089 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26
May 17 00:21:15.883179 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27
May 17 00:21:15.883248 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27
May 17 00:21:15.883318 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28
May 17 00:21:15.883383 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28
May 17 00:21:15.883451 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29
May 17 00:21:15.883517 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29
May 17 00:21:15.883592 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30
May 17 00:21:15.883680 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30
May 17 00:21:15.883752 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31
May 17 00:21:15.883818 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31
May 17 00:21:15.883828 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
May 17 00:21:15.883894 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 32
May 17 00:21:15.883957 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 32
May 17 00:21:15.883966 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 17 00:21:15.883977 kernel: ACPI: \_SB_.GSIF: Enabled at IRQ 21
May 17 00:21:15.883983 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 17 00:21:15.883990 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 17 00:21:15.883996 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
May 17 00:21:15.884003 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
May 17 00:21:15.884009 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
May 17 00:21:15.884083 kernel: rtc_cmos 00:03: RTC can wake from S4
May 17 00:21:15.884094 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
May 17 00:21:15.884172 kernel: rtc_cmos 00:03: registered as rtc0
May 17 00:21:15.884234 kernel: rtc_cmos 00:03: setting system clock to 2025-05-17T00:21:15 UTC (1747441275)
May 17 00:21:15.884293 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
May 17 00:21:15.884302 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
May 17 00:21:15.884309 kernel: NET: Registered PF_INET6 protocol family
May 17 00:21:15.884319 kernel: Segment Routing with IPv6
May 17 00:21:15.884325 kernel: In-situ OAM (IOAM) with IPv6
May 17 00:21:15.884331 kernel: NET: Registered PF_PACKET protocol family
May 17 00:21:15.884338 kernel: Key type dns_resolver registered
May 17 00:21:15.884346 kernel: IPI shorthand broadcast: enabled
May 17 00:21:15.884353 kernel: sched_clock: Marking stable (1123007080, 134630922)->(1294955643, -37317641)
May 17 00:21:15.884361 kernel: registered taskstats version 1
May 17 00:21:15.884367 kernel: Loading compiled-in X.509 certificates
May 17 00:21:15.884374 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.90-flatcar: 85b8d1234ceca483cb3defc2030d93f7792663c9'
May 17 00:21:15.884380 kernel: Key type .fscrypt registered
May 17 00:21:15.884386 kernel: Key type fscrypt-provisioning registered
May 17 00:21:15.884392 kernel: ima: No TPM chip found, activating TPM-bypass!
May 17 00:21:15.884400 kernel: ima: Allocated hash algorithm: sha1
May 17 00:21:15.884406 kernel: ima: No architecture policies found
May 17 00:21:15.884412 kernel: clk: Disabling unused clocks
May 17 00:21:15.884418 kernel: Freeing unused kernel image (initmem) memory: 42872K
May 17 00:21:15.884424 kernel: Write protecting the kernel read-only data: 36864k
May 17 00:21:15.884430 kernel: Freeing unused kernel image (rodata/data gap) memory: 1836K
May 17 00:21:15.884436 kernel: Run /init as init process
May 17 00:21:15.884442 kernel: with arguments:
May 17 00:21:15.884449 kernel: /init
May 17 00:21:15.884454 kernel: with environment:
May 17 00:21:15.884462 kernel: HOME=/
May 17 00:21:15.884468 kernel: TERM=linux
May 17 00:21:15.884474 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 17 00:21:15.884482 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 17 00:21:15.884491 systemd[1]: Detected virtualization kvm.
May 17 00:21:15.884498 systemd[1]: Detected architecture x86-64.
May 17 00:21:15.884505 systemd[1]: Running in initrd.
May 17 00:21:15.884512 systemd[1]: No hostname configured, using default hostname.
May 17 00:21:15.884518 systemd[1]: Hostname set to <localhost>.
May 17 00:21:15.884525 systemd[1]: Initializing machine ID from VM UUID.
May 17 00:21:15.884532 systemd[1]: Queued start job for default target initrd.target.
May 17 00:21:15.884538 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 17 00:21:15.884545 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 17 00:21:15.884552 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 17 00:21:15.884558 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 17 00:21:15.884567 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 17 00:21:15.884573 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 17 00:21:15.884581 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 17 00:21:15.884588 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 17 00:21:15.884594 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 17 00:21:15.884601 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 17 00:21:15.884608 systemd[1]: Reached target paths.target - Path Units.
May 17 00:21:15.884616 systemd[1]: Reached target slices.target - Slice Units.
May 17 00:21:15.884622 systemd[1]: Reached target swap.target - Swaps.
May 17 00:21:15.886660 systemd[1]: Reached target timers.target - Timer Units.
May 17 00:21:15.886673 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 17 00:21:15.886680 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 17 00:21:15.886687 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 17 00:21:15.886694 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
May 17 00:21:15.886701 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 17 00:21:15.886707 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 17 00:21:15.886718 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 17 00:21:15.886724 systemd[1]: Reached target sockets.target - Socket Units.
May 17 00:21:15.886731 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 17 00:21:15.886738 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 17 00:21:15.886744 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 17 00:21:15.886751 systemd[1]: Starting systemd-fsck-usr.service...
May 17 00:21:15.886758 systemd[1]: Starting systemd-journald.service - Journal Service...
May 17 00:21:15.886764 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 17 00:21:15.886793 systemd-journald[187]: Collecting audit messages is disabled.
May 17 00:21:15.886813 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 17 00:21:15.886820 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 17 00:21:15.886827 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 17 00:21:15.886837 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 17 00:21:15.886843 kernel: Bridge firewalling registered
May 17 00:21:15.886851 systemd-journald[187]: Journal started
May 17 00:21:15.886869 systemd-journald[187]: Runtime Journal (/run/log/journal/95923ccb626f45ca8e99c061b83ab568) is 4.8M, max 38.4M, 33.6M free.
May 17 00:21:15.855576 systemd-modules-load[188]: Inserted module 'overlay'
May 17 00:21:15.918504 systemd[1]: Started systemd-journald.service - Journal Service.
May 17 00:21:15.886276 systemd-modules-load[188]: Inserted module 'br_netfilter'
May 17 00:21:15.919066 systemd[1]: Finished systemd-fsck-usr.service.
May 17 00:21:15.920009 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 17 00:21:15.921052 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 17 00:21:15.927771 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 17 00:21:15.929363 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 17 00:21:15.932773 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 17 00:21:15.936306 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 17 00:21:15.944911 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 17 00:21:15.949733 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 17 00:21:15.950448 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 17 00:21:15.951091 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 17 00:21:15.958736 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 17 00:21:15.960583 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 17 00:21:15.963230 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 17 00:21:15.970902 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 17 00:21:15.973438 dracut-cmdline[217]: dracut-dracut-053
May 17 00:21:15.976499 dracut-cmdline[217]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=6b60288baeea1613a76a6f06a8f0e8edc178eae4857ce00eac42d48e92ed015e
May 17 00:21:15.988929 systemd-resolved[220]: Positive Trust Anchors:
May 17 00:21:15.988943 systemd-resolved[220]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 17 00:21:15.988968 systemd-resolved[220]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 17 00:21:15.991795 systemd-resolved[220]: Defaulting to hostname 'linux'.
May 17 00:21:15.992503 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 17 00:21:15.997896 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 17 00:21:16.040673 kernel: SCSI subsystem initialized
May 17 00:21:16.047673 kernel: Loading iSCSI transport class v2.0-870.
May 17 00:21:16.056662 kernel: iscsi: registered transport (tcp)
May 17 00:21:16.073652 kernel: iscsi: registered transport (qla4xxx)
May 17 00:21:16.073689 kernel: QLogic iSCSI HBA Driver
May 17 00:21:16.106005 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 17 00:21:16.112739 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 17 00:21:16.131776 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 17 00:21:16.131829 kernel: device-mapper: uevent: version 1.0.3
May 17 00:21:16.132953 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
May 17 00:21:16.170662 kernel: raid6: avx2x4 gen() 33834 MB/s
May 17 00:21:16.187665 kernel: raid6: avx2x2 gen() 32496 MB/s
May 17 00:21:16.204895 kernel: raid6: avx2x1 gen() 25908 MB/s
May 17 00:21:16.204929 kernel: raid6: using algorithm avx2x4 gen() 33834 MB/s
May 17 00:21:16.223861 kernel: raid6: .... xor() 4288 MB/s, rmw enabled
May 17 00:21:16.223885 kernel: raid6: using avx2x2 recovery algorithm
May 17 00:21:16.240665 kernel: xor: automatically using best checksumming function avx
May 17 00:21:16.377670 kernel: Btrfs loaded, zoned=no, fsverity=no
May 17 00:21:16.387025 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 17 00:21:16.391757 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 17 00:21:16.405372 systemd-udevd[404]: Using default interface naming scheme 'v255'.
May 17 00:21:16.408833 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 17 00:21:16.416799 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 17 00:21:16.426880 dracut-pre-trigger[410]: rd.md=0: removing MD RAID activation
May 17 00:21:16.449530 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 17 00:21:16.453740 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 17 00:21:16.502451 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 17 00:21:16.508818 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 17 00:21:16.523066 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 17 00:21:16.525132 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 17 00:21:16.526737 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 17 00:21:16.528145 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 17 00:21:16.534843 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 17 00:21:16.549103 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 17 00:21:16.569713 kernel: scsi host0: Virtio SCSI HBA
May 17 00:21:16.575643 kernel: cryptd: max_cpu_qlen set to 1000
May 17 00:21:16.586395 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
May 17 00:21:16.592596 kernel: ACPI: bus type USB registered
May 17 00:21:16.593005 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 17 00:21:16.593819 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 17 00:21:16.595482 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 17 00:21:16.599565 kernel: usbcore: registered new interface driver usbfs
May 17 00:21:16.598047 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 17 00:21:16.598166 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 17 00:21:16.598645 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 17 00:21:16.607732 kernel: usbcore: registered new interface driver hub
May 17 00:21:16.607906 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 17 00:21:16.654675 kernel: libata version 3.00 loaded.
May 17 00:21:16.654752 kernel: AVX2 version of gcm_enc/dec engaged.
May 17 00:21:16.655750 kernel: AES CTR mode by8 optimization enabled
May 17 00:21:16.668655 kernel: usbcore: registered new device driver usb
May 17 00:21:16.684951 kernel: ahci 0000:00:1f.2: version 3.0
May 17 00:21:16.685157 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
May 17 00:21:16.685170 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
May 17 00:21:16.685255 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
May 17 00:21:16.688650 kernel: scsi host1: ahci
May 17 00:21:16.688774 kernel: sd 0:0:0:0: Power-on or device reset occurred
May 17 00:21:16.688922 kernel: sd 0:0:0:0: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB)
May 17 00:21:16.689017 kernel: sd 0:0:0:0: [sda] Write Protect is off
May 17 00:21:16.689137 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08
May 17 00:21:16.689229 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
May 17 00:21:16.689308 kernel: scsi host2: ahci
May 17 00:21:16.691891 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 17 00:21:16.691916 kernel: GPT:17805311 != 80003071
May 17 00:21:16.691925 kernel: GPT:Alternate GPT header not at the end of the disk.
May 17 00:21:16.691933 kernel: GPT:17805311 != 80003071
May 17 00:21:16.691940 kernel: GPT: Use GNU Parted to correct GPT errors.
May 17 00:21:16.691947 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 17 00:21:16.691954 kernel: scsi host3: ahci
May 17 00:21:16.692056 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
May 17 00:21:16.695865 kernel: scsi host4: ahci
May 17 00:21:16.695982 kernel: scsi host5: ahci
May 17 00:21:16.696065 kernel: scsi host6: ahci
May 17 00:21:16.696160 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a100 irq 48
May 17 00:21:16.696169 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a180 irq 48
May 17 00:21:16.696176 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a200 irq 48
May 17 00:21:16.696183 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a280 irq 48
May 17 00:21:16.696190 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a300 irq 48
May 17 00:21:16.696200 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a380 irq 48
May 17 00:21:16.748066 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
May 17 00:21:16.753378 kernel: BTRFS: device fsid 7f88d479-6686-439c-8052-b96f0a9d77bc devid 1 transid 38 /dev/sda3 scanned by (udev-worker) (451)
May 17 00:21:16.753399 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (456)
May 17 00:21:16.754177 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
May 17 00:21:16.755748 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 17 00:21:16.762156 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 17 00:21:16.770060 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
May 17 00:21:16.774325 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
May 17 00:21:16.775622 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
May 17 00:21:16.777139 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 17 00:21:16.782766 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 17 00:21:16.787294 disk-uuid[574]: Primary Header is updated.
May 17 00:21:16.787294 disk-uuid[574]: Secondary Entries is updated.
May 17 00:21:16.787294 disk-uuid[574]: Secondary Header is updated.
May 17 00:21:16.800673 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 17 00:21:16.808663 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 17 00:21:16.817662 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 17 00:21:17.007581 kernel: ata5: SATA link down (SStatus 0 SControl 300)
May 17 00:21:17.007684 kernel: ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
May 17 00:21:17.007699 kernel: ata4: SATA link down (SStatus 0 SControl 300)
May 17 00:21:17.007721 kernel: ata3: SATA link down (SStatus 0 SControl 300)
May 17 00:21:17.007731 kernel: ata2: SATA link down (SStatus 0 SControl 300)
May 17 00:21:17.009660 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
May 17 00:21:17.010730 kernel: ata1.00: applying bridge limits
May 17 00:21:17.012951 kernel: ata6: SATA link down (SStatus 0 SControl 300)
May 17 00:21:17.012977 kernel: ata1.00: configured for UDMA/100
May 17 00:21:17.014771 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
May 17 00:21:17.048188 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
May 17 00:21:17.051742 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1
May 17 00:21:17.051891 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
May 17 00:21:17.055165 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
May 17 00:21:17.055313 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2
May 17 00:21:17.058819 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed
May 17 00:21:17.060375 kernel: hub 1-0:1.0: USB hub found
May 17 00:21:17.060523 kernel: hub 1-0:1.0: 4 ports detected
May 17 00:21:17.062997 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
May 17 00:21:17.064919 kernel: hub 2-0:1.0: USB hub found
May 17 00:21:17.065964 kernel: hub 2-0:1.0: 4 ports detected
May 17 00:21:17.069272 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
May 17 00:21:17.069401 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
May 17 00:21:17.083657 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0
May 17 00:21:17.303715 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd
May 17 00:21:17.440669 kernel: hid: raw HID events driver (C) Jiri Kosina
May 17 00:21:17.445733 kernel: usbcore: registered new interface driver usbhid
May 17 00:21:17.445806 kernel: usbhid: USB HID core driver
May 17 00:21:17.454816 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input3
May 17 00:21:17.454876 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0
May 17 00:21:17.817710 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 17 00:21:17.817786 disk-uuid[575]: The operation has completed successfully.
May 17 00:21:17.872599 systemd[1]: disk-uuid.service: Deactivated successfully.
May 17 00:21:17.872708 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 17 00:21:17.880736 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 17 00:21:17.884078 sh[598]: Success
May 17 00:21:17.894655 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
May 17 00:21:17.943928 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 17 00:21:17.953420 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 17 00:21:17.954695 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 17 00:21:17.971392 kernel: BTRFS info (device dm-0): first mount of filesystem 7f88d479-6686-439c-8052-b96f0a9d77bc
May 17 00:21:17.971445 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
May 17 00:21:17.971462 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
May 17 00:21:17.974311 kernel: BTRFS info (device dm-0): disabling log replay at mount time
May 17 00:21:17.974342 kernel: BTRFS info (device dm-0): using free space tree
May 17 00:21:17.982664 kernel: BTRFS info (device dm-0): enabling ssd optimizations
May 17 00:21:17.985129 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 17 00:21:17.986343 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 17 00:21:17.992815 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 17 00:21:17.995952 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 17 00:21:18.006664 kernel: BTRFS info (device sda6): first mount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac
May 17 00:21:18.009361 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
May 17 00:21:18.009403 kernel: BTRFS info (device sda6): using free space tree
May 17 00:21:18.015245 kernel: BTRFS info (device sda6): enabling ssd optimizations
May 17 00:21:18.015282 kernel: BTRFS info (device sda6): auto enabling async discard
May 17 00:21:18.025669 kernel: BTRFS info (device sda6): last unmount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac
May 17 00:21:18.026019 systemd[1]: mnt-oem.mount: Deactivated successfully.
May 17 00:21:18.033324 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 17 00:21:18.042042 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 17 00:21:18.092689 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 17 00:21:18.101279 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 17 00:21:18.114470 ignition[714]: Ignition 2.19.0
May 17 00:21:18.114480 ignition[714]: Stage: fetch-offline
May 17 00:21:18.114511 ignition[714]: no configs at "/usr/lib/ignition/base.d"
May 17 00:21:18.114518 ignition[714]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
May 17 00:21:18.116677 ignition[714]: parsed url from cmdline: ""
May 17 00:21:18.118197 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
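[Annotation] verity-setup above maps the read-only /usr partition through dm-verity, which authenticates each data block against a hash tree whose root is pinned by the verity.usrhash= kernel argument, so any tampering with /usr fails the read rather than going unnoticed. A minimal sketch of the underlying idea, hashing 4 KiB blocks with SHA-256 as the "sha256-ni" line reports (real dm-verity builds a multi-level, salted tree; the flat fold below and the path are simplifying assumptions):

import hashlib

BLOCK = 4096  # dm-verity's default data block size

def leaf_hashes(path):
    # hash every 4 KiB data block, padding the final partial block with zeros
    with open(path, "rb") as f:
        while block := f.read(BLOCK):
            yield hashlib.sha256(block.ljust(BLOCK, b"\0")).digest()

def root_digest(path):
    # fold all leaf hashes into one digest; the real target keeps a full
    # Merkle tree so single blocks can be verified on demand
    h = hashlib.sha256()
    for leaf in leaf_hashes(path):
        h.update(leaf)
    return h.hexdigest()

# e.g. root_digest("/path/to/usr.img") yields a hex digest playing the role
# of the verity.usrhash= value (assumed example path)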
May 17 00:21:18.116680 ignition[714]: no config URL provided
May 17 00:21:18.116685 ignition[714]: reading system config file "/usr/lib/ignition/user.ign"
May 17 00:21:18.116692 ignition[714]: no config at "/usr/lib/ignition/user.ign"
May 17 00:21:18.116697 ignition[714]: failed to fetch config: resource requires networking
May 17 00:21:18.116841 ignition[714]: Ignition finished successfully
May 17 00:21:18.123562 systemd-networkd[783]: lo: Link UP
May 17 00:21:18.123574 systemd-networkd[783]: lo: Gained carrier
May 17 00:21:18.125595 systemd-networkd[783]: Enumeration completed
May 17 00:21:18.125699 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 17 00:21:18.126218 systemd-networkd[783]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 17 00:21:18.126223 systemd-networkd[783]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 17 00:21:18.127543 systemd[1]: Reached target network.target - Network.
May 17 00:21:18.127903 systemd-networkd[783]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 17 00:21:18.127907 systemd-networkd[783]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
May 17 00:21:18.130097 systemd-networkd[783]: eth0: Link UP
May 17 00:21:18.130102 systemd-networkd[783]: eth0: Gained carrier
May 17 00:21:18.130110 systemd-networkd[783]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 17 00:21:18.134802 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
May 17 00:21:18.135345 systemd-networkd[783]: eth1: Link UP
May 17 00:21:18.135348 systemd-networkd[783]: eth1: Gained carrier
May 17 00:21:18.135355 systemd-networkd[783]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 17 00:21:18.147177 ignition[787]: Ignition 2.19.0
May 17 00:21:18.147755 ignition[787]: Stage: fetch
May 17 00:21:18.147940 ignition[787]: no configs at "/usr/lib/ignition/base.d"
May 17 00:21:18.147949 ignition[787]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
May 17 00:21:18.148022 ignition[787]: parsed url from cmdline: ""
May 17 00:21:18.148025 ignition[787]: no config URL provided
May 17 00:21:18.148029 ignition[787]: reading system config file "/usr/lib/ignition/user.ign"
May 17 00:21:18.148034 ignition[787]: no config at "/usr/lib/ignition/user.ign"
May 17 00:21:18.148053 ignition[787]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1
May 17 00:21:18.148196 ignition[787]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable
May 17 00:21:18.157688 systemd-networkd[783]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1
May 17 00:21:18.190678 systemd-networkd[783]: eth0: DHCPv4 address 37.27.204.183/32, gateway 172.31.1.1 acquired from 172.31.1.1
May 17 00:21:18.348810 ignition[787]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2
May 17 00:21:18.356742 ignition[787]: GET result: OK
May 17 00:21:18.356885 ignition[787]: parsing config with SHA512: 179cf571ddfd007b79f29b4d47e83cb4a3834095b01aa3322409cbae0474bd868176a9d05a718282c12f614e479b6d90739192df4b1caee331cd1b186e21ed45
May 17 00:21:18.361336 unknown[787]: fetched base config from "system"
May 17 00:21:18.362268 ignition[787]: fetch: fetch complete
May 17 00:21:18.361382 unknown[787]: fetched base config from "system"
May 17 00:21:18.362276 ignition[787]: fetch: fetch passed
May 17 00:21:18.361393 unknown[787]: fetched user config from "hetzner"
May 17 00:21:18.362351 ignition[787]: Ignition finished successfully
May 17 00:21:18.365339 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
May 17 00:21:18.371854 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 17 00:21:18.389316 ignition[796]: Ignition 2.19.0
May 17 00:21:18.389337 ignition[796]: Stage: kargs
May 17 00:21:18.389608 ignition[796]: no configs at "/usr/lib/ignition/base.d"
May 17 00:21:18.389625 ignition[796]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
May 17 00:21:18.391213 ignition[796]: kargs: kargs passed
May 17 00:21:18.391321 ignition[796]: Ignition finished successfully
May 17 00:21:18.394223 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 17 00:21:18.401840 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 17 00:21:18.415062 ignition[802]: Ignition 2.19.0
May 17 00:21:18.415086 ignition[802]: Stage: disks
May 17 00:21:18.415367 ignition[802]: no configs at "/usr/lib/ignition/base.d"
May 17 00:21:18.415385 ignition[802]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
May 17 00:21:18.416920 ignition[802]: disks: disks passed
May 17 00:21:18.418355 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 17 00:21:18.417017 ignition[802]: Ignition finished successfully
May 17 00:21:18.420148 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 17 00:21:18.421586 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 17 00:21:18.423068 systemd[1]: Reached target local-fs.target - Local File Systems.
May 17 00:21:18.424573 systemd[1]: Reached target sysinit.target - System Initialization.
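[Annotation] The fetch stage above races the network coming up: attempt #1 against the Hetzner metadata service fails with "network is unreachable" while eth0/eth1 are still unconfigured, and attempt #2 succeeds once DHCP has assigned addresses. A minimal sketch of that retry loop against the endpoint seen in the log (the attempt count and backoff values are assumptions, not Ignition's actual schedule; Ignition itself is Go, this is just illustrative Python):

import time
import hashlib
import urllib.request

URL = "http://169.254.169.254/hetzner/v1/userdata"  # endpoint from the log above

def fetch_userdata(attempts=5, delay=1.0):
    for attempt in range(1, attempts + 1):
        try:
            with urllib.request.urlopen(URL, timeout=10) as resp:
                return resp.read()
        except OSError as err:  # e.g. "network is unreachable" before DHCP finishes
            print(f"GET {URL}: attempt #{attempt} failed: {err}")
            time.sleep(delay)
            delay *= 2  # assumed exponential backoff
    raise RuntimeError("metadata service never became reachable")

config = fetch_userdata()
# Ignition logs the SHA512 of the config it parsed, as seen in the log above
print("parsing config with SHA512:", hashlib.sha512(config).hexdigest())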
May 17 00:21:18.426438 systemd[1]: Reached target basic.target - Basic System.
May 17 00:21:18.432829 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 17 00:21:18.450029 systemd-fsck[810]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
May 17 00:21:18.453919 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 17 00:21:18.459837 systemd[1]: Mounting sysroot.mount - /sysroot...
May 17 00:21:18.535663 kernel: EXT4-fs (sda9): mounted filesystem 278698a4-82b6-49b4-b6df-f7999ed4e35e r/w with ordered data mode. Quota mode: none.
May 17 00:21:18.536018 systemd[1]: Mounted sysroot.mount - /sysroot.
May 17 00:21:18.536897 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 17 00:21:18.542687 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 17 00:21:18.545717 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 17 00:21:18.546861 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
May 17 00:21:18.547945 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 17 00:21:18.547969 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 17 00:21:18.559023 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (818)
May 17 00:21:18.559061 kernel: BTRFS info (device sda6): first mount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac
May 17 00:21:18.562799 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 17 00:21:18.565895 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
May 17 00:21:18.565915 kernel: BTRFS info (device sda6): using free space tree
May 17 00:21:18.566183 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 17 00:21:18.571676 kernel: BTRFS info (device sda6): enabling ssd optimizations
May 17 00:21:18.571700 kernel: BTRFS info (device sda6): auto enabling async discard
May 17 00:21:18.577554 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 17 00:21:18.616852 initrd-setup-root[846]: cut: /sysroot/etc/passwd: No such file or directory
May 17 00:21:18.617853 coreos-metadata[820]: May 17 00:21:18.616 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1
May 17 00:21:18.619720 coreos-metadata[820]: May 17 00:21:18.618 INFO Fetch successful
May 17 00:21:18.619720 coreos-metadata[820]: May 17 00:21:18.619 INFO wrote hostname ci-4081-3-3-n-82e895e080 to /sysroot/etc/hostname
May 17 00:21:18.621282 initrd-setup-root[853]: cut: /sysroot/etc/group: No such file or directory
May 17 00:21:18.623706 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
May 17 00:21:18.625248 initrd-setup-root[860]: cut: /sysroot/etc/shadow: No such file or directory
May 17 00:21:18.628171 initrd-setup-root[868]: cut: /sysroot/etc/gshadow: No such file or directory
May 17 00:21:18.700724 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 17 00:21:18.705720 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 17 00:21:18.709082 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
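[Annotation] The flatcar-metadata-hostname service above does one small job: fetch the machine's hostname from the metadata service and write it into the not-yet-pivoted root, which is why the target path is under /sysroot. A minimal sketch of the equivalent steps (the real agent is coreos-metadata; URL and path are taken from the log, everything else is illustrative):

import urllib.request

URL = "http://169.254.169.254/hetzner/v1/metadata/hostname"  # from the log above

with urllib.request.urlopen(URL, timeout=10) as resp:
    hostname = resp.read().decode().strip()

# written under /sysroot because this runs from the initrd, before switch-root
with open("/sysroot/etc/hostname", "w") as f:
    f.write(hostname + "\n")
print(f"wrote hostname {hostname} to /sysroot/etc/hostname")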
May 17 00:21:18.713643 kernel: BTRFS info (device sda6): last unmount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac
May 17 00:21:18.728588 ignition[935]: INFO : Ignition 2.19.0
May 17 00:21:18.730261 ignition[935]: INFO : Stage: mount
May 17 00:21:18.730261 ignition[935]: INFO : no configs at "/usr/lib/ignition/base.d"
May 17 00:21:18.730261 ignition[935]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
May 17 00:21:18.733488 ignition[935]: INFO : mount: mount passed
May 17 00:21:18.733488 ignition[935]: INFO : Ignition finished successfully
May 17 00:21:18.733425 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 17 00:21:18.734182 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 17 00:21:18.749732 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 17 00:21:18.969418 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 17 00:21:18.976815 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 17 00:21:18.988668 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (947)
May 17 00:21:18.988717 kernel: BTRFS info (device sda6): first mount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac
May 17 00:21:18.992313 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
May 17 00:21:18.997598 kernel: BTRFS info (device sda6): using free space tree
May 17 00:21:19.005911 kernel: BTRFS info (device sda6): enabling ssd optimizations
May 17 00:21:19.005958 kernel: BTRFS info (device sda6): auto enabling async discard
May 17 00:21:19.012691 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 17 00:21:19.044943 ignition[963]: INFO : Ignition 2.19.0
May 17 00:21:19.046254 ignition[963]: INFO : Stage: files
May 17 00:21:19.047730 ignition[963]: INFO : no configs at "/usr/lib/ignition/base.d"
May 17 00:21:19.047730 ignition[963]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
May 17 00:21:19.050455 ignition[963]: DEBUG : files: compiled without relabeling support, skipping
May 17 00:21:19.051865 ignition[963]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 17 00:21:19.051865 ignition[963]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 17 00:21:19.055556 ignition[963]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 17 00:21:19.057251 ignition[963]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 17 00:21:19.059110 unknown[963]: wrote ssh authorized keys file for user: core
May 17 00:21:19.060694 ignition[963]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 17 00:21:19.062091 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
May 17 00:21:19.062091 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
May 17 00:21:19.343466 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 17 00:21:19.579794 systemd-networkd[783]: eth1: Gained IPv6LL
May 17 00:21:19.771837 systemd-networkd[783]: eth0: Gained IPv6LL
May 17 00:21:21.111511 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
May 17 00:21:21.111511 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
May 17 00:21:21.114471 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
May 17 00:21:21.114471 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
May 17 00:21:21.114471 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 17 00:21:21.114471 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 17 00:21:21.114471 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 17 00:21:21.114471 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 17 00:21:21.114471 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 17 00:21:21.114471 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 17 00:21:21.114471 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 17 00:21:21.114471 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
May 17 00:21:21.114471 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
May 17 00:21:21.114471 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
May 17 00:21:21.114471 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
May 17 00:21:21.877203 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
May 17 00:21:22.019349 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
May 17 00:21:22.019349 ignition[963]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
May 17 00:21:22.021088 ignition[963]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 17 00:21:22.021088 ignition[963]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 17 00:21:22.021088 ignition[963]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
May 17 00:21:22.021088 ignition[963]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
May 17 00:21:22.021088 ignition[963]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
May 17 00:21:22.021088 ignition[963]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
May 17 00:21:22.021088 ignition[963]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
May 17 00:21:22.021088 ignition[963]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service"
May 17 00:21:22.021088 ignition[963]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service"
May 17 00:21:22.029879 ignition[963]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json"
May 17 00:21:22.029879 ignition[963]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 17 00:21:22.029879 ignition[963]: INFO : files: files passed
May 17 00:21:22.029879 ignition[963]: INFO : Ignition finished successfully
May 17 00:21:22.024682 systemd[1]: Finished ignition-files.service - Ignition (files).
May 17 00:21:22.034798 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 17 00:21:22.037891 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 17 00:21:22.039715 systemd[1]: ignition-quench.service: Deactivated successfully.
May 17 00:21:22.040425 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 17 00:21:22.048148 initrd-setup-root-after-ignition[993]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 17 00:21:22.049414 initrd-setup-root-after-ignition[997]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 17 00:21:22.050193 initrd-setup-root-after-ignition[993]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 17 00:21:22.050415 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 17 00:21:22.052138 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 17 00:21:22.058869 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 17 00:21:22.087196 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 17 00:21:22.087288 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 17 00:21:22.088348 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 17 00:21:22.088949 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 17 00:21:22.090133 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 17 00:21:22.099835 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 17 00:21:22.110706 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 17 00:21:22.115786 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 17 00:21:22.127732 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 17 00:21:22.129089 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 17 00:21:22.130330 systemd[1]: Stopped target timers.target - Timer Units.
May 17 00:21:22.130924 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 17 00:21:22.131040 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 17 00:21:22.132378 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 17 00:21:22.133159 systemd[1]: Stopped target basic.target - Basic System.
May 17 00:21:22.134411 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 17 00:21:22.135299 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 17 00:21:22.136278 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 17 00:21:22.137542 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 17 00:21:22.138750 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 17 00:21:22.139930 systemd[1]: Stopped target sysinit.target - System Initialization.
May 17 00:21:22.140966 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 17 00:21:22.142098 systemd[1]: Stopped target swap.target - Swaps.
May 17 00:21:22.143128 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 17 00:21:22.143235 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 17 00:21:22.144356 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 17 00:21:22.144995 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 17 00:21:22.145903 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 17 00:21:22.147851 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 17 00:21:22.148751 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 17 00:21:22.148888 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 17 00:21:22.150309 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 17 00:21:22.150441 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 17 00:21:22.151686 systemd[1]: ignition-files.service: Deactivated successfully.
May 17 00:21:22.151809 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 17 00:21:22.152782 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
May 17 00:21:22.152918 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
May 17 00:21:22.160205 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 17 00:21:22.160769 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 17 00:21:22.160938 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 17 00:21:22.163871 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 17 00:21:22.164346 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 17 00:21:22.164489 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 17 00:21:22.165100 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 17 00:21:22.165241 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 17 00:21:22.170276 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 17 00:21:22.170370 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 17 00:21:22.182803 ignition[1017]: INFO : Ignition 2.19.0
May 17 00:21:22.183547 ignition[1017]: INFO : Stage: umount
May 17 00:21:22.183824 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 17 00:21:22.187148 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 17 00:21:22.189404 ignition[1017]: INFO : no configs at "/usr/lib/ignition/base.d"
May 17 00:21:22.189404 ignition[1017]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
May 17 00:21:22.187232 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 17 00:21:22.193895 ignition[1017]: INFO : umount: umount passed
May 17 00:21:22.193895 ignition[1017]: INFO : Ignition finished successfully
May 17 00:21:22.191299 systemd[1]: ignition-mount.service: Deactivated successfully.
May 17 00:21:22.191389 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 17 00:21:22.193362 systemd[1]: ignition-disks.service: Deactivated successfully.
May 17 00:21:22.193407 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 17 00:21:22.194386 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 17 00:21:22.194424 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 17 00:21:22.194968 systemd[1]: ignition-fetch.service: Deactivated successfully.
May 17 00:21:22.195003 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
May 17 00:21:22.195925 systemd[1]: Stopped target network.target - Network.
May 17 00:21:22.196805 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 17 00:21:22.196845 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 17 00:21:22.197824 systemd[1]: Stopped target paths.target - Path Units.
May 17 00:21:22.198659 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 17 00:21:22.198703 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 17 00:21:22.199616 systemd[1]: Stopped target slices.target - Slice Units.
May 17 00:21:22.200505 systemd[1]: Stopped target sockets.target - Socket Units.
May 17 00:21:22.201495 systemd[1]: iscsid.socket: Deactivated successfully.
May 17 00:21:22.201528 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 17 00:21:22.202607 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 17 00:21:22.202686 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 17 00:21:22.203577 systemd[1]: ignition-setup.service: Deactivated successfully.
May 17 00:21:22.203675 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 17 00:21:22.204577 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 17 00:21:22.204613 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 17 00:21:22.205664 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 17 00:21:22.205699 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 17 00:21:22.206881 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 17 00:21:22.207849 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 17 00:21:22.211717 systemd-networkd[783]: eth0: DHCPv6 lease lost
May 17 00:21:22.215536 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 17 00:21:22.215721 systemd-networkd[783]: eth1: DHCPv6 lease lost
May 17 00:21:22.216525 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 17 00:21:22.218558 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 17 00:21:22.218665 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 17 00:21:22.220778 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 17 00:21:22.220820 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 17 00:21:22.227726 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 17 00:21:22.229184 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 17 00:21:22.229229 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 17 00:21:22.230233 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 17 00:21:22.230268 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 17 00:21:22.231233 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 17 00:21:22.231267 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 17 00:21:22.232376 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 17 00:21:22.232409 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 17 00:21:22.233785 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 17 00:21:22.244889 systemd[1]: network-cleanup.service: Deactivated successfully.
May 17 00:21:22.245007 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 17 00:21:22.246326 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 17 00:21:22.246508 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 17 00:21:22.248452 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 17 00:21:22.248509 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 17 00:21:22.249748 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 17 00:21:22.249784 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 17 00:21:22.250827 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 17 00:21:22.250880 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 17 00:21:22.252337 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 17 00:21:22.252391 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 17 00:21:22.253316 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 17 00:21:22.253352 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 17 00:21:22.260975 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 17 00:21:22.261463 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 17 00:21:22.261508 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 17 00:21:22.262019 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 17 00:21:22.262053 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 17 00:21:22.266464 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 17 00:21:22.266572 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 17 00:21:22.268135 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 17 00:21:22.274808 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 17 00:21:22.281863 systemd[1]: Switching root.
May 17 00:21:22.324991 systemd-journald[187]: Journal stopped
May 17 00:21:23.082855 systemd-journald[187]: Received SIGTERM from PID 1 (systemd).
May 17 00:21:23.082912 kernel: SELinux: policy capability network_peer_controls=1
May 17 00:21:23.082927 kernel: SELinux: policy capability open_perms=1
May 17 00:21:23.082936 kernel: SELinux: policy capability extended_socket_class=1
May 17 00:21:23.082947 kernel: SELinux: policy capability always_check_network=0
May 17 00:21:23.082954 kernel: SELinux: policy capability cgroup_seclabel=1
May 17 00:21:23.082968 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 17 00:21:23.082975 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 17 00:21:23.082984 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 17 00:21:23.082992 kernel: audit: type=1403 audit(1747441282.444:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 17 00:21:23.083004 systemd[1]: Successfully loaded SELinux policy in 46.707ms.
May 17 00:21:23.083017 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 14.471ms.
May 17 00:21:23.083026 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 17 00:21:23.083034 systemd[1]: Detected virtualization kvm.
May 17 00:21:23.083043 systemd[1]: Detected architecture x86-64.
May 17 00:21:23.083051 systemd[1]: Detected first boot.
May 17 00:21:23.083060 systemd[1]: Hostname set to <ci-4081-3-3-n-82e895e080>.
May 17 00:21:23.083068 systemd[1]: Initializing machine ID from VM UUID.
May 17 00:21:23.083078 zram_generator::config[1059]: No configuration found.
May 17 00:21:23.083087 systemd[1]: Populated /etc with preset unit settings.
May 17 00:21:23.083096 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 17 00:21:23.083104 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 17 00:21:23.083125 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 17 00:21:23.083135 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 17 00:21:23.083144 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 17 00:21:23.083152 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 17 00:21:23.083163 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 17 00:21:23.083171 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 17 00:21:23.083182 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 17 00:21:23.083190 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 17 00:21:23.083198 systemd[1]: Created slice user.slice - User and Session Slice.
May 17 00:21:23.083207 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 17 00:21:23.083216 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 17 00:21:23.083224 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 17 00:21:23.083232 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 17 00:21:23.083245 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 17 00:21:23.083254 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 17 00:21:23.083262 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
May 17 00:21:23.083271 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 17 00:21:23.083279 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 17 00:21:23.083288 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 17 00:21:23.083297 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 17 00:21:23.083307 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 17 00:21:23.083315 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 17 00:21:23.083324 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 17 00:21:23.083332 systemd[1]: Reached target slices.target - Slice Units.
May 17 00:21:23.083340 systemd[1]: Reached target swap.target - Swaps.
May 17 00:21:23.083348 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 17 00:21:23.083357 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 17 00:21:23.083365 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 17 00:21:23.083374 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 17 00:21:23.083383 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 17 00:21:23.083392 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 17 00:21:23.083401 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 17 00:21:23.083409 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 17 00:21:23.083416 systemd[1]: Mounting media.mount - External Media Directory...
May 17 00:21:23.083424 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 17 00:21:23.083434 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 17 00:21:23.083442 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 17 00:21:23.083452 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 17 00:21:23.083462 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 17 00:21:23.083471 systemd[1]: Reached target machines.target - Containers.
May 17 00:21:23.083479 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 17 00:21:23.083487 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 17 00:21:23.083496 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 17 00:21:23.083505 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 17 00:21:23.083513 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 17 00:21:23.083521 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 17 00:21:23.083529 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 17 00:21:23.083537 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 17 00:21:23.083545 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 17 00:21:23.083553 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 17 00:21:23.083561 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 17 00:21:23.083571 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 17 00:21:23.083579 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 17 00:21:23.083587 systemd[1]: Stopped systemd-fsck-usr.service.
May 17 00:21:23.083595 systemd[1]: Starting systemd-journald.service - Journal Service...
May 17 00:21:23.083603 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 17 00:21:23.083611 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 17 00:21:23.083619 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 17 00:21:23.084792 systemd-journald[1142]: Collecting audit messages is disabled.
May 17 00:21:23.084827 systemd-journald[1142]: Journal started
May 17 00:21:23.084850 systemd-journald[1142]: Runtime Journal (/run/log/journal/95923ccb626f45ca8e99c061b83ab568) is 4.8M, max 38.4M, 33.6M free.
May 17 00:21:23.084880 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 17 00:21:22.865803 systemd[1]: Queued start job for default target multi-user.target.
May 17 00:21:22.888925 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
May 17 00:21:22.889303 systemd[1]: systemd-journald.service: Deactivated successfully.
May 17 00:21:23.095621 systemd[1]: verity-setup.service: Deactivated successfully.
May 17 00:21:23.095688 systemd[1]: Stopped verity-setup.service.
May 17 00:21:23.102711 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 17 00:21:23.110647 kernel: loop: module loaded
May 17 00:21:23.113661 kernel: ACPI: bus type drm_connector registered
May 17 00:21:23.113707 kernel: fuse: init (API version 7.39)
May 17 00:21:23.120127 systemd[1]: Started systemd-journald.service - Journal Service.
May 17 00:21:23.117023 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 17 00:21:23.121533 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 17 00:21:23.122224 systemd[1]: Mounted media.mount - External Media Directory.
May 17 00:21:23.122876 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 17 00:21:23.124771 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 17 00:21:23.127095 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 17 00:21:23.127785 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 17 00:21:23.128480 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 17 00:21:23.129193 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 17 00:21:23.129304 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 17 00:21:23.130252 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 17 00:21:23.130357 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 17 00:21:23.131381 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 17 00:21:23.131548 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 17 00:21:23.132235 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 17 00:21:23.132393 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 17 00:21:23.133181 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 17 00:21:23.133335 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 17 00:21:23.134165 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 17 00:21:23.134317 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 17 00:21:23.135009 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 17 00:21:23.135864 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 17 00:21:23.136657 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 17 00:21:23.143247 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 17 00:21:23.150352 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 17 00:21:23.155087 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 17 00:21:23.155588 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 17 00:21:23.155649 systemd[1]: Reached target local-fs.target - Local File Systems.
May 17 00:21:23.156906 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
May 17 00:21:23.163097 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 17 00:21:23.167761 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 17 00:21:23.168403 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 17 00:21:23.174781 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 17 00:21:23.179457 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 17 00:21:23.180013 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 17 00:21:23.183504 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 17 00:21:23.184068 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 17 00:21:23.188744 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 17 00:21:23.191067 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 17 00:21:23.197755 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 17 00:21:23.202143 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 17 00:21:23.205824 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 17 00:21:23.206481 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 17 00:21:23.207213 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 17 00:21:23.209594 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 17 00:21:23.220050 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
May 17 00:21:23.220827 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 17 00:21:23.226477 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
May 17 00:21:23.227778 systemd-journald[1142]: Time spent on flushing to /var/log/journal/95923ccb626f45ca8e99c061b83ab568 is 57.530ms for 1137 entries.
May 17 00:21:23.227778 systemd-journald[1142]: System Journal (/var/log/journal/95923ccb626f45ca8e99c061b83ab568) is 8.0M, max 584.8M, 576.8M free.
May 17 00:21:23.313103 systemd-journald[1142]: Received client request to flush runtime journal.
May 17 00:21:23.313165 kernel: loop0: detected capacity change from 0 to 140768
May 17 00:21:23.313184 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 17 00:21:23.262067 udevadm[1187]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
May 17 00:21:23.283474 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 17 00:21:23.293464 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 17 00:21:23.294861 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
May 17 00:21:23.298923 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 17 00:21:23.306933 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 17 00:21:23.316535 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 17 00:21:23.329698 kernel: loop1: detected capacity change from 0 to 224512
May 17 00:21:23.329473 systemd-tmpfiles[1196]: ACLs are not supported, ignoring.
May 17 00:21:23.329483 systemd-tmpfiles[1196]: ACLs are not supported, ignoring.
May 17 00:21:23.336127 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 17 00:21:23.382670 kernel: loop2: detected capacity change from 0 to 142488
May 17 00:21:23.428660 kernel: loop3: detected capacity change from 0 to 8
May 17 00:21:23.446661 kernel: loop4: detected capacity change from 0 to 140768
May 17 00:21:23.466127 kernel: loop5: detected capacity change from 0 to 224512
May 17 00:21:23.489666 kernel: loop6: detected capacity change from 0 to 142488
May 17 00:21:23.509777 kernel: loop7: detected capacity change from 0 to 8
May 17 00:21:23.509958 (sd-merge)[1204]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'.
May 17 00:21:23.510427 (sd-merge)[1204]: Merged extensions into '/usr'.
May 17 00:21:23.516848 systemd[1]: Reloading requested from client PID 1179 ('systemd-sysext') (unit systemd-sysext.service)...
May 17 00:21:23.517040 systemd[1]: Reloading...
May 17 00:21:23.583693 zram_generator::config[1228]: No configuration found.
May 17 00:21:23.642254 ldconfig[1174]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 17 00:21:23.714486 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 17 00:21:23.755434 systemd[1]: Reloading finished in 237 ms.
May 17 00:21:23.777397 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
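[Annotation] The loop0-loop7 devices above are the system extension images that systemd-sysext overlays onto /usr and /opt; "Merged extensions into '/usr'" is that overlay being established for the four extensions named by sd-merge, including the kubernetes.raw symlink Ignition wrote earlier. A minimal sketch of how one could enumerate the raw images the merge step picks up (the directories follow the documented sysext search path; treat this as illustrative, not systemd's implementation):

import os

# systemd-sysext looks for extension images (*.raw, among other formats)
# in these directories, in order of precedence
SEARCH_PATH = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

images = []
for d in SEARCH_PATH:
    if os.path.isdir(d):
        images += [os.path.join(d, n) for n in sorted(os.listdir(d))
                   if n.endswith(".raw")]

# e.g. /etc/extensions/kubernetes.raw resolves through the symlink to
# /opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw, as in the log
for img in images:
    print("would merge:", img, "->", os.path.realpath(img))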
May 17 00:21:23.778435 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 17 00:21:23.788542 systemd[1]: Starting ensure-sysext.service... May 17 00:21:23.791893 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 17 00:21:23.806685 systemd[1]: Reloading requested from client PID 1273 ('systemctl') (unit ensure-sysext.service)... May 17 00:21:23.806705 systemd[1]: Reloading... May 17 00:21:23.824839 systemd-tmpfiles[1274]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 17 00:21:23.825471 systemd-tmpfiles[1274]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 17 00:21:23.826233 systemd-tmpfiles[1274]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 17 00:21:23.826594 systemd-tmpfiles[1274]: ACLs are not supported, ignoring. May 17 00:21:23.826741 systemd-tmpfiles[1274]: ACLs are not supported, ignoring. May 17 00:21:23.829564 systemd-tmpfiles[1274]: Detected autofs mount point /boot during canonicalization of boot. May 17 00:21:23.829692 systemd-tmpfiles[1274]: Skipping /boot May 17 00:21:23.836028 systemd-tmpfiles[1274]: Detected autofs mount point /boot during canonicalization of boot. May 17 00:21:23.836138 systemd-tmpfiles[1274]: Skipping /boot May 17 00:21:23.872526 zram_generator::config[1303]: No configuration found. May 17 00:21:23.965244 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:21:24.016344 systemd[1]: Reloading finished in 209 ms. May 17 00:21:24.032123 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 17 00:21:24.037161 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 17 00:21:24.045776 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... May 17 00:21:24.048493 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 17 00:21:24.051813 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 17 00:21:24.057942 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 17 00:21:24.064795 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 17 00:21:24.066778 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 17 00:21:24.075606 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 17 00:21:24.078152 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:21:24.078299 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 17 00:21:24.079854 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 17 00:21:24.084846 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 17 00:21:24.087058 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 17 00:21:24.088409 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
May 17 00:21:24.088512 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:21:24.090924 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:21:24.091080 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 17 00:21:24.091210 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 17 00:21:24.091275 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:21:24.097945 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:21:24.098184 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 17 00:21:24.105383 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 17 00:21:24.107072 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 17 00:21:24.107281 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:21:24.109310 systemd[1]: Finished ensure-sysext.service. May 17 00:21:24.112624 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 17 00:21:24.113473 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 00:21:24.113576 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 17 00:21:24.119972 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 17 00:21:24.131947 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... May 17 00:21:24.135804 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 17 00:21:24.138654 systemd-udevd[1351]: Using default interface naming scheme 'v255'. May 17 00:21:24.144430 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 00:21:24.144566 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 17 00:21:24.146260 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 17 00:21:24.147990 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 00:21:24.148366 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 17 00:21:24.152461 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 17 00:21:24.154050 augenrules[1378]: No rules May 17 00:21:24.158444 systemd[1]: modprobe@drm.service: Deactivated successfully. May 17 00:21:24.158588 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 17 00:21:24.159442 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 17 00:21:24.160599 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. May 17 00:21:24.171988 systemd[1]: Finished systemd-update-done.service - Update is Completed. 
May 17 00:21:24.184751 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 17 00:21:24.195541 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 17 00:21:24.196384 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 17 00:21:24.198005 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 17 00:21:24.256687 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 17 00:21:24.257900 systemd[1]: Reached target time-set.target - System Time Set. May 17 00:21:24.284967 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. May 17 00:21:24.286036 systemd-resolved[1349]: Positive Trust Anchors: May 17 00:21:24.286050 systemd-resolved[1349]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 17 00:21:24.286075 systemd-resolved[1349]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 17 00:21:24.296416 systemd-resolved[1349]: Using system hostname 'ci-4081-3-3-n-82e895e080'. May 17 00:21:24.297602 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 17 00:21:24.298801 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 17 00:21:24.306592 systemd-networkd[1393]: lo: Link UP May 17 00:21:24.306608 systemd-networkd[1393]: lo: Gained carrier May 17 00:21:24.309310 systemd-networkd[1393]: Enumeration completed May 17 00:21:24.309429 systemd[1]: Started systemd-networkd.service - Network Configuration. May 17 00:21:24.310734 systemd[1]: Reached target network.target - Network. May 17 00:21:24.312685 systemd-networkd[1393]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 17 00:21:24.312691 systemd-networkd[1393]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. May 17 00:21:24.313885 systemd-networkd[1393]: eth1: Link UP May 17 00:21:24.313888 systemd-networkd[1393]: eth1: Gained carrier May 17 00:21:24.313904 systemd-networkd[1393]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 17 00:21:24.318853 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 17 00:21:24.341699 systemd-networkd[1393]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 May 17 00:21:24.343540 systemd-timesyncd[1374]: Network configuration changed, trying to establish connection. 
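[Editor's note] systemd-networkd notes twice that zz-default.network matched eth1 "based on potentially unpredictable interface name". A sketch of a more robust match, pinning the link by hardware address instead of name; the MAC below is a placeholder, not taken from this host:

    cat >/etc/systemd/network/10-uplink.network <<'EOF'
    [Match]
    # Hypothetical MAC address; name-based matches can break across renames.
    MACAddress=aa:bb:cc:dd:ee:ff

    [Network]
    DHCP=ipv4
    EOF
    networkctl reload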
May 17 00:21:24.368916 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 May 17 00:21:24.372650 kernel: mousedev: PS/2 mouse device common for all mice May 17 00:21:24.379160 systemd-networkd[1393]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 17 00:21:24.379171 systemd-networkd[1393]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 17 00:21:24.379716 systemd-networkd[1393]: eth0: Link UP May 17 00:21:24.379722 systemd-networkd[1393]: eth0: Gained carrier May 17 00:21:24.379741 systemd-networkd[1393]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 17 00:21:24.380587 systemd-timesyncd[1374]: Network configuration changed, trying to establish connection. May 17 00:21:24.381647 kernel: ACPI: button: Power Button [PWRF] May 17 00:21:24.383911 systemd-timesyncd[1374]: Network configuration changed, trying to establish connection. May 17 00:21:24.390704 systemd-networkd[1393]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 17 00:21:24.415303 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. May 17 00:21:24.415603 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:21:24.415850 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 17 00:21:24.419760 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 17 00:21:24.424659 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1406) May 17 00:21:24.428851 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 17 00:21:24.431765 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 17 00:21:24.432563 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 17 00:21:24.432597 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 17 00:21:24.432610 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:21:24.438714 systemd-networkd[1393]: eth0: DHCPv4 address 37.27.204.183/32, gateway 172.31.1.1 acquired from 172.31.1.1 May 17 00:21:24.440284 systemd-timesyncd[1374]: Network configuration changed, trying to establish connection. May 17 00:21:24.441944 systemd-timesyncd[1374]: Network configuration changed, trying to establish connection. May 17 00:21:24.451055 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 00:21:24.451245 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 17 00:21:24.453873 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 00:21:24.454020 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 17 00:21:24.456772 systemd[1]: modprobe@loop.service: Deactivated successfully. 
May 17 00:21:24.456935 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 17 00:21:24.460805 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 17 00:21:24.460872 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 17 00:21:24.466691 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt May 17 00:21:24.466952 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) May 17 00:21:24.470049 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD May 17 00:21:24.482874 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input5 May 17 00:21:24.490807 kernel: [drm] pci: virtio-vga detected at 0000:00:01.0 May 17 00:21:24.490899 kernel: virtio-pci 0000:00:01.0: vgaarb: deactivate vga console May 17 00:21:24.497832 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 17 00:21:24.502179 kernel: Console: switching to colour dummy device 80x25 May 17 00:21:24.503675 kernel: EDAC MC: Ver: 3.0.0 May 17 00:21:24.510274 kernel: [drm] features: -virgl +edid -resource_blob -host_visible May 17 00:21:24.510353 kernel: [drm] features: -context_init May 17 00:21:24.526657 kernel: [drm] number of scanouts: 1 May 17 00:21:24.529680 kernel: [drm] number of cap sets: 0 May 17 00:21:24.539950 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0 May 17 00:21:24.537835 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 17 00:21:24.537968 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 17 00:21:24.547606 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device May 17 00:21:24.547714 kernel: Console: switching to colour frame buffer device 160x50 May 17 00:21:24.555059 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 17 00:21:24.559820 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device May 17 00:21:24.569494 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. May 17 00:21:24.582995 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 17 00:21:24.587072 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 17 00:21:24.587291 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 17 00:21:24.594840 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 17 00:21:24.596863 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 17 00:21:24.640273 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 17 00:21:24.683778 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. May 17 00:21:24.691830 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... May 17 00:21:24.702901 lvm[1459]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 17 00:21:24.729452 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. May 17 00:21:24.731483 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 17 00:21:24.731586 systemd[1]: Reached target sysinit.target - System Initialization. 
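[Editor's note] The \x2d sequences in unit names such as dev-disk-by\x2dlabel-OEM.device are systemd's path escaping, not log corruption. The mapping is reproducible with systemd-escape:

    # '/' becomes '-', and '-' inside a component is escaped as \x2d.
    systemd-escape -p /dev/disk/by-label/OEM
    # prints: dev-disk-by\x2dlabel-OEM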
May 17 00:21:24.731806 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 17 00:21:24.731940 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 17 00:21:24.732249 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 17 00:21:24.732432 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 17 00:21:24.732526 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 17 00:21:24.732604 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 17 00:21:24.732648 systemd[1]: Reached target paths.target - Path Units. May 17 00:21:24.732713 systemd[1]: Reached target timers.target - Timer Units. May 17 00:21:24.738048 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 17 00:21:24.739600 systemd[1]: Starting docker.socket - Docker Socket for the API... May 17 00:21:24.746489 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 17 00:21:24.747993 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... May 17 00:21:24.752722 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 17 00:21:24.753438 systemd[1]: Reached target sockets.target - Socket Units. May 17 00:21:24.757666 lvm[1463]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 17 00:21:24.757668 systemd[1]: Reached target basic.target - Basic System. May 17 00:21:24.758287 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 17 00:21:24.758325 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 17 00:21:24.763765 systemd[1]: Starting containerd.service - containerd container runtime... May 17 00:21:24.772595 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... May 17 00:21:24.781046 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 17 00:21:24.784768 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 17 00:21:24.790931 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 17 00:21:24.791505 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 17 00:21:24.795734 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 17 00:21:24.799358 jq[1467]: false May 17 00:21:24.802882 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 17 00:21:24.811588 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. May 17 00:21:24.818189 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 17 00:21:24.829847 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 17 00:21:24.840560 systemd[1]: Starting systemd-logind.service - User Login Management... May 17 00:21:24.846221 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
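[Editor's note] dbus, sshd and docker all come up socket-activated above: systemd owns the listening socket and starts the matching service on first connection. The active listeners can be inspected at any time:

    systemctl list-sockets    # shows each listening socket and the unit it activates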
May 17 00:21:24.848175 dbus-daemon[1466]: [system] SELinux support is enabled May 17 00:21:24.848752 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 17 00:21:24.850477 coreos-metadata[1465]: May 17 00:21:24.850 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 May 17 00:21:24.853736 coreos-metadata[1465]: May 17 00:21:24.853 INFO Fetch successful May 17 00:21:24.857647 extend-filesystems[1470]: Found loop4 May 17 00:21:24.857647 extend-filesystems[1470]: Found loop5 May 17 00:21:24.857647 extend-filesystems[1470]: Found loop6 May 17 00:21:24.857647 extend-filesystems[1470]: Found loop7 May 17 00:21:24.857647 extend-filesystems[1470]: Found sda May 17 00:21:24.857647 extend-filesystems[1470]: Found sda1 May 17 00:21:24.857647 extend-filesystems[1470]: Found sda2 May 17 00:21:24.857647 extend-filesystems[1470]: Found sda3 May 17 00:21:24.857647 extend-filesystems[1470]: Found usr May 17 00:21:24.857647 extend-filesystems[1470]: Found sda4 May 17 00:21:24.857647 extend-filesystems[1470]: Found sda6 May 17 00:21:24.857647 extend-filesystems[1470]: Found sda7 May 17 00:21:24.857647 extend-filesystems[1470]: Found sda9 May 17 00:21:24.857647 extend-filesystems[1470]: Checking size of /dev/sda9 May 17 00:21:24.877521 coreos-metadata[1465]: May 17 00:21:24.855 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 May 17 00:21:24.877521 coreos-metadata[1465]: May 17 00:21:24.858 INFO Fetch successful May 17 00:21:24.855265 systemd[1]: Starting update-engine.service - Update Engine... May 17 00:21:24.870754 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 17 00:21:24.874424 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 17 00:21:24.884970 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. May 17 00:21:24.887103 extend-filesystems[1470]: Resized partition /dev/sda9 May 17 00:21:24.894435 extend-filesystems[1495]: resize2fs 1.47.1 (20-May-2024) May 17 00:21:24.930508 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks May 17 00:21:24.909534 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 17 00:21:24.910916 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 17 00:21:24.911366 systemd[1]: motdgen.service: Deactivated successfully. May 17 00:21:24.911567 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 17 00:21:24.923175 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 17 00:21:24.923358 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 17 00:21:24.952043 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 17 00:21:24.952097 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 17 00:21:24.954954 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 17 00:21:24.954992 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
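[Editor's note] coreos-metadata populates host metadata from Hetzner's link-local endpoint; the two fetches logged above can be replayed by hand (URLs copied from the log):

    curl -s http://169.254.169.254/hetzner/v1/metadata
    curl -s http://169.254.169.254/hetzner/v1/metadata/private-networks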
May 17 00:21:24.959581 jq[1488]: true May 17 00:21:24.970724 tar[1497]: linux-amd64/LICENSE May 17 00:21:24.971798 tar[1497]: linux-amd64/helm May 17 00:21:24.976807 (ntainerd)[1504]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 17 00:21:24.988675 update_engine[1484]: I20250517 00:21:24.986444 1484 main.cc:92] Flatcar Update Engine starting May 17 00:21:24.993051 systemd[1]: Started update-engine.service - Update Engine. May 17 00:21:24.997009 update_engine[1484]: I20250517 00:21:24.996850 1484 update_check_scheduler.cc:74] Next update check in 10m14s May 17 00:21:25.002782 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 17 00:21:25.019622 jq[1510]: true May 17 00:21:25.044296 kernel: EXT4-fs (sda9): resized filesystem to 9393147 May 17 00:21:25.055693 extend-filesystems[1495]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required May 17 00:21:25.055693 extend-filesystems[1495]: old_desc_blocks = 1, new_desc_blocks = 5 May 17 00:21:25.055693 extend-filesystems[1495]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. May 17 00:21:25.060646 extend-filesystems[1470]: Resized filesystem in /dev/sda9 May 17 00:21:25.060646 extend-filesystems[1470]: Found sr0 May 17 00:21:25.061353 systemd[1]: extend-filesystems.service: Deactivated successfully. May 17 00:21:25.061509 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 17 00:21:25.083602 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. May 17 00:21:25.086976 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1403) May 17 00:21:25.099431 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 17 00:21:25.107844 bash[1537]: Updated "/home/core/.ssh/authorized_keys" May 17 00:21:25.110848 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 17 00:21:25.114989 systemd-logind[1483]: New seat seat0. May 17 00:21:25.121893 sshd_keygen[1491]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 17 00:21:25.123275 systemd-logind[1483]: Watching system buttons on /dev/input/event2 (Power Button) May 17 00:21:25.123293 systemd-logind[1483]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 17 00:21:25.125217 systemd[1]: Starting sshkeys.service... May 17 00:21:25.127437 systemd[1]: Started systemd-logind.service - User Login Management. May 17 00:21:25.165560 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. May 17 00:21:25.175189 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... May 17 00:21:25.205606 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 17 00:21:25.219140 systemd[1]: Starting issuegen.service - Generate /run/issue... May 17 00:21:25.234101 coreos-metadata[1552]: May 17 00:21:25.234 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 May 17 00:21:25.235241 coreos-metadata[1552]: May 17 00:21:25.235 INFO Fetch successful May 17 00:21:25.237685 unknown[1552]: wrote ssh authorized keys file for user: core May 17 00:21:25.239537 locksmithd[1514]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 17 00:21:25.248437 systemd[1]: issuegen.service: Deactivated successfully. 
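[Editor's note] The resize that extend-filesystems kicked off completes here: ext4 on /dev/sda9 grows online from 1617920 to 9393147 4k blocks. The manual equivalent, once the partition itself has been enlarged, is a single call (device name from the log):

    # ext4 supports growing while mounted; no unmount needed.
    resize2fs /dev/sda9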
May 17 00:21:25.248833 systemd[1]: Finished issuegen.service - Generate /run/issue. May 17 00:21:25.259943 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 17 00:21:25.270208 update-ssh-keys[1564]: Updated "/home/core/.ssh/authorized_keys" May 17 00:21:25.270466 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). May 17 00:21:25.274512 systemd[1]: Finished sshkeys.service. May 17 00:21:25.275249 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 17 00:21:25.286865 systemd[1]: Started getty@tty1.service - Getty on tty1. May 17 00:21:25.293789 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. May 17 00:21:25.294235 systemd[1]: Reached target getty.target - Login Prompts. May 17 00:21:25.299373 containerd[1504]: time="2025-05-17T00:21:25.299305616Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 May 17 00:21:25.317473 containerd[1504]: time="2025-05-17T00:21:25.317228864Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 17 00:21:25.319239 containerd[1504]: time="2025-05-17T00:21:25.318499777Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.90-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 17 00:21:25.319239 containerd[1504]: time="2025-05-17T00:21:25.318526417Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 17 00:21:25.319239 containerd[1504]: time="2025-05-17T00:21:25.318539331Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 17 00:21:25.319239 containerd[1504]: time="2025-05-17T00:21:25.318672811Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 May 17 00:21:25.319239 containerd[1504]: time="2025-05-17T00:21:25.318688761Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 May 17 00:21:25.319239 containerd[1504]: time="2025-05-17T00:21:25.318733825Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 May 17 00:21:25.319239 containerd[1504]: time="2025-05-17T00:21:25.318743934Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 17 00:21:25.320288 containerd[1504]: time="2025-05-17T00:21:25.320189965Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 17 00:21:25.320288 containerd[1504]: time="2025-05-17T00:21:25.320232155Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 17 00:21:25.320288 containerd[1504]: time="2025-05-17T00:21:25.320250820Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 May 17 00:21:25.320288 containerd[1504]: time="2025-05-17T00:21:25.320260578Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 17 00:21:25.320427 containerd[1504]: time="2025-05-17T00:21:25.320372959Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 17 00:21:25.320595 containerd[1504]: time="2025-05-17T00:21:25.320568095Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 17 00:21:25.320807 containerd[1504]: time="2025-05-17T00:21:25.320723115Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 17 00:21:25.320807 containerd[1504]: time="2025-05-17T00:21:25.320740288Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 17 00:21:25.320807 containerd[1504]: time="2025-05-17T00:21:25.320802634Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 17 00:21:25.320865 containerd[1504]: time="2025-05-17T00:21:25.320838963Z" level=info msg="metadata content store policy set" policy=shared May 17 00:21:25.324979 containerd[1504]: time="2025-05-17T00:21:25.324365205Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 17 00:21:25.324979 containerd[1504]: time="2025-05-17T00:21:25.324404628Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 17 00:21:25.324979 containerd[1504]: time="2025-05-17T00:21:25.324418534Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 May 17 00:21:25.324979 containerd[1504]: time="2025-05-17T00:21:25.324430858Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 May 17 00:21:25.324979 containerd[1504]: time="2025-05-17T00:21:25.324443732Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 17 00:21:25.324979 containerd[1504]: time="2025-05-17T00:21:25.324542097Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 17 00:21:25.324979 containerd[1504]: time="2025-05-17T00:21:25.324802144Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 17 00:21:25.324979 containerd[1504]: time="2025-05-17T00:21:25.324897312Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 May 17 00:21:25.324979 containerd[1504]: time="2025-05-17T00:21:25.324910527Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 May 17 00:21:25.324979 containerd[1504]: time="2025-05-17T00:21:25.324921899Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 May 17 00:21:25.324979 containerd[1504]: time="2025-05-17T00:21:25.324932769Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." 
type=io.containerd.service.v1 May 17 00:21:25.324979 containerd[1504]: time="2025-05-17T00:21:25.324943008Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 17 00:21:25.324979 containerd[1504]: time="2025-05-17T00:21:25.324952927Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 17 00:21:25.324979 containerd[1504]: time="2025-05-17T00:21:25.324963848Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 17 00:21:25.325193 containerd[1504]: time="2025-05-17T00:21:25.324974838Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 17 00:21:25.325193 containerd[1504]: time="2025-05-17T00:21:25.324985227Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 17 00:21:25.325193 containerd[1504]: time="2025-05-17T00:21:25.324996679Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 17 00:21:25.325193 containerd[1504]: time="2025-05-17T00:21:25.325005315Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 17 00:21:25.325193 containerd[1504]: time="2025-05-17T00:21:25.325022527Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 17 00:21:25.325193 containerd[1504]: time="2025-05-17T00:21:25.325033327Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 17 00:21:25.325193 containerd[1504]: time="2025-05-17T00:21:25.325043607Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 17 00:21:25.325193 containerd[1504]: time="2025-05-17T00:21:25.325056110Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 17 00:21:25.325193 containerd[1504]: time="2025-05-17T00:21:25.325068103Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 17 00:21:25.325193 containerd[1504]: time="2025-05-17T00:21:25.325078482Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 17 00:21:25.325193 containerd[1504]: time="2025-05-17T00:21:25.325088161Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 17 00:21:25.325193 containerd[1504]: time="2025-05-17T00:21:25.325097888Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 17 00:21:25.325193 containerd[1504]: time="2025-05-17T00:21:25.325124789Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 May 17 00:21:25.325193 containerd[1504]: time="2025-05-17T00:21:25.325137503Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 May 17 00:21:25.325376 containerd[1504]: time="2025-05-17T00:21:25.325146831Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 17 00:21:25.325376 containerd[1504]: time="2025-05-17T00:21:25.325155957Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." 
type=io.containerd.grpc.v1 May 17 00:21:25.325376 containerd[1504]: time="2025-05-17T00:21:25.325166307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 17 00:21:25.325376 containerd[1504]: time="2025-05-17T00:21:25.325178119Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 May 17 00:21:25.325376 containerd[1504]: time="2025-05-17T00:21:25.325194520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 May 17 00:21:25.325376 containerd[1504]: time="2025-05-17T00:21:25.325203927Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 17 00:21:25.325376 containerd[1504]: time="2025-05-17T00:21:25.325213085Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 17 00:21:25.325376 containerd[1504]: time="2025-05-17T00:21:25.325262868Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 17 00:21:25.325376 containerd[1504]: time="2025-05-17T00:21:25.325279709Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 May 17 00:21:25.325376 containerd[1504]: time="2025-05-17T00:21:25.325289398Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 17 00:21:25.325376 containerd[1504]: time="2025-05-17T00:21:25.325298334Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 May 17 00:21:25.325376 containerd[1504]: time="2025-05-17T00:21:25.325306530Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 17 00:21:25.325376 containerd[1504]: time="2025-05-17T00:21:25.325376080Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 May 17 00:21:25.325538 containerd[1504]: time="2025-05-17T00:21:25.325385538Z" level=info msg="NRI interface is disabled by configuration." May 17 00:21:25.325538 containerd[1504]: time="2025-05-17T00:21:25.325394495Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 May 17 00:21:25.325673 containerd[1504]: time="2025-05-17T00:21:25.325598718Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 17 00:21:25.325797 containerd[1504]: time="2025-05-17T00:21:25.325675892Z" level=info msg="Connect containerd service" May 17 00:21:25.325797 containerd[1504]: time="2025-05-17T00:21:25.325701270Z" level=info msg="using legacy CRI server" May 17 00:21:25.325797 containerd[1504]: time="2025-05-17T00:21:25.325706851Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 17 00:21:25.325797 containerd[1504]: time="2025-05-17T00:21:25.325772053Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 17 00:21:25.326391 containerd[1504]: time="2025-05-17T00:21:25.326227366Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 17 00:21:25.326473 
containerd[1504]: time="2025-05-17T00:21:25.326439705Z" level=info msg="Start subscribing containerd event" May 17 00:21:25.326494 containerd[1504]: time="2025-05-17T00:21:25.326483217Z" level=info msg="Start recovering state" May 17 00:21:25.326664 containerd[1504]: time="2025-05-17T00:21:25.326529143Z" level=info msg="Start event monitor" May 17 00:21:25.326664 containerd[1504]: time="2025-05-17T00:21:25.326544621Z" level=info msg="Start snapshots syncer" May 17 00:21:25.326664 containerd[1504]: time="2025-05-17T00:21:25.326551945Z" level=info msg="Start cni network conf syncer for default" May 17 00:21:25.326664 containerd[1504]: time="2025-05-17T00:21:25.326557957Z" level=info msg="Start streaming server" May 17 00:21:25.326757 containerd[1504]: time="2025-05-17T00:21:25.326738675Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 17 00:21:25.326797 containerd[1504]: time="2025-05-17T00:21:25.326781766Z" level=info msg=serving... address=/run/containerd/containerd.sock May 17 00:21:25.327060 containerd[1504]: time="2025-05-17T00:21:25.326826560Z" level=info msg="containerd successfully booted in 0.028185s" May 17 00:21:25.327714 systemd[1]: Started containerd.service - containerd container runtime. May 17 00:21:25.557295 tar[1497]: linux-amd64/README.md May 17 00:21:25.565902 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 17 00:21:25.595772 systemd-networkd[1393]: eth1: Gained IPv6LL May 17 00:21:25.596858 systemd-timesyncd[1374]: Network configuration changed, trying to establish connection. May 17 00:21:25.598533 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 17 00:21:25.599380 systemd[1]: Reached target network-online.target - Network is Online. May 17 00:21:25.613227 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:21:25.617449 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 17 00:21:25.635144 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 17 00:21:25.979827 systemd-networkd[1393]: eth0: Gained IPv6LL May 17 00:21:25.980471 systemd-timesyncd[1374]: Network configuration changed, trying to establish connection. May 17 00:21:26.393901 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:21:26.394859 systemd[1]: Reached target multi-user.target - Multi-User System. May 17 00:21:26.397244 systemd[1]: Startup finished in 1.240s (kernel) + 6.755s (initrd) + 3.997s (userspace) = 11.993s. May 17 00:21:26.403557 (kubelet)[1596]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 00:21:26.898759 kubelet[1596]: E0517 00:21:26.898684 1596 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:21:26.901567 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:21:26.901726 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:21:36.911373 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 17 00:21:36.916899 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
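[Editor's note] In the CRI config dump above, the runc runtime carries Options:map[SystemdCgroup:true] while the deprecated top-level SystemdCgroup field stays false; the per-runtime option is the one that selects the systemd cgroup driver. As a sketch, the matching config.toml fragment (stock path assumed) would be:

    cat >>/etc/containerd/config.toml <<'EOF'
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
      SystemdCgroup = true
    EOF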
May 17 00:21:36.999520 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:21:37.002611 (kubelet)[1616]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 00:21:37.046790 kubelet[1616]: E0517 00:21:37.046694 1616 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:21:37.049578 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:21:37.049776 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:21:47.161606 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 17 00:21:47.168073 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:21:47.306407 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:21:47.309328 (kubelet)[1631]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 00:21:47.354500 kubelet[1631]: E0517 00:21:47.354453 1631 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:21:47.356925 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:21:47.357044 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:21:57.041760 systemd-resolved[1349]: Clock change detected. Flushing caches. May 17 00:21:57.041930 systemd-timesyncd[1374]: Contacted time server 93.241.86.156:123 (2.flatcar.pool.ntp.org). May 17 00:21:57.042013 systemd-timesyncd[1374]: Initial clock synchronization to Sat 2025-05-17 00:21:57.041650 UTC. May 17 00:21:58.094999 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. May 17 00:21:58.100430 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:21:58.208453 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:21:58.211810 (kubelet)[1646]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 00:21:58.248404 kubelet[1646]: E0517 00:21:58.248344 1646 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:21:58.250992 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:21:58.251162 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:22:08.344801 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. May 17 00:22:08.350537 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:22:08.439653 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
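[Editor's note] Every kubelet start above dies on the same error: /var/lib/kubelet/config.yaml does not exist yet. That file is normally written by kubeadm during init or join, so the failure loop simply means this node has not been joined to a cluster. For illustration only, the expected shape of the file (placeholder content, not a working configuration):

    # kubeadm generates the real file; this sketch only shows the kind/version.
    cat >/var/lib/kubelet/config.yaml <<'EOF'
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    EOF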
May 17 00:22:08.443150 (kubelet)[1661]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 00:22:08.475311 kubelet[1661]: E0517 00:22:08.475193 1661 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:22:08.477602 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:22:08.477719 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:22:11.133073 update_engine[1484]: I20250517 00:22:11.132941 1484 update_attempter.cc:509] Updating boot flags... May 17 00:22:11.165352 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1677) May 17 00:22:11.219299 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1677) May 17 00:22:11.259110 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1677) May 17 00:22:18.595298 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. May 17 00:22:18.601550 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:22:18.719959 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:22:18.725659 (kubelet)[1696]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 00:22:18.763838 kubelet[1696]: E0517 00:22:18.763762 1696 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:22:18.766492 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:22:18.766665 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:22:28.844833 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. May 17 00:22:28.850660 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:22:28.960601 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:22:28.963862 (kubelet)[1713]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 00:22:28.996374 kubelet[1713]: E0517 00:22:28.996323 1713 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:22:28.998889 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:22:28.999076 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:22:39.094856 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. May 17 00:22:39.100496 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
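[Editor's note] The "Scheduled restart job" entries tick the counter up roughly every ten seconds of idle time, which is consistent with a restart policy along these lines (assumed values; verify against the actual unit):

    systemctl cat kubelet.service
    # the [Service] section would contain something like:
    #   Restart=always
    #   RestartSec=10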
May 17 00:22:39.235967 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:22:39.241049 (kubelet)[1728]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 00:22:39.281823 kubelet[1728]: E0517 00:22:39.281742 1728 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:22:39.284704 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:22:39.284868 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:22:49.344846 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. May 17 00:22:49.351627 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:22:49.465305 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:22:49.478704 (kubelet)[1744]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 00:22:49.518629 kubelet[1744]: E0517 00:22:49.518554 1744 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:22:49.521152 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:22:49.521308 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:22:59.594760 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. May 17 00:22:59.610504 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:22:59.699550 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:22:59.710496 (kubelet)[1759]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 00:22:59.741238 kubelet[1759]: E0517 00:22:59.741160 1759 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:22:59.743005 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:22:59.743146 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:23:02.140774 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 17 00:23:02.145498 systemd[1]: Started sshd@0-37.27.204.183:22-139.178.89.65:60642.service - OpenSSH per-connection server daemon (139.178.89.65:60642). May 17 00:23:03.117134 sshd[1767]: Accepted publickey for core from 139.178.89.65 port 60642 ssh2: RSA SHA256:kFcxshSye1IppED0G84lz4/lbUrZJ1wq7wf6p1uuNAE May 17 00:23:03.119661 sshd[1767]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:23:03.128193 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
May 17 00:23:03.137691 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 17 00:23:03.140094 systemd-logind[1483]: New session 1 of user core. May 17 00:23:03.153612 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 17 00:23:03.165641 systemd[1]: Starting user@500.service - User Manager for UID 500... May 17 00:23:03.169031 (systemd)[1771]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 17 00:23:03.275544 systemd[1771]: Queued start job for default target default.target. May 17 00:23:03.286146 systemd[1771]: Created slice app.slice - User Application Slice. May 17 00:23:03.286174 systemd[1771]: Reached target paths.target - Paths. May 17 00:23:03.286185 systemd[1771]: Reached target timers.target - Timers. May 17 00:23:03.287309 systemd[1771]: Starting dbus.socket - D-Bus User Message Bus Socket... May 17 00:23:03.298219 systemd[1771]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 17 00:23:03.298342 systemd[1771]: Reached target sockets.target - Sockets. May 17 00:23:03.298357 systemd[1771]: Reached target basic.target - Basic System. May 17 00:23:03.298387 systemd[1771]: Reached target default.target - Main User Target. May 17 00:23:03.298411 systemd[1771]: Startup finished in 121ms. May 17 00:23:03.298656 systemd[1]: Started user@500.service - User Manager for UID 500. May 17 00:23:03.305390 systemd[1]: Started session-1.scope - Session 1 of User core. May 17 00:23:03.991314 systemd[1]: Started sshd@1-37.27.204.183:22-139.178.89.65:60656.service - OpenSSH per-connection server daemon (139.178.89.65:60656). May 17 00:23:04.960453 sshd[1782]: Accepted publickey for core from 139.178.89.65 port 60656 ssh2: RSA SHA256:kFcxshSye1IppED0G84lz4/lbUrZJ1wq7wf6p1uuNAE May 17 00:23:04.962334 sshd[1782]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:23:04.968017 systemd-logind[1483]: New session 2 of user core. May 17 00:23:04.973718 systemd[1]: Started session-2.scope - Session 2 of User core. May 17 00:23:05.635884 sshd[1782]: pam_unix(sshd:session): session closed for user core May 17 00:23:05.639787 systemd[1]: sshd@1-37.27.204.183:22-139.178.89.65:60656.service: Deactivated successfully. May 17 00:23:05.641290 systemd[1]: session-2.scope: Deactivated successfully. May 17 00:23:05.641951 systemd-logind[1483]: Session 2 logged out. Waiting for processes to exit. May 17 00:23:05.643037 systemd-logind[1483]: Removed session 2. May 17 00:23:05.804876 systemd[1]: Started sshd@2-37.27.204.183:22-139.178.89.65:60658.service - OpenSSH per-connection server daemon (139.178.89.65:60658). May 17 00:23:06.766948 sshd[1789]: Accepted publickey for core from 139.178.89.65 port 60658 ssh2: RSA SHA256:kFcxshSye1IppED0G84lz4/lbUrZJ1wq7wf6p1uuNAE May 17 00:23:06.768664 sshd[1789]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:23:06.774403 systemd-logind[1483]: New session 3 of user core. May 17 00:23:06.780625 systemd[1]: Started session-3.scope - Session 3 of User core. May 17 00:23:07.435442 sshd[1789]: pam_unix(sshd:session): session closed for user core May 17 00:23:07.438505 systemd[1]: sshd@2-37.27.204.183:22-139.178.89.65:60658.service: Deactivated successfully. May 17 00:23:07.440322 systemd[1]: session-3.scope: Deactivated successfully. May 17 00:23:07.441419 systemd-logind[1483]: Session 3 logged out. Waiting for processes to exit. 
May 17 00:23:07.442635 systemd-logind[1483]: Removed session 3. May 17 00:23:07.606597 systemd[1]: Started sshd@3-37.27.204.183:22-139.178.89.65:46502.service - OpenSSH per-connection server daemon (139.178.89.65:46502). May 17 00:23:08.571562 sshd[1796]: Accepted publickey for core from 139.178.89.65 port 46502 ssh2: RSA SHA256:kFcxshSye1IppED0G84lz4/lbUrZJ1wq7wf6p1uuNAE May 17 00:23:08.573284 sshd[1796]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:23:08.578427 systemd-logind[1483]: New session 4 of user core. May 17 00:23:08.584493 systemd[1]: Started session-4.scope - Session 4 of User core. May 17 00:23:09.244795 sshd[1796]: pam_unix(sshd:session): session closed for user core May 17 00:23:09.247905 systemd[1]: sshd@3-37.27.204.183:22-139.178.89.65:46502.service: Deactivated successfully. May 17 00:23:09.249698 systemd[1]: session-4.scope: Deactivated successfully. May 17 00:23:09.250449 systemd-logind[1483]: Session 4 logged out. Waiting for processes to exit. May 17 00:23:09.251376 systemd-logind[1483]: Removed session 4. May 17 00:23:09.411919 systemd[1]: Started sshd@4-37.27.204.183:22-139.178.89.65:46508.service - OpenSSH per-connection server daemon (139.178.89.65:46508). May 17 00:23:09.844851 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10. May 17 00:23:09.850694 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:23:09.938971 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:23:09.941780 (kubelet)[1813]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 00:23:09.975161 kubelet[1813]: E0517 00:23:09.975117 1813 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:23:09.977423 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:23:09.977564 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:23:10.381347 sshd[1803]: Accepted publickey for core from 139.178.89.65 port 46508 ssh2: RSA SHA256:kFcxshSye1IppED0G84lz4/lbUrZJ1wq7wf6p1uuNAE May 17 00:23:10.382768 sshd[1803]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:23:10.387292 systemd-logind[1483]: New session 5 of user core. May 17 00:23:10.398484 systemd[1]: Started session-5.scope - Session 5 of User core. May 17 00:23:10.907503 sudo[1821]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 17 00:23:10.907820 sudo[1821]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 17 00:23:10.920297 sudo[1821]: pam_unix(sudo:session): session closed for user root May 17 00:23:11.078981 sshd[1803]: pam_unix(sshd:session): session closed for user core May 17 00:23:11.083281 systemd-logind[1483]: Session 5 logged out. Waiting for processes to exit. May 17 00:23:11.084154 systemd[1]: sshd@4-37.27.204.183:22-139.178.89.65:46508.service: Deactivated successfully. May 17 00:23:11.086058 systemd[1]: session-5.scope: Deactivated successfully. May 17 00:23:11.087342 systemd-logind[1483]: Removed session 5. 
May 17 00:23:11.245372 systemd[1]: Started sshd@5-37.27.204.183:22-139.178.89.65:46524.service - OpenSSH per-connection server daemon (139.178.89.65:46524). May 17 00:23:12.210772 sshd[1826]: Accepted publickey for core from 139.178.89.65 port 46524 ssh2: RSA SHA256:kFcxshSye1IppED0G84lz4/lbUrZJ1wq7wf6p1uuNAE May 17 00:23:12.212167 sshd[1826]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:23:12.216751 systemd-logind[1483]: New session 6 of user core. May 17 00:23:12.222439 systemd[1]: Started session-6.scope - Session 6 of User core. May 17 00:23:12.727068 sudo[1830]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 17 00:23:12.727377 sudo[1830]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 17 00:23:12.730599 sudo[1830]: pam_unix(sudo:session): session closed for user root May 17 00:23:12.735968 sudo[1829]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules May 17 00:23:12.736271 sudo[1829]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 17 00:23:12.748508 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... May 17 00:23:12.750989 auditctl[1833]: No rules May 17 00:23:12.751380 systemd[1]: audit-rules.service: Deactivated successfully. May 17 00:23:12.751604 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. May 17 00:23:12.753990 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... May 17 00:23:12.776017 augenrules[1851]: No rules May 17 00:23:12.777037 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. May 17 00:23:12.778343 sudo[1829]: pam_unix(sudo:session): session closed for user root May 17 00:23:12.935956 sshd[1826]: pam_unix(sshd:session): session closed for user core May 17 00:23:12.938843 systemd[1]: sshd@5-37.27.204.183:22-139.178.89.65:46524.service: Deactivated successfully. May 17 00:23:12.940426 systemd[1]: session-6.scope: Deactivated successfully. May 17 00:23:12.941520 systemd-logind[1483]: Session 6 logged out. Waiting for processes to exit. May 17 00:23:12.942856 systemd-logind[1483]: Removed session 6. May 17 00:23:13.101436 systemd[1]: Started sshd@6-37.27.204.183:22-139.178.89.65:46538.service - OpenSSH per-connection server daemon (139.178.89.65:46538). May 17 00:23:14.080070 sshd[1859]: Accepted publickey for core from 139.178.89.65 port 46538 ssh2: RSA SHA256:kFcxshSye1IppED0G84lz4/lbUrZJ1wq7wf6p1uuNAE May 17 00:23:14.081732 sshd[1859]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:23:14.086475 systemd-logind[1483]: New session 7 of user core. May 17 00:23:14.097399 systemd[1]: Started session-7.scope - Session 7 of User core. May 17 00:23:14.593698 sudo[1862]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 17 00:23:14.593979 sudo[1862]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 17 00:23:14.844551 systemd[1]: Starting docker.service - Docker Application Container Engine... 
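The sudo commands in session 6 above amount to resetting auditd to an empty rule set: the install script deletes the SELinux and default rule files and restarts audit-rules, after which both auditctl and augenrules report "No rules". Replayed as a plain shell sequence (a sketch of what the log shows, assuming a root-capable shell on the node):

  sudo rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
  sudo systemctl restart audit-rules
  sudo auditctl -l    # expected to print "No rules" once the files are gone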
May 17 00:23:14.844831 (dockerd)[1877]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 17 00:23:15.081399 dockerd[1877]: time="2025-05-17T00:23:15.081319223Z" level=info msg="Starting up" May 17 00:23:15.139908 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3939156750-merged.mount: Deactivated successfully. May 17 00:23:15.165283 dockerd[1877]: time="2025-05-17T00:23:15.165078968Z" level=info msg="Loading containers: start." May 17 00:23:15.253300 kernel: Initializing XFRM netlink socket May 17 00:23:15.321046 systemd-networkd[1393]: docker0: Link UP May 17 00:23:15.343502 dockerd[1877]: time="2025-05-17T00:23:15.343450128Z" level=info msg="Loading containers: done." May 17 00:23:15.355195 dockerd[1877]: time="2025-05-17T00:23:15.355147777Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 17 00:23:15.355375 dockerd[1877]: time="2025-05-17T00:23:15.355262543Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 May 17 00:23:15.355375 dockerd[1877]: time="2025-05-17T00:23:15.355340679Z" level=info msg="Daemon has completed initialization" May 17 00:23:15.380763 dockerd[1877]: time="2025-05-17T00:23:15.380696573Z" level=info msg="API listen on /run/docker.sock" May 17 00:23:15.380994 systemd[1]: Started docker.service - Docker Application Container Engine. May 17 00:23:16.137038 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck761832425-merged.mount: Deactivated successfully. May 17 00:23:16.416463 containerd[1504]: time="2025-05-17T00:23:16.416323169Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.5\"" May 17 00:23:16.960296 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount435324832.mount: Deactivated successfully. 
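dockerd 26.1.0 comes up here on the overlay2 storage driver, warning that native diff is disabled because the kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled; per the message itself this only degrades performance when building images, not container runtime behaviour. Two illustrative checks against the running daemon (assuming it is reachable on the default /run/docker.sock):

  docker info --format '{{.Driver}}'              # expect: overlay2
  docker version --format '{{.Server.Version}}'   # expect: 26.1.0, matching the log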
May 17 00:23:18.266272 containerd[1504]: time="2025-05-17T00:23:18.266198131Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:23:18.267302 containerd[1504]: time="2025-05-17T00:23:18.267234817Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.5: active requests=0, bytes read=28797905" May 17 00:23:18.268059 containerd[1504]: time="2025-05-17T00:23:18.268012429Z" level=info msg="ImageCreate event name:\"sha256:495c5ce47cf7c8b58655ef50d0f0a9b43c5ae18492059dc9af4c9aacae82a5a4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:23:18.270354 containerd[1504]: time="2025-05-17T00:23:18.270307199Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0bee1bf751fe06009678c0cde7545443ba3a8d2edf71cea4c69cbb5774b9bf47\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:23:18.271339 containerd[1504]: time="2025-05-17T00:23:18.271097423Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.5\" with image id \"sha256:495c5ce47cf7c8b58655ef50d0f0a9b43c5ae18492059dc9af4c9aacae82a5a4\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0bee1bf751fe06009678c0cde7545443ba3a8d2edf71cea4c69cbb5774b9bf47\", size \"28794611\" in 1.854730902s" May 17 00:23:18.271339 containerd[1504]: time="2025-05-17T00:23:18.271129344Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.5\" returns image reference \"sha256:495c5ce47cf7c8b58655ef50d0f0a9b43c5ae18492059dc9af4c9aacae82a5a4\"" May 17 00:23:18.271871 containerd[1504]: time="2025-05-17T00:23:18.271840008Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.5\"" May 17 00:23:19.643560 containerd[1504]: time="2025-05-17T00:23:19.643477386Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:23:19.644537 containerd[1504]: time="2025-05-17T00:23:19.644493364Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.5: active requests=0, bytes read=24782545" May 17 00:23:19.645500 containerd[1504]: time="2025-05-17T00:23:19.645460812Z" level=info msg="ImageCreate event name:\"sha256:85dcaf69f000132c34fa34452e0fd8444bdf360b593fe06b1103680f6ecc7e00\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:23:19.647956 containerd[1504]: time="2025-05-17T00:23:19.647910994Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:79bcf2f5e614c336c02dcea9dfcdf485d7297aed6a21239a99c87f7164f9baca\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:23:19.648932 containerd[1504]: time="2025-05-17T00:23:19.648825411Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.5\" with image id \"sha256:85dcaf69f000132c34fa34452e0fd8444bdf360b593fe06b1103680f6ecc7e00\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:79bcf2f5e614c336c02dcea9dfcdf485d7297aed6a21239a99c87f7164f9baca\", size \"26384363\" in 1.376939267s" May 17 00:23:19.648932 containerd[1504]: time="2025-05-17T00:23:19.648855507Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.5\" returns image reference \"sha256:85dcaf69f000132c34fa34452e0fd8444bdf360b593fe06b1103680f6ecc7e00\"" May 17 00:23:19.649616 
containerd[1504]: time="2025-05-17T00:23:19.649594275Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.5\"" May 17 00:23:20.094832 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11. May 17 00:23:20.100484 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:23:20.190908 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:23:20.199546 (kubelet)[2080]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 00:23:20.245139 kubelet[2080]: E0517 00:23:20.245073 2080 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:23:20.247518 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:23:20.247659 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:23:20.849528 containerd[1504]: time="2025-05-17T00:23:20.849452427Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:23:20.850466 containerd[1504]: time="2025-05-17T00:23:20.850411959Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.5: active requests=0, bytes read=19176085" May 17 00:23:20.851174 containerd[1504]: time="2025-05-17T00:23:20.851118706Z" level=info msg="ImageCreate event name:\"sha256:2729fb488407e634105c62238a45a599db1692680526e20844060a7a8197b45a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:23:20.853611 containerd[1504]: time="2025-05-17T00:23:20.853569469Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f0f39d8b9808c407cacb3a46a5a9ce4d4a4a7cf3b674ba4bd221f5bc90051d2a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:23:20.854631 containerd[1504]: time="2025-05-17T00:23:20.854518892Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.5\" with image id \"sha256:2729fb488407e634105c62238a45a599db1692680526e20844060a7a8197b45a\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f0f39d8b9808c407cacb3a46a5a9ce4d4a4a7cf3b674ba4bd221f5bc90051d2a\", size \"20777921\" in 1.204896775s" May 17 00:23:20.854631 containerd[1504]: time="2025-05-17T00:23:20.854546905Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.5\" returns image reference \"sha256:2729fb488407e634105c62238a45a599db1692680526e20844060a7a8197b45a\"" May 17 00:23:20.855196 containerd[1504]: time="2025-05-17T00:23:20.855052784Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.5\"" May 17 00:23:21.932691 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3339968795.mount: Deactivated successfully. 
May 17 00:23:22.216344 containerd[1504]: time="2025-05-17T00:23:22.216158371Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:23:22.217359 containerd[1504]: time="2025-05-17T00:23:22.217308701Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.5: active requests=0, bytes read=30892900" May 17 00:23:22.218381 containerd[1504]: time="2025-05-17T00:23:22.218320861Z" level=info msg="ImageCreate event name:\"sha256:f532b7356fac4d7c4e4f6763bb5a15a43e3bb740c9fb26c85b906a4d971f2363\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:23:22.220162 containerd[1504]: time="2025-05-17T00:23:22.220109781Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:9dc6553459c3319525ba4090a780db1a133d5dee68c08e07f9b9d6ba83b42a0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:23:22.220794 containerd[1504]: time="2025-05-17T00:23:22.220550638Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.5\" with image id \"sha256:f532b7356fac4d7c4e4f6763bb5a15a43e3bb740c9fb26c85b906a4d971f2363\", repo tag \"registry.k8s.io/kube-proxy:v1.32.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:9dc6553459c3319525ba4090a780db1a133d5dee68c08e07f9b9d6ba83b42a0b\", size \"30891891\" in 1.365247093s" May 17 00:23:22.220794 containerd[1504]: time="2025-05-17T00:23:22.220581796Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.5\" returns image reference \"sha256:f532b7356fac4d7c4e4f6763bb5a15a43e3bb740c9fb26c85b906a4d971f2363\"" May 17 00:23:22.221120 containerd[1504]: time="2025-05-17T00:23:22.221078208Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" May 17 00:23:22.736982 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4083034532.mount: Deactivated successfully. 
May 17 00:23:23.403274 containerd[1504]: time="2025-05-17T00:23:23.403192134Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:23:23.404395 containerd[1504]: time="2025-05-17T00:23:23.404357943Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565335" May 17 00:23:23.405500 containerd[1504]: time="2025-05-17T00:23:23.405458089Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:23:23.411315 containerd[1504]: time="2025-05-17T00:23:23.411266362Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:23:23.412284 containerd[1504]: time="2025-05-17T00:23:23.411757545Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.190646575s" May 17 00:23:23.412284 containerd[1504]: time="2025-05-17T00:23:23.411787702Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" May 17 00:23:23.413966 containerd[1504]: time="2025-05-17T00:23:23.412700556Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 17 00:23:23.874176 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1180337175.mount: Deactivated successfully. 
May 17 00:23:23.881199 containerd[1504]: time="2025-05-17T00:23:23.881118016Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:23:23.882153 containerd[1504]: time="2025-05-17T00:23:23.882102194Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321160" May 17 00:23:23.883142 containerd[1504]: time="2025-05-17T00:23:23.883090830Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:23:23.885229 containerd[1504]: time="2025-05-17T00:23:23.885171626Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:23:23.886024 containerd[1504]: time="2025-05-17T00:23:23.885861963Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 471.825089ms" May 17 00:23:23.886024 containerd[1504]: time="2025-05-17T00:23:23.885894033Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" May 17 00:23:23.886540 containerd[1504]: time="2025-05-17T00:23:23.886405402Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" May 17 00:23:24.443414 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount965428749.mount: Deactivated successfully. May 17 00:23:26.059453 containerd[1504]: time="2025-05-17T00:23:26.059380599Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:23:26.060986 containerd[1504]: time="2025-05-17T00:23:26.060927733Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551430" May 17 00:23:26.061811 containerd[1504]: time="2025-05-17T00:23:26.061760175Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:23:26.064996 containerd[1504]: time="2025-05-17T00:23:26.064900199Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:23:26.067014 containerd[1504]: time="2025-05-17T00:23:26.066363385Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.179926954s" May 17 00:23:26.067014 containerd[1504]: time="2025-05-17T00:23:26.066424190Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" May 17 00:23:28.376892 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
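At this point the full control-plane image set has been pulled through containerd: kube-apiserver, kube-controller-manager, kube-scheduler and kube-proxy at v1.32.5, coredns v1.11.3, pause 3.10 and etcd 3.5.16-0. If crictl is installed (an assumption; it does not appear in this log), the image cache can be listed directly over the default containerd socket:

  crictl --runtime-endpoint unix:///run/containerd/containerd.sock images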
May 17 00:23:28.386608 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:23:28.436175 systemd[1]: Reloading requested from client PID 2237 ('systemctl') (unit session-7.scope)... May 17 00:23:28.436409 systemd[1]: Reloading... May 17 00:23:28.557305 zram_generator::config[2283]: No configuration found. May 17 00:23:28.660066 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:23:28.736716 systemd[1]: Reloading finished in 299 ms. May 17 00:23:28.775989 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 17 00:23:28.776102 systemd[1]: kubelet.service: Failed with result 'signal'. May 17 00:23:28.776428 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:23:28.777995 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:23:28.878365 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:23:28.884758 (kubelet)[2331]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 17 00:23:28.936196 kubelet[2331]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 00:23:28.937091 kubelet[2331]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 17 00:23:28.937091 kubelet[2331]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
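The deprecation warnings above all point at the same migration: --container-runtime-endpoint and --volume-plugin-dir now belong in the KubeletConfiguration file, while --pod-infra-container-image is going away entirely because the sandbox image will be taken from the CRI runtime. Purely as an illustration of that mapping (not a fix to apply on a kubeadm-managed node, where kubeadm owns this file), the flagged options correspond to fields like the following; the systemd cgroup driver is confirmed by the container-manager dump further down:

  # sketch of /var/lib/kubelet/config.yaml fields matching the deprecated flags
  apiVersion: kubelet.config.k8s.io/v1beta1
  kind: KubeletConfiguration
  containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
  volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/
  cgroupDriver: systemd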
May 17 00:23:28.937091 kubelet[2331]: I0517 00:23:28.936676 2331 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 17 00:23:29.347373 kubelet[2331]: I0517 00:23:29.347323 2331 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" May 17 00:23:29.347373 kubelet[2331]: I0517 00:23:29.347355 2331 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 17 00:23:29.347676 kubelet[2331]: I0517 00:23:29.347651 2331 server.go:954] "Client rotation is on, will bootstrap in background" May 17 00:23:29.386706 kubelet[2331]: E0517 00:23:29.386177 2331 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://37.27.204.183:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 37.27.204.183:6443: connect: connection refused" logger="UnhandledError" May 17 00:23:29.386706 kubelet[2331]: I0517 00:23:29.386483 2331 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 17 00:23:29.403928 kubelet[2331]: E0517 00:23:29.403827 2331 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 17 00:23:29.403928 kubelet[2331]: I0517 00:23:29.403912 2331 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 17 00:23:29.408901 kubelet[2331]: I0517 00:23:29.408828 2331 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 17 00:23:29.410817 kubelet[2331]: I0517 00:23:29.410746 2331 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 17 00:23:29.411068 kubelet[2331]: I0517 00:23:29.410804 2331 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-3-n-82e895e080","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 17 00:23:29.413401 kubelet[2331]: I0517 00:23:29.413364 2331 topology_manager.go:138] "Creating topology manager with none policy" May 17 00:23:29.413401 kubelet[2331]: I0517 00:23:29.413396 2331 container_manager_linux.go:304] "Creating device plugin manager" May 17 00:23:29.414659 kubelet[2331]: I0517 00:23:29.414626 2331 state_mem.go:36] "Initialized new in-memory state store" May 17 00:23:29.418266 kubelet[2331]: I0517 00:23:29.418207 2331 kubelet.go:446] "Attempting to sync node with API server" May 17 00:23:29.418266 kubelet[2331]: I0517 00:23:29.418266 2331 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 17 00:23:29.418342 kubelet[2331]: I0517 00:23:29.418295 2331 kubelet.go:352] "Adding apiserver pod source" May 17 00:23:29.418342 kubelet[2331]: I0517 00:23:29.418311 2331 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 17 00:23:29.422637 kubelet[2331]: W0517 00:23:29.422223 2331 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://37.27.204.183:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-3-n-82e895e080&limit=500&resourceVersion=0": dial tcp 37.27.204.183:6443: connect: connection refused May 17 00:23:29.422914 kubelet[2331]: E0517 00:23:29.422713 2331 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://37.27.204.183:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-3-n-82e895e080&limit=500&resourceVersion=0\": dial tcp 37.27.204.183:6443: connect: connection refused" logger="UnhandledError" May 17 00:23:29.423935 
kubelet[2331]: I0517 00:23:29.423891 2331 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 17 00:23:29.426973 kubelet[2331]: I0517 00:23:29.426453 2331 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 17 00:23:29.426973 kubelet[2331]: W0517 00:23:29.426516 2331 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 17 00:23:29.432346 kubelet[2331]: I0517 00:23:29.432325 2331 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 17 00:23:29.432617 kubelet[2331]: I0517 00:23:29.432559 2331 server.go:1287] "Started kubelet" May 17 00:23:29.436119 kubelet[2331]: W0517 00:23:29.436053 2331 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://37.27.204.183:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 37.27.204.183:6443: connect: connection refused May 17 00:23:29.436119 kubelet[2331]: E0517 00:23:29.436119 2331 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://37.27.204.183:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 37.27.204.183:6443: connect: connection refused" logger="UnhandledError" May 17 00:23:29.436733 kubelet[2331]: I0517 00:23:29.436278 2331 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 17 00:23:29.439457 kubelet[2331]: I0517 00:23:29.439314 2331 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 17 00:23:29.439704 kubelet[2331]: I0517 00:23:29.439674 2331 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 17 00:23:29.442622 kubelet[2331]: I0517 00:23:29.442432 2331 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 17 00:23:29.448866 kubelet[2331]: E0517 00:23:29.441448 2331 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://37.27.204.183:6443/api/v1/namespaces/default/events\": dial tcp 37.27.204.183:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-3-n-82e895e080.184028b41b08ae75 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-3-n-82e895e080,UID:ci-4081-3-3-n-82e895e080,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-3-n-82e895e080,},FirstTimestamp:2025-05-17 00:23:29.432530549 +0000 UTC m=+0.544305111,LastTimestamp:2025-05-17 00:23:29.432530549 +0000 UTC m=+0.544305111,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-3-n-82e895e080,}" May 17 00:23:29.451267 kubelet[2331]: I0517 00:23:29.449534 2331 server.go:479] "Adding debug handlers to kubelet server" May 17 00:23:29.451267 kubelet[2331]: I0517 00:23:29.449855 2331 volume_manager.go:297] "Starting Kubelet Volume Manager" May 17 00:23:29.451267 kubelet[2331]: I0517 00:23:29.450378 2331 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 17 00:23:29.452079 kubelet[2331]: E0517 
00:23:29.452049 2331 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-3-n-82e895e080\" not found" May 17 00:23:29.454077 kubelet[2331]: I0517 00:23:29.454055 2331 desired_state_of_world_populator.go:150] "Desired state populator starts to run" May 17 00:23:29.454386 kubelet[2331]: I0517 00:23:29.454373 2331 reconciler.go:26] "Reconciler: start to sync state" May 17 00:23:29.458887 kubelet[2331]: W0517 00:23:29.458806 2331 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://37.27.204.183:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 37.27.204.183:6443: connect: connection refused May 17 00:23:29.458941 kubelet[2331]: E0517 00:23:29.458894 2331 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://37.27.204.183:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 37.27.204.183:6443: connect: connection refused" logger="UnhandledError" May 17 00:23:29.459013 kubelet[2331]: E0517 00:23:29.458983 2331 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://37.27.204.183:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-3-n-82e895e080?timeout=10s\": dial tcp 37.27.204.183:6443: connect: connection refused" interval="200ms" May 17 00:23:29.459264 kubelet[2331]: I0517 00:23:29.459217 2331 factory.go:221] Registration of the systemd container factory successfully May 17 00:23:29.459418 kubelet[2331]: I0517 00:23:29.459389 2331 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 17 00:23:29.461540 kubelet[2331]: E0517 00:23:29.461503 2331 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 17 00:23:29.461666 kubelet[2331]: I0517 00:23:29.461642 2331 factory.go:221] Registration of the containerd container factory successfully May 17 00:23:29.471313 kubelet[2331]: I0517 00:23:29.471268 2331 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 17 00:23:29.472315 kubelet[2331]: I0517 00:23:29.472291 2331 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 17 00:23:29.472696 kubelet[2331]: I0517 00:23:29.472411 2331 status_manager.go:227] "Starting to sync pod status with apiserver" May 17 00:23:29.472696 kubelet[2331]: I0517 00:23:29.472452 2331 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
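Every list/watch, lease and event call in this stretch fails with "connect: connection refused" against https://37.27.204.183:6443 for the same reason: the kubelet is now up, but the kube-apiserver it talks to runs as a static pod that has not been started yet. Until the pod sandboxes below come up, a probe from the node would show the port closed (illustrative commands):

  curl -sk https://37.27.204.183:6443/healthz; echo   # connection refused until the apiserver starts
  ss -tlnp | grep 6443                                # no listener on 6443 yet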
May 17 00:23:29.472696 kubelet[2331]: I0517 00:23:29.472461 2331 kubelet.go:2382] "Starting kubelet main sync loop" May 17 00:23:29.472696 kubelet[2331]: E0517 00:23:29.472504 2331 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 17 00:23:29.478985 kubelet[2331]: W0517 00:23:29.478901 2331 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://37.27.204.183:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 37.27.204.183:6443: connect: connection refused May 17 00:23:29.479078 kubelet[2331]: E0517 00:23:29.479010 2331 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://37.27.204.183:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 37.27.204.183:6443: connect: connection refused" logger="UnhandledError" May 17 00:23:29.488619 kubelet[2331]: I0517 00:23:29.488597 2331 cpu_manager.go:221] "Starting CPU manager" policy="none" May 17 00:23:29.489000 kubelet[2331]: I0517 00:23:29.488748 2331 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 17 00:23:29.489000 kubelet[2331]: I0517 00:23:29.488770 2331 state_mem.go:36] "Initialized new in-memory state store" May 17 00:23:29.491234 kubelet[2331]: I0517 00:23:29.491222 2331 policy_none.go:49] "None policy: Start" May 17 00:23:29.491321 kubelet[2331]: I0517 00:23:29.491313 2331 memory_manager.go:186] "Starting memorymanager" policy="None" May 17 00:23:29.491379 kubelet[2331]: I0517 00:23:29.491367 2331 state_mem.go:35] "Initializing new in-memory state store" May 17 00:23:29.497284 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 17 00:23:29.508662 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 17 00:23:29.513451 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 17 00:23:29.526471 kubelet[2331]: I0517 00:23:29.525702 2331 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 17 00:23:29.526471 kubelet[2331]: I0517 00:23:29.525977 2331 eviction_manager.go:189] "Eviction manager: starting control loop" May 17 00:23:29.526471 kubelet[2331]: I0517 00:23:29.525991 2331 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 17 00:23:29.529190 kubelet[2331]: I0517 00:23:29.529176 2331 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 17 00:23:29.529328 kubelet[2331]: E0517 00:23:29.529235 2331 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 17 00:23:29.529421 kubelet[2331]: E0517 00:23:29.529396 2331 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-3-n-82e895e080\" not found" May 17 00:23:29.580429 systemd[1]: Created slice kubepods-burstable-pod6e72c3389c76372bf2bedee926a923a6.slice - libcontainer container kubepods-burstable-pod6e72c3389c76372bf2bedee926a923a6.slice. 
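The kubepods-burstable-pod<uid>.slice units created here (and continuing just below) are the cgroup homes for the static control-plane pods the kubelet found under /etc/kubernetes/manifests, the static pod path it registered at startup. On a kubeadm-style node one would expect that directory to hold the matching manifests (a sketch; the exact file set is not shown in this log):

  ls /etc/kubernetes/manifests
  # e.g. kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml
  systemctl list-units 'kubepods-burstable-pod*' --no-pager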
May 17 00:23:29.601481 kubelet[2331]: E0517 00:23:29.599851 2331 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-3-n-82e895e080\" not found" node="ci-4081-3-3-n-82e895e080" May 17 00:23:29.605738 systemd[1]: Created slice kubepods-burstable-podf3f7f4776c98bacf4e57c95582b64d30.slice - libcontainer container kubepods-burstable-podf3f7f4776c98bacf4e57c95582b64d30.slice. May 17 00:23:29.609931 kubelet[2331]: E0517 00:23:29.609890 2331 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-3-n-82e895e080\" not found" node="ci-4081-3-3-n-82e895e080" May 17 00:23:29.612366 systemd[1]: Created slice kubepods-burstable-pod81052a80b17c95356767e10e54687d5d.slice - libcontainer container kubepods-burstable-pod81052a80b17c95356767e10e54687d5d.slice. May 17 00:23:29.614104 kubelet[2331]: E0517 00:23:29.614024 2331 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-3-n-82e895e080\" not found" node="ci-4081-3-3-n-82e895e080" May 17 00:23:29.629345 kubelet[2331]: I0517 00:23:29.629291 2331 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-3-n-82e895e080" May 17 00:23:29.629741 kubelet[2331]: E0517 00:23:29.629703 2331 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://37.27.204.183:6443/api/v1/nodes\": dial tcp 37.27.204.183:6443: connect: connection refused" node="ci-4081-3-3-n-82e895e080" May 17 00:23:29.656435 kubelet[2331]: I0517 00:23:29.656379 2331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6e72c3389c76372bf2bedee926a923a6-ca-certs\") pod \"kube-apiserver-ci-4081-3-3-n-82e895e080\" (UID: \"6e72c3389c76372bf2bedee926a923a6\") " pod="kube-system/kube-apiserver-ci-4081-3-3-n-82e895e080" May 17 00:23:29.656435 kubelet[2331]: I0517 00:23:29.656445 2331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f3f7f4776c98bacf4e57c95582b64d30-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-3-n-82e895e080\" (UID: \"f3f7f4776c98bacf4e57c95582b64d30\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-82e895e080" May 17 00:23:29.656919 kubelet[2331]: I0517 00:23:29.656481 2331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f3f7f4776c98bacf4e57c95582b64d30-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-3-n-82e895e080\" (UID: \"f3f7f4776c98bacf4e57c95582b64d30\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-82e895e080" May 17 00:23:29.656919 kubelet[2331]: I0517 00:23:29.656537 2331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f3f7f4776c98bacf4e57c95582b64d30-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-3-n-82e895e080\" (UID: \"f3f7f4776c98bacf4e57c95582b64d30\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-82e895e080" May 17 00:23:29.656919 kubelet[2331]: I0517 00:23:29.656631 2331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/f3f7f4776c98bacf4e57c95582b64d30-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-3-n-82e895e080\" (UID: \"f3f7f4776c98bacf4e57c95582b64d30\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-82e895e080" May 17 00:23:29.656919 kubelet[2331]: I0517 00:23:29.656653 2331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/81052a80b17c95356767e10e54687d5d-kubeconfig\") pod \"kube-scheduler-ci-4081-3-3-n-82e895e080\" (UID: \"81052a80b17c95356767e10e54687d5d\") " pod="kube-system/kube-scheduler-ci-4081-3-3-n-82e895e080" May 17 00:23:29.656919 kubelet[2331]: I0517 00:23:29.656667 2331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6e72c3389c76372bf2bedee926a923a6-k8s-certs\") pod \"kube-apiserver-ci-4081-3-3-n-82e895e080\" (UID: \"6e72c3389c76372bf2bedee926a923a6\") " pod="kube-system/kube-apiserver-ci-4081-3-3-n-82e895e080" May 17 00:23:29.657083 kubelet[2331]: I0517 00:23:29.656682 2331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6e72c3389c76372bf2bedee926a923a6-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-3-n-82e895e080\" (UID: \"6e72c3389c76372bf2bedee926a923a6\") " pod="kube-system/kube-apiserver-ci-4081-3-3-n-82e895e080" May 17 00:23:29.657083 kubelet[2331]: I0517 00:23:29.656696 2331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f3f7f4776c98bacf4e57c95582b64d30-ca-certs\") pod \"kube-controller-manager-ci-4081-3-3-n-82e895e080\" (UID: \"f3f7f4776c98bacf4e57c95582b64d30\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-82e895e080" May 17 00:23:29.660013 kubelet[2331]: E0517 00:23:29.659943 2331 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://37.27.204.183:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-3-n-82e895e080?timeout=10s\": dial tcp 37.27.204.183:6443: connect: connection refused" interval="400ms" May 17 00:23:29.832404 kubelet[2331]: I0517 00:23:29.832353 2331 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-3-n-82e895e080" May 17 00:23:29.832738 kubelet[2331]: E0517 00:23:29.832691 2331 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://37.27.204.183:6443/api/v1/nodes\": dial tcp 37.27.204.183:6443: connect: connection refused" node="ci-4081-3-3-n-82e895e080" May 17 00:23:29.906174 containerd[1504]: time="2025-05-17T00:23:29.905855911Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-3-n-82e895e080,Uid:6e72c3389c76372bf2bedee926a923a6,Namespace:kube-system,Attempt:0,}" May 17 00:23:29.914864 containerd[1504]: time="2025-05-17T00:23:29.914747097Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-3-n-82e895e080,Uid:f3f7f4776c98bacf4e57c95582b64d30,Namespace:kube-system,Attempt:0,}" May 17 00:23:29.915380 containerd[1504]: time="2025-05-17T00:23:29.915355018Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-3-n-82e895e080,Uid:81052a80b17c95356767e10e54687d5d,Namespace:kube-system,Attempt:0,}" May 17 00:23:30.060885 kubelet[2331]: E0517 00:23:30.060768 2331 
controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://37.27.204.183:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-3-n-82e895e080?timeout=10s\": dial tcp 37.27.204.183:6443: connect: connection refused" interval="800ms" May 17 00:23:30.234749 kubelet[2331]: I0517 00:23:30.234646 2331 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-3-n-82e895e080" May 17 00:23:30.235292 kubelet[2331]: E0517 00:23:30.235229 2331 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://37.27.204.183:6443/api/v1/nodes\": dial tcp 37.27.204.183:6443: connect: connection refused" node="ci-4081-3-3-n-82e895e080" May 17 00:23:30.358622 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3523492659.mount: Deactivated successfully. May 17 00:23:30.365052 containerd[1504]: time="2025-05-17T00:23:30.365005292Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 00:23:30.366478 containerd[1504]: time="2025-05-17T00:23:30.366361107Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 17 00:23:30.367272 containerd[1504]: time="2025-05-17T00:23:30.367222052Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 00:23:30.367924 containerd[1504]: time="2025-05-17T00:23:30.367871381Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 00:23:30.369112 containerd[1504]: time="2025-05-17T00:23:30.369074408Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 00:23:30.369618 containerd[1504]: time="2025-05-17T00:23:30.369512160Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312078" May 17 00:23:30.370429 containerd[1504]: time="2025-05-17T00:23:30.370395798Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 17 00:23:30.372534 containerd[1504]: time="2025-05-17T00:23:30.372484699Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 00:23:30.374513 containerd[1504]: time="2025-05-17T00:23:30.373287535Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 467.349779ms" May 17 00:23:30.374513 containerd[1504]: time="2025-05-17T00:23:30.374433826Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 459.530416ms" May 17 00:23:30.379882 containerd[1504]: time="2025-05-17T00:23:30.379785668Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 464.373353ms" May 17 00:23:30.451473 kubelet[2331]: W0517 00:23:30.451390 2331 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://37.27.204.183:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 37.27.204.183:6443: connect: connection refused May 17 00:23:30.451473 kubelet[2331]: E0517 00:23:30.451440 2331 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://37.27.204.183:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 37.27.204.183:6443: connect: connection refused" logger="UnhandledError" May 17 00:23:30.478341 containerd[1504]: time="2025-05-17T00:23:30.477185828Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:23:30.478341 containerd[1504]: time="2025-05-17T00:23:30.477224812Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:23:30.478341 containerd[1504]: time="2025-05-17T00:23:30.477233738Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:23:30.478341 containerd[1504]: time="2025-05-17T00:23:30.477317746Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:23:30.481000 containerd[1504]: time="2025-05-17T00:23:30.480755206Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:23:30.481000 containerd[1504]: time="2025-05-17T00:23:30.480811823Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:23:30.481000 containerd[1504]: time="2025-05-17T00:23:30.480846558Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:23:30.481000 containerd[1504]: time="2025-05-17T00:23:30.480922151Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:23:30.486121 containerd[1504]: time="2025-05-17T00:23:30.485235835Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:23:30.486121 containerd[1504]: time="2025-05-17T00:23:30.485807839Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:23:30.486121 containerd[1504]: time="2025-05-17T00:23:30.485818799Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:23:30.487148 containerd[1504]: time="2025-05-17T00:23:30.486067355Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:23:30.502411 systemd[1]: Started cri-containerd-df7b37f341ae9c8d84d0d931efbe62c6e2fbc8f887e5bbf83ccab4ae10cce2b6.scope - libcontainer container df7b37f341ae9c8d84d0d931efbe62c6e2fbc8f887e5bbf83ccab4ae10cce2b6. May 17 00:23:30.506351 systemd[1]: Started cri-containerd-6bd138677e4cefaee9dae64cb83fbaeef028548f969167dd00f5c9b5fe3e7d2b.scope - libcontainer container 6bd138677e4cefaee9dae64cb83fbaeef028548f969167dd00f5c9b5fe3e7d2b. May 17 00:23:30.526402 systemd[1]: Started cri-containerd-38529f19f5076f3e37bcecfeb6f1c127e9bee5c3334651d21676765f64c123be.scope - libcontainer container 38529f19f5076f3e37bcecfeb6f1c127e9bee5c3334651d21676765f64c123be. May 17 00:23:30.567419 containerd[1504]: time="2025-05-17T00:23:30.567372731Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-3-n-82e895e080,Uid:6e72c3389c76372bf2bedee926a923a6,Namespace:kube-system,Attempt:0,} returns sandbox id \"6bd138677e4cefaee9dae64cb83fbaeef028548f969167dd00f5c9b5fe3e7d2b\"" May 17 00:23:30.580274 containerd[1504]: time="2025-05-17T00:23:30.579467038Z" level=info msg="CreateContainer within sandbox \"6bd138677e4cefaee9dae64cb83fbaeef028548f969167dd00f5c9b5fe3e7d2b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 17 00:23:30.592558 containerd[1504]: time="2025-05-17T00:23:30.592525452Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-3-n-82e895e080,Uid:f3f7f4776c98bacf4e57c95582b64d30,Namespace:kube-system,Attempt:0,} returns sandbox id \"38529f19f5076f3e37bcecfeb6f1c127e9bee5c3334651d21676765f64c123be\"" May 17 00:23:30.593108 containerd[1504]: time="2025-05-17T00:23:30.593092617Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-3-n-82e895e080,Uid:81052a80b17c95356767e10e54687d5d,Namespace:kube-system,Attempt:0,} returns sandbox id \"df7b37f341ae9c8d84d0d931efbe62c6e2fbc8f887e5bbf83ccab4ae10cce2b6\"" May 17 00:23:30.597994 containerd[1504]: time="2025-05-17T00:23:30.597974338Z" level=info msg="CreateContainer within sandbox \"df7b37f341ae9c8d84d0d931efbe62c6e2fbc8f887e5bbf83ccab4ae10cce2b6\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 17 00:23:30.598200 containerd[1504]: time="2025-05-17T00:23:30.598040973Z" level=info msg="CreateContainer within sandbox \"38529f19f5076f3e37bcecfeb6f1c127e9bee5c3334651d21676765f64c123be\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 17 00:23:30.601590 containerd[1504]: time="2025-05-17T00:23:30.601552824Z" level=info msg="CreateContainer within sandbox \"6bd138677e4cefaee9dae64cb83fbaeef028548f969167dd00f5c9b5fe3e7d2b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"59eaef6353f4f372396b05224a655adaffcd90fac94b33aa66e75fc04d21e8d9\"" May 17 00:23:30.601968 containerd[1504]: time="2025-05-17T00:23:30.601946302Z" level=info msg="StartContainer for \"59eaef6353f4f372396b05224a655adaffcd90fac94b33aa66e75fc04d21e8d9\"" May 17 00:23:30.618872 containerd[1504]: time="2025-05-17T00:23:30.618819185Z" level=info msg="CreateContainer within sandbox \"df7b37f341ae9c8d84d0d931efbe62c6e2fbc8f887e5bbf83ccab4ae10cce2b6\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id 
\"5ee9258fbbec14724b3337ae5d874c02c2264de7a62e8dc13e72ea0e090ab9b5\"" May 17 00:23:30.620285 containerd[1504]: time="2025-05-17T00:23:30.619586314Z" level=info msg="StartContainer for \"5ee9258fbbec14724b3337ae5d874c02c2264de7a62e8dc13e72ea0e090ab9b5\"" May 17 00:23:30.625535 containerd[1504]: time="2025-05-17T00:23:30.625512406Z" level=info msg="CreateContainer within sandbox \"38529f19f5076f3e37bcecfeb6f1c127e9bee5c3334651d21676765f64c123be\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"154696a55a29b6e2212cbee42e0802b76d596d120b0b8518d9291c47d9dad359\"" May 17 00:23:30.626002 systemd[1]: Started cri-containerd-59eaef6353f4f372396b05224a655adaffcd90fac94b33aa66e75fc04d21e8d9.scope - libcontainer container 59eaef6353f4f372396b05224a655adaffcd90fac94b33aa66e75fc04d21e8d9. May 17 00:23:30.627199 containerd[1504]: time="2025-05-17T00:23:30.626436861Z" level=info msg="StartContainer for \"154696a55a29b6e2212cbee42e0802b76d596d120b0b8518d9291c47d9dad359\"" May 17 00:23:30.649447 systemd[1]: Started cri-containerd-154696a55a29b6e2212cbee42e0802b76d596d120b0b8518d9291c47d9dad359.scope - libcontainer container 154696a55a29b6e2212cbee42e0802b76d596d120b0b8518d9291c47d9dad359. May 17 00:23:30.664373 systemd[1]: Started cri-containerd-5ee9258fbbec14724b3337ae5d874c02c2264de7a62e8dc13e72ea0e090ab9b5.scope - libcontainer container 5ee9258fbbec14724b3337ae5d874c02c2264de7a62e8dc13e72ea0e090ab9b5. May 17 00:23:30.677753 containerd[1504]: time="2025-05-17T00:23:30.677675655Z" level=info msg="StartContainer for \"59eaef6353f4f372396b05224a655adaffcd90fac94b33aa66e75fc04d21e8d9\" returns successfully" May 17 00:23:30.708395 containerd[1504]: time="2025-05-17T00:23:30.708303971Z" level=info msg="StartContainer for \"154696a55a29b6e2212cbee42e0802b76d596d120b0b8518d9291c47d9dad359\" returns successfully" May 17 00:23:30.716481 containerd[1504]: time="2025-05-17T00:23:30.716449577Z" level=info msg="StartContainer for \"5ee9258fbbec14724b3337ae5d874c02c2264de7a62e8dc13e72ea0e090ab9b5\" returns successfully" May 17 00:23:30.741886 kubelet[2331]: W0517 00:23:30.741728 2331 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://37.27.204.183:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 37.27.204.183:6443: connect: connection refused May 17 00:23:30.741886 kubelet[2331]: E0517 00:23:30.741800 2331 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://37.27.204.183:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 37.27.204.183:6443: connect: connection refused" logger="UnhandledError" May 17 00:23:30.861828 kubelet[2331]: E0517 00:23:30.861783 2331 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://37.27.204.183:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-3-n-82e895e080?timeout=10s\": dial tcp 37.27.204.183:6443: connect: connection refused" interval="1.6s" May 17 00:23:30.898993 kubelet[2331]: W0517 00:23:30.898911 2331 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://37.27.204.183:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 37.27.204.183:6443: connect: connection refused May 17 00:23:30.899174 kubelet[2331]: E0517 00:23:30.899002 2331 reflector.go:166] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://37.27.204.183:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 37.27.204.183:6443: connect: connection refused" logger="UnhandledError" May 17 00:23:31.037967 kubelet[2331]: I0517 00:23:31.037290 2331 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-3-n-82e895e080" May 17 00:23:31.493892 kubelet[2331]: E0517 00:23:31.493637 2331 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-3-n-82e895e080\" not found" node="ci-4081-3-3-n-82e895e080" May 17 00:23:31.495394 kubelet[2331]: E0517 00:23:31.494443 2331 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-3-n-82e895e080\" not found" node="ci-4081-3-3-n-82e895e080" May 17 00:23:31.499962 kubelet[2331]: E0517 00:23:31.499942 2331 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-3-n-82e895e080\" not found" node="ci-4081-3-3-n-82e895e080" May 17 00:23:32.049485 kubelet[2331]: I0517 00:23:32.049428 2331 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-3-n-82e895e080" May 17 00:23:32.052800 kubelet[2331]: I0517 00:23:32.052451 2331 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-3-n-82e895e080" May 17 00:23:32.107763 kubelet[2331]: E0517 00:23:32.107711 2331 kubelet.go:3196] "Failed creating a mirror pod" err="namespaces \"kube-system\" not found" pod="kube-system/kube-apiserver-ci-4081-3-3-n-82e895e080" May 17 00:23:32.107763 kubelet[2331]: I0517 00:23:32.107769 2331 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-3-n-82e895e080" May 17 00:23:32.163985 kubelet[2331]: E0517 00:23:32.163708 2331 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081-3-3-n-82e895e080\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081-3-3-n-82e895e080" May 17 00:23:32.163985 kubelet[2331]: I0517 00:23:32.163749 2331 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-3-n-82e895e080" May 17 00:23:32.165436 kubelet[2331]: E0517 00:23:32.165404 2331 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-3-n-82e895e080\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081-3-3-n-82e895e080" May 17 00:23:32.437701 kubelet[2331]: I0517 00:23:32.437650 2331 apiserver.go:52] "Watching apiserver" May 17 00:23:32.454411 kubelet[2331]: I0517 00:23:32.454349 2331 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" May 17 00:23:32.498829 kubelet[2331]: I0517 00:23:32.498449 2331 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-3-n-82e895e080" May 17 00:23:32.498829 kubelet[2331]: I0517 00:23:32.498469 2331 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-3-n-82e895e080" May 17 00:23:32.498829 kubelet[2331]: I0517 00:23:32.498683 2331 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-3-n-82e895e080" May 17 00:23:32.501220 
kubelet[2331]: E0517 00:23:32.500877 2331 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-3-n-82e895e080\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081-3-3-n-82e895e080" May 17 00:23:32.501220 kubelet[2331]: E0517 00:23:32.501071 2331 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081-3-3-n-82e895e080\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081-3-3-n-82e895e080" May 17 00:23:32.501660 kubelet[2331]: E0517 00:23:32.501535 2331 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-3-n-82e895e080\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081-3-3-n-82e895e080" May 17 00:23:33.837882 systemd[1]: Reloading requested from client PID 2603 ('systemctl') (unit session-7.scope)... May 17 00:23:33.837901 systemd[1]: Reloading... May 17 00:23:33.915352 zram_generator::config[2646]: No configuration found. May 17 00:23:34.004918 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:23:34.074575 systemd[1]: Reloading finished in 236 ms. May 17 00:23:34.108267 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:23:34.128961 systemd[1]: kubelet.service: Deactivated successfully. May 17 00:23:34.129135 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:23:34.135605 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:23:34.247111 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:23:34.259723 (kubelet)[2693]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 17 00:23:34.304238 kubelet[2693]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 00:23:34.304238 kubelet[2693]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 17 00:23:34.304238 kubelet[2693]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
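The repeated "no PriorityClass with name system-node-critical was found" rejections above are a normal bootstrap race: the apiserver's bootstrap controller creates the built-in system-node-critical and system-cluster-critical classes once it is serving, and the kubelet keeps retrying the mirror pods until then (the retries succeed further down). A minimal client-go sketch of the same check, for anyone replaying this log; the kubeconfig path is an assumption, not something this log records:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed admin kubeconfig location; adjust for the node at hand.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Mirror pods for the static control-plane pods are forbidden until this
	// built-in class exists.
	pc, err := cs.SchedulingV1().PriorityClasses().Get(
		context.TODO(), "system-node-critical", metav1.GetOptions{})
	if err != nil {
		fmt.Println("not created yet:", err)
		return
	}
	fmt.Printf("%s exists with value %d\n", pc.Name, pc.Value)
}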
May 17 00:23:34.304601 kubelet[2693]: I0517 00:23:34.304305 2693 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 17 00:23:34.310267 kubelet[2693]: I0517 00:23:34.310231 2693 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" May 17 00:23:34.312259 kubelet[2693]: I0517 00:23:34.310356 2693 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 17 00:23:34.312259 kubelet[2693]: I0517 00:23:34.310562 2693 server.go:954] "Client rotation is on, will bootstrap in background" May 17 00:23:34.312805 kubelet[2693]: I0517 00:23:34.312780 2693 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 17 00:23:34.319070 kubelet[2693]: I0517 00:23:34.319035 2693 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 17 00:23:34.328517 kubelet[2693]: E0517 00:23:34.328462 2693 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 17 00:23:34.328517 kubelet[2693]: I0517 00:23:34.328495 2693 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 17 00:23:34.331125 kubelet[2693]: I0517 00:23:34.331091 2693 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 17 00:23:34.332000 kubelet[2693]: I0517 00:23:34.331950 2693 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 17 00:23:34.332182 kubelet[2693]: I0517 00:23:34.331986 2693 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-3-n-82e895e080","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 17 00:23:34.332182 kubelet[2693]: I0517 00:23:34.332175 2693 
topology_manager.go:138] "Creating topology manager with none policy" May 17 00:23:34.332182 kubelet[2693]: I0517 00:23:34.332183 2693 container_manager_linux.go:304] "Creating device plugin manager" May 17 00:23:34.332366 kubelet[2693]: I0517 00:23:34.332224 2693 state_mem.go:36] "Initialized new in-memory state store" May 17 00:23:34.332427 kubelet[2693]: I0517 00:23:34.332386 2693 kubelet.go:446] "Attempting to sync node with API server" May 17 00:23:34.332427 kubelet[2693]: I0517 00:23:34.332413 2693 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 17 00:23:34.333366 kubelet[2693]: I0517 00:23:34.332431 2693 kubelet.go:352] "Adding apiserver pod source" May 17 00:23:34.333366 kubelet[2693]: I0517 00:23:34.332441 2693 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 17 00:23:34.335627 kubelet[2693]: I0517 00:23:34.335607 2693 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 17 00:23:34.335987 kubelet[2693]: I0517 00:23:34.335968 2693 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 17 00:23:34.336394 kubelet[2693]: I0517 00:23:34.336371 2693 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 17 00:23:34.336444 kubelet[2693]: I0517 00:23:34.336406 2693 server.go:1287] "Started kubelet" May 17 00:23:34.348404 kubelet[2693]: I0517 00:23:34.348375 2693 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 17 00:23:34.353430 kubelet[2693]: I0517 00:23:34.353362 2693 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 17 00:23:34.354166 kubelet[2693]: I0517 00:23:34.354147 2693 server.go:479] "Adding debug handlers to kubelet server" May 17 00:23:34.355425 kubelet[2693]: E0517 00:23:34.355381 2693 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-3-n-82e895e080\" not found" May 17 00:23:34.355750 kubelet[2693]: I0517 00:23:34.355733 2693 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 17 00:23:34.357269 kubelet[2693]: I0517 00:23:34.357204 2693 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 17 00:23:34.357435 kubelet[2693]: I0517 00:23:34.357416 2693 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 17 00:23:34.357747 kubelet[2693]: I0517 00:23:34.357736 2693 volume_manager.go:297] "Starting Kubelet Volume Manager" May 17 00:23:34.358123 kubelet[2693]: I0517 00:23:34.358109 2693 desired_state_of_world_populator.go:150] "Desired state populator starts to run" May 17 00:23:34.359464 kubelet[2693]: I0517 00:23:34.359403 2693 reconciler.go:26] "Reconciler: start to sync state" May 17 00:23:34.367277 kubelet[2693]: I0517 00:23:34.366872 2693 factory.go:221] Registration of the systemd container factory successfully May 17 00:23:34.368738 kubelet[2693]: I0517 00:23:34.367689 2693 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 17 00:23:34.369190 kubelet[2693]: E0517 00:23:34.369162 2693 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 17 00:23:34.370692 kubelet[2693]: I0517 00:23:34.370667 2693 factory.go:221] Registration of the containerd container factory successfully May 17 00:23:34.372005 kubelet[2693]: I0517 00:23:34.371928 2693 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 17 00:23:34.373550 kubelet[2693]: I0517 00:23:34.373531 2693 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 17 00:23:34.373863 kubelet[2693]: I0517 00:23:34.373645 2693 status_manager.go:227] "Starting to sync pod status with apiserver" May 17 00:23:34.373863 kubelet[2693]: I0517 00:23:34.373675 2693 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." May 17 00:23:34.373863 kubelet[2693]: I0517 00:23:34.373684 2693 kubelet.go:2382] "Starting kubelet main sync loop" May 17 00:23:34.373863 kubelet[2693]: E0517 00:23:34.373731 2693 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 17 00:23:34.417205 kubelet[2693]: I0517 00:23:34.417161 2693 cpu_manager.go:221] "Starting CPU manager" policy="none" May 17 00:23:34.417205 kubelet[2693]: I0517 00:23:34.417179 2693 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 17 00:23:34.417205 kubelet[2693]: I0517 00:23:34.417195 2693 state_mem.go:36] "Initialized new in-memory state store" May 17 00:23:34.417481 kubelet[2693]: I0517 00:23:34.417369 2693 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 17 00:23:34.417481 kubelet[2693]: I0517 00:23:34.417380 2693 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 17 00:23:34.417481 kubelet[2693]: I0517 00:23:34.417398 2693 policy_none.go:49] "None policy: Start" May 17 00:23:34.417481 kubelet[2693]: I0517 00:23:34.417407 2693 memory_manager.go:186] "Starting memorymanager" policy="None" May 17 00:23:34.417481 kubelet[2693]: I0517 00:23:34.417415 2693 state_mem.go:35] "Initializing new in-memory state store" May 17 00:23:34.417618 kubelet[2693]: I0517 00:23:34.417495 2693 state_mem.go:75] "Updated machine memory state" May 17 00:23:34.422790 kubelet[2693]: I0517 00:23:34.422589 2693 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 17 00:23:34.422790 kubelet[2693]: I0517 00:23:34.422777 2693 eviction_manager.go:189] "Eviction manager: starting control loop" May 17 00:23:34.423184 kubelet[2693]: I0517 00:23:34.422789 2693 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 17 00:23:34.423184 kubelet[2693]: I0517 00:23:34.423028 2693 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 17 00:23:34.424584 kubelet[2693]: E0517 00:23:34.424570 2693 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" May 17 00:23:34.476608 kubelet[2693]: I0517 00:23:34.476301 2693 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-3-n-82e895e080" May 17 00:23:34.476608 kubelet[2693]: I0517 00:23:34.476374 2693 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-3-n-82e895e080" May 17 00:23:34.476608 kubelet[2693]: I0517 00:23:34.476301 2693 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-3-n-82e895e080" May 17 00:23:34.527685 kubelet[2693]: I0517 00:23:34.527500 2693 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-3-n-82e895e080" May 17 00:23:34.534865 kubelet[2693]: I0517 00:23:34.534829 2693 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081-3-3-n-82e895e080" May 17 00:23:34.535075 kubelet[2693]: I0517 00:23:34.534926 2693 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-3-n-82e895e080" May 17 00:23:34.561203 kubelet[2693]: I0517 00:23:34.561135 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6e72c3389c76372bf2bedee926a923a6-ca-certs\") pod \"kube-apiserver-ci-4081-3-3-n-82e895e080\" (UID: \"6e72c3389c76372bf2bedee926a923a6\") " pod="kube-system/kube-apiserver-ci-4081-3-3-n-82e895e080" May 17 00:23:34.561203 kubelet[2693]: I0517 00:23:34.561186 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6e72c3389c76372bf2bedee926a923a6-k8s-certs\") pod \"kube-apiserver-ci-4081-3-3-n-82e895e080\" (UID: \"6e72c3389c76372bf2bedee926a923a6\") " pod="kube-system/kube-apiserver-ci-4081-3-3-n-82e895e080" May 17 00:23:34.561203 kubelet[2693]: I0517 00:23:34.561210 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6e72c3389c76372bf2bedee926a923a6-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-3-n-82e895e080\" (UID: \"6e72c3389c76372bf2bedee926a923a6\") " pod="kube-system/kube-apiserver-ci-4081-3-3-n-82e895e080" May 17 00:23:34.561462 kubelet[2693]: I0517 00:23:34.561233 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f3f7f4776c98bacf4e57c95582b64d30-ca-certs\") pod \"kube-controller-manager-ci-4081-3-3-n-82e895e080\" (UID: \"f3f7f4776c98bacf4e57c95582b64d30\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-82e895e080" May 17 00:23:34.561462 kubelet[2693]: I0517 00:23:34.561288 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f3f7f4776c98bacf4e57c95582b64d30-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-3-n-82e895e080\" (UID: \"f3f7f4776c98bacf4e57c95582b64d30\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-82e895e080" May 17 00:23:34.561462 kubelet[2693]: I0517 00:23:34.561308 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f3f7f4776c98bacf4e57c95582b64d30-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-3-n-82e895e080\" (UID: \"f3f7f4776c98bacf4e57c95582b64d30\") " 
pod="kube-system/kube-controller-manager-ci-4081-3-3-n-82e895e080" May 17 00:23:34.561462 kubelet[2693]: I0517 00:23:34.561329 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f3f7f4776c98bacf4e57c95582b64d30-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-3-n-82e895e080\" (UID: \"f3f7f4776c98bacf4e57c95582b64d30\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-82e895e080" May 17 00:23:34.561462 kubelet[2693]: I0517 00:23:34.561350 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f3f7f4776c98bacf4e57c95582b64d30-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-3-n-82e895e080\" (UID: \"f3f7f4776c98bacf4e57c95582b64d30\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-82e895e080" May 17 00:23:34.561611 kubelet[2693]: I0517 00:23:34.561373 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/81052a80b17c95356767e10e54687d5d-kubeconfig\") pod \"kube-scheduler-ci-4081-3-3-n-82e895e080\" (UID: \"81052a80b17c95356767e10e54687d5d\") " pod="kube-system/kube-scheduler-ci-4081-3-3-n-82e895e080" May 17 00:23:35.333584 kubelet[2693]: I0517 00:23:35.333528 2693 apiserver.go:52] "Watching apiserver" May 17 00:23:35.360092 kubelet[2693]: I0517 00:23:35.360027 2693 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" May 17 00:23:35.397418 kubelet[2693]: I0517 00:23:35.397074 2693 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-3-n-82e895e080" May 17 00:23:35.397668 kubelet[2693]: I0517 00:23:35.397478 2693 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-3-n-82e895e080" May 17 00:23:35.405559 kubelet[2693]: E0517 00:23:35.404586 2693 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081-3-3-n-82e895e080\" already exists" pod="kube-system/kube-controller-manager-ci-4081-3-3-n-82e895e080" May 17 00:23:35.405832 kubelet[2693]: E0517 00:23:35.405590 2693 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-3-n-82e895e080\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-3-n-82e895e080" May 17 00:23:35.422960 kubelet[2693]: I0517 00:23:35.422684 2693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-3-n-82e895e080" podStartSLOduration=1.422663064 podStartE2EDuration="1.422663064s" podCreationTimestamp="2025-05-17 00:23:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:23:35.422634722 +0000 UTC m=+1.156186615" watchObservedRunningTime="2025-05-17 00:23:35.422663064 +0000 UTC m=+1.156214957" May 17 00:23:35.429932 kubelet[2693]: I0517 00:23:35.429855 2693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-3-n-82e895e080" podStartSLOduration=1.429839232 podStartE2EDuration="1.429839232s" podCreationTimestamp="2025-05-17 00:23:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:23:35.42957305 +0000 
UTC m=+1.163124973" watchObservedRunningTime="2025-05-17 00:23:35.429839232 +0000 UTC m=+1.163391125" May 17 00:23:35.446144 kubelet[2693]: I0517 00:23:35.446051 2693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-3-n-82e895e080" podStartSLOduration=1.446032208 podStartE2EDuration="1.446032208s" podCreationTimestamp="2025-05-17 00:23:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:23:35.43759171 +0000 UTC m=+1.171143624" watchObservedRunningTime="2025-05-17 00:23:35.446032208 +0000 UTC m=+1.179584101" May 17 00:23:40.632355 kubelet[2693]: I0517 00:23:40.632309 2693 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 17 00:23:40.632720 containerd[1504]: time="2025-05-17T00:23:40.632696107Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 17 00:23:40.632926 kubelet[2693]: I0517 00:23:40.632841 2693 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 17 00:23:41.542567 systemd[1]: Created slice kubepods-besteffort-pod031f873f_0738_4b86_b5d4_1231c15b687d.slice - libcontainer container kubepods-besteffort-pod031f873f_0738_4b86_b5d4_1231c15b687d.slice. May 17 00:23:41.604464 kubelet[2693]: I0517 00:23:41.604266 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/031f873f-0738-4b86-b5d4-1231c15b687d-kube-proxy\") pod \"kube-proxy-5gt7j\" (UID: \"031f873f-0738-4b86-b5d4-1231c15b687d\") " pod="kube-system/kube-proxy-5gt7j" May 17 00:23:41.604464 kubelet[2693]: I0517 00:23:41.604329 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/031f873f-0738-4b86-b5d4-1231c15b687d-lib-modules\") pod \"kube-proxy-5gt7j\" (UID: \"031f873f-0738-4b86-b5d4-1231c15b687d\") " pod="kube-system/kube-proxy-5gt7j" May 17 00:23:41.604464 kubelet[2693]: I0517 00:23:41.604350 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrmpt\" (UniqueName: \"kubernetes.io/projected/031f873f-0738-4b86-b5d4-1231c15b687d-kube-api-access-xrmpt\") pod \"kube-proxy-5gt7j\" (UID: \"031f873f-0738-4b86-b5d4-1231c15b687d\") " pod="kube-system/kube-proxy-5gt7j" May 17 00:23:41.604464 kubelet[2693]: I0517 00:23:41.604370 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/031f873f-0738-4b86-b5d4-1231c15b687d-xtables-lock\") pod \"kube-proxy-5gt7j\" (UID: \"031f873f-0738-4b86-b5d4-1231c15b687d\") " pod="kube-system/kube-proxy-5gt7j" May 17 00:23:41.815211 systemd[1]: Created slice kubepods-besteffort-pod58205410_49e6_41d6_88f2_f22514fb1303.slice - libcontainer container kubepods-besteffort-pod58205410_49e6_41d6_88f2_f22514fb1303.slice. May 17 00:23:41.849512 containerd[1504]: time="2025-05-17T00:23:41.849425323Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5gt7j,Uid:031f873f-0738-4b86-b5d4-1231c15b687d,Namespace:kube-system,Attempt:0,}" May 17 00:23:41.873990 containerd[1504]: time="2025-05-17T00:23:41.873910394Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:23:41.874210 containerd[1504]: time="2025-05-17T00:23:41.873964485Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:23:41.874371 containerd[1504]: time="2025-05-17T00:23:41.874193297Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:23:41.874519 containerd[1504]: time="2025-05-17T00:23:41.874471150Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:23:41.896527 systemd[1]: Started cri-containerd-5c82768e2ab825d1f35e37e2da7d5d74de1c992fc492531d49bf41b798064863.scope - libcontainer container 5c82768e2ab825d1f35e37e2da7d5d74de1c992fc492531d49bf41b798064863. May 17 00:23:41.906708 kubelet[2693]: I0517 00:23:41.906642 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpmpb\" (UniqueName: \"kubernetes.io/projected/58205410-49e6-41d6-88f2-f22514fb1303-kube-api-access-kpmpb\") pod \"tigera-operator-844669ff44-6st99\" (UID: \"58205410-49e6-41d6-88f2-f22514fb1303\") " pod="tigera-operator/tigera-operator-844669ff44-6st99" May 17 00:23:41.906708 kubelet[2693]: I0517 00:23:41.906705 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/58205410-49e6-41d6-88f2-f22514fb1303-var-lib-calico\") pod \"tigera-operator-844669ff44-6st99\" (UID: \"58205410-49e6-41d6-88f2-f22514fb1303\") " pod="tigera-operator/tigera-operator-844669ff44-6st99" May 17 00:23:41.919216 containerd[1504]: time="2025-05-17T00:23:41.919061849Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5gt7j,Uid:031f873f-0738-4b86-b5d4-1231c15b687d,Namespace:kube-system,Attempt:0,} returns sandbox id \"5c82768e2ab825d1f35e37e2da7d5d74de1c992fc492531d49bf41b798064863\"" May 17 00:23:41.922980 containerd[1504]: time="2025-05-17T00:23:41.922924418Z" level=info msg="CreateContainer within sandbox \"5c82768e2ab825d1f35e37e2da7d5d74de1c992fc492531d49bf41b798064863\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 17 00:23:41.940869 containerd[1504]: time="2025-05-17T00:23:41.940750262Z" level=info msg="CreateContainer within sandbox \"5c82768e2ab825d1f35e37e2da7d5d74de1c992fc492531d49bf41b798064863\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"1f3d8919f1e3228a6037c95da54ed979ac980fe90f91ce27ac3a64d1de94f662\"" May 17 00:23:41.942755 containerd[1504]: time="2025-05-17T00:23:41.942680520Z" level=info msg="StartContainer for \"1f3d8919f1e3228a6037c95da54ed979ac980fe90f91ce27ac3a64d1de94f662\"" May 17 00:23:41.967393 systemd[1]: Started cri-containerd-1f3d8919f1e3228a6037c95da54ed979ac980fe90f91ce27ac3a64d1de94f662.scope - libcontainer container 1f3d8919f1e3228a6037c95da54ed979ac980fe90f91ce27ac3a64d1de94f662. 
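The kube-proxy entries above walk through the standard CRI call order: RunPodSandbox returns a sandbox id (5c82768e…), CreateContainer registers a container inside that sandbox, and StartContainer runs it, with systemd tracking each as a cri-containerd-*.scope unit. A compressed Go sketch of that sequence against containerd's default socket; the uid and image tag are illustrative placeholders, and a real caller (the kubelet) also fills in log paths, linux options, and mounts:

package main

import (
	"context"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx := context.Background()
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)

	sandboxCfg := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name:      "kube-proxy-5gt7j",
			Namespace: "kube-system",
			Uid:       "demo-uid", // placeholder; the kubelet uses the pod UID
		},
	}
	// 1. RunPodSandbox → sandbox id
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
	if err != nil {
		panic(err)
	}
	// 2. CreateContainer within that sandbox → container id
	c, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "kube-proxy"},
			// Illustrative tag matching the kubelet version seen in this log.
			Image: &runtimeapi.ImageSpec{Image: "registry.k8s.io/kube-proxy:v1.32.4"},
		},
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		panic(err)
	}
	// 3. StartContainer
	if _, err := rt.StartContainer(ctx,
		&runtimeapi.StartContainerRequest{ContainerId: c.ContainerId}); err != nil {
		panic(err)
	}
}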
May 17 00:23:41.993777 containerd[1504]: time="2025-05-17T00:23:41.993725917Z" level=info msg="StartContainer for \"1f3d8919f1e3228a6037c95da54ed979ac980fe90f91ce27ac3a64d1de94f662\" returns successfully" May 17 00:23:42.120199 containerd[1504]: time="2025-05-17T00:23:42.120135574Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-844669ff44-6st99,Uid:58205410-49e6-41d6-88f2-f22514fb1303,Namespace:tigera-operator,Attempt:0,}" May 17 00:23:42.146860 containerd[1504]: time="2025-05-17T00:23:42.146568183Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:23:42.146860 containerd[1504]: time="2025-05-17T00:23:42.146647038Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:23:42.146860 containerd[1504]: time="2025-05-17T00:23:42.146657408Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:23:42.146860 containerd[1504]: time="2025-05-17T00:23:42.146727337Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:23:42.161393 systemd[1]: Started cri-containerd-cf441aaaa4351fed2c48748dc05d9d6f319ddb4fe8b14351f001aef6f1b261b7.scope - libcontainer container cf441aaaa4351fed2c48748dc05d9d6f319ddb4fe8b14351f001aef6f1b261b7. May 17 00:23:42.207963 containerd[1504]: time="2025-05-17T00:23:42.207917699Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-844669ff44-6st99,Uid:58205410-49e6-41d6-88f2-f22514fb1303,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"cf441aaaa4351fed2c48748dc05d9d6f319ddb4fe8b14351f001aef6f1b261b7\"" May 17 00:23:42.210252 containerd[1504]: time="2025-05-17T00:23:42.209521514Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.0\"" May 17 00:23:42.423652 kubelet[2693]: I0517 00:23:42.423509 2693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-5gt7j" podStartSLOduration=1.423493555 podStartE2EDuration="1.423493555s" podCreationTimestamp="2025-05-17 00:23:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:23:42.423443262 +0000 UTC m=+8.156995165" watchObservedRunningTime="2025-05-17 00:23:42.423493555 +0000 UTC m=+8.157045438" May 17 00:23:42.717357 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount603946849.mount: Deactivated successfully. May 17 00:23:43.975876 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1478977206.mount: Deactivated successfully. 
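The PullImage request above goes through the CRI ImageService rather than the RuntimeService used for sandboxes and containers. A companion sketch for the same tigera-operator tag; no registry credentials are attached, which matches an anonymous quay.io pull, and the socket path is the containerd default as before:

package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	img := runtimeapi.NewImageServiceClient(conn)

	// Same reference the kubelet is pulling in the entry above.
	resp, err := img.PullImage(context.Background(), &runtimeapi.PullImageRequest{
		Image: &runtimeapi.ImageSpec{Image: "quay.io/tigera/operator:v1.38.0"},
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("pulled:", resp.ImageRef) // digest-form reference on success
}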
May 17 00:23:44.372262 containerd[1504]: time="2025-05-17T00:23:44.372174746Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:23:44.373078 containerd[1504]: time="2025-05-17T00:23:44.372901872Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.0: active requests=0, bytes read=25055451" May 17 00:23:44.373772 containerd[1504]: time="2025-05-17T00:23:44.373709436Z" level=info msg="ImageCreate event name:\"sha256:5e43c1322619406528ff596056dfeb70cb8d20c5c00439feb752a7725302e033\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:23:44.377147 containerd[1504]: time="2025-05-17T00:23:44.376260175Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.0\" with image id \"sha256:5e43c1322619406528ff596056dfeb70cb8d20c5c00439feb752a7725302e033\", repo tag \"quay.io/tigera/operator:v1.38.0\", repo digest \"quay.io/tigera/operator@sha256:e0a34b265aebce1a2db906d8dad99190706e8bf3910cae626b9c2eb6bbb21775\", size \"25051446\" in 2.166698096s" May 17 00:23:44.377147 containerd[1504]: time="2025-05-17T00:23:44.376296232Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.0\" returns image reference \"sha256:5e43c1322619406528ff596056dfeb70cb8d20c5c00439feb752a7725302e033\"" May 17 00:23:44.377147 containerd[1504]: time="2025-05-17T00:23:44.376671025Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:e0a34b265aebce1a2db906d8dad99190706e8bf3910cae626b9c2eb6bbb21775\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:23:44.382768 containerd[1504]: time="2025-05-17T00:23:44.382729024Z" level=info msg="CreateContainer within sandbox \"cf441aaaa4351fed2c48748dc05d9d6f319ddb4fe8b14351f001aef6f1b261b7\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" May 17 00:23:44.404279 containerd[1504]: time="2025-05-17T00:23:44.404213526Z" level=info msg="CreateContainer within sandbox \"cf441aaaa4351fed2c48748dc05d9d6f319ddb4fe8b14351f001aef6f1b261b7\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"19a635ec77fddfeb38c578db7f113cb8477149c7ea14408d03dcb76d6f6ec060\"" May 17 00:23:44.405401 containerd[1504]: time="2025-05-17T00:23:44.405329520Z" level=info msg="StartContainer for \"19a635ec77fddfeb38c578db7f113cb8477149c7ea14408d03dcb76d6f6ec060\"" May 17 00:23:44.433410 systemd[1]: Started cri-containerd-19a635ec77fddfeb38c578db7f113cb8477149c7ea14408d03dcb76d6f6ec060.scope - libcontainer container 19a635ec77fddfeb38c578db7f113cb8477149c7ea14408d03dcb76d6f6ec060. 
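The pull-completion entry above reports both the image size ("25051446" bytes) and the wall-clock pull time ("2.166698096s"), which is enough for a back-of-envelope transfer rate; the pod_startup_latency_tracker entries that follow subtract exactly this pull window from the end-to-end startup time (podStartSLOduration = podStartE2EDuration − pull window). A small sketch of that arithmetic, with all figures copied from the surrounding entries:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Pull stats from the containerd entry above.
	pull, _ := time.ParseDuration("2.166698096s")
	const imageBytes = 25051446 // reported size; approximates bytes transferred
	fmt.Printf("~%.1f MiB/s effective pull rate\n",
		float64(imageBytes)/(1<<20)/pull.Seconds()) // ≈ 11.0 MiB/s

	// Pull window and E2E duration from the tigera-operator latency entry below.
	started, _ := time.Parse(time.RFC3339Nano, "2025-05-17T00:23:42.209054209Z")
	finished, _ := time.Parse(time.RFC3339Nano, "2025-05-17T00:23:44.380336366Z")
	e2e, _ := time.ParseDuration("6.124167847s")
	fmt.Println("SLO-counted startup:", e2e-finished.Sub(started)) // 3.95288569s
}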
May 17 00:23:44.464088 containerd[1504]: time="2025-05-17T00:23:44.464043204Z" level=info msg="StartContainer for \"19a635ec77fddfeb38c578db7f113cb8477149c7ea14408d03dcb76d6f6ec060\" returns successfully" May 17 00:23:47.127271 kubelet[2693]: I0517 00:23:47.124859 2693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-844669ff44-6st99" podStartSLOduration=3.95288569 podStartE2EDuration="6.124167847s" podCreationTimestamp="2025-05-17 00:23:41 +0000 UTC" firstStartedPulling="2025-05-17 00:23:42.209054209 +0000 UTC m=+7.942606093" lastFinishedPulling="2025-05-17 00:23:44.380336366 +0000 UTC m=+10.113888250" observedRunningTime="2025-05-17 00:23:45.435663289 +0000 UTC m=+11.169215182" watchObservedRunningTime="2025-05-17 00:23:47.124167847 +0000 UTC m=+12.857719730" May 17 00:23:50.029512 sudo[1862]: pam_unix(sudo:session): session closed for user root May 17 00:23:50.188072 sshd[1859]: pam_unix(sshd:session): session closed for user core May 17 00:23:50.191681 systemd[1]: sshd@6-37.27.204.183:22-139.178.89.65:46538.service: Deactivated successfully. May 17 00:23:50.194769 systemd[1]: session-7.scope: Deactivated successfully. May 17 00:23:50.195103 systemd[1]: session-7.scope: Consumed 3.609s CPU time, 142.2M memory peak, 0B memory swap peak. May 17 00:23:50.196841 systemd-logind[1483]: Session 7 logged out. Waiting for processes to exit. May 17 00:23:50.198738 systemd-logind[1483]: Removed session 7. May 17 00:23:52.795991 systemd[1]: Created slice kubepods-besteffort-pod00f2e325_093a_45e2_a2a2_1152fe50c018.slice - libcontainer container kubepods-besteffort-pod00f2e325_093a_45e2_a2a2_1152fe50c018.slice. May 17 00:23:52.886693 kubelet[2693]: I0517 00:23:52.886596 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/00f2e325-093a-45e2-a2a2-1152fe50c018-typha-certs\") pod \"calico-typha-8d98699c-sf2j6\" (UID: \"00f2e325-093a-45e2-a2a2-1152fe50c018\") " pod="calico-system/calico-typha-8d98699c-sf2j6" May 17 00:23:52.886693 kubelet[2693]: I0517 00:23:52.886638 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6p978\" (UniqueName: \"kubernetes.io/projected/00f2e325-093a-45e2-a2a2-1152fe50c018-kube-api-access-6p978\") pod \"calico-typha-8d98699c-sf2j6\" (UID: \"00f2e325-093a-45e2-a2a2-1152fe50c018\") " pod="calico-system/calico-typha-8d98699c-sf2j6" May 17 00:23:52.886693 kubelet[2693]: I0517 00:23:52.886659 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/00f2e325-093a-45e2-a2a2-1152fe50c018-tigera-ca-bundle\") pod \"calico-typha-8d98699c-sf2j6\" (UID: \"00f2e325-093a-45e2-a2a2-1152fe50c018\") " pod="calico-system/calico-typha-8d98699c-sf2j6" May 17 00:23:53.101658 systemd[1]: Created slice kubepods-besteffort-pod98ad4d68_e439_4901_b67d_44deaba6e629.slice - libcontainer container kubepods-besteffort-pod98ad4d68_e439_4901_b67d_44deaba6e629.slice. May 17 00:23:53.116299 containerd[1504]: time="2025-05-17T00:23:53.116231096Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-8d98699c-sf2j6,Uid:00f2e325-093a-45e2-a2a2-1152fe50c018,Namespace:calico-system,Attempt:0,}" May 17 00:23:53.144549 containerd[1504]: time="2025-05-17T00:23:53.143991576Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:23:53.144875 containerd[1504]: time="2025-05-17T00:23:53.144522912Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:23:53.144875 containerd[1504]: time="2025-05-17T00:23:53.144756264Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:23:53.145072 containerd[1504]: time="2025-05-17T00:23:53.144859466Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:23:53.188432 kubelet[2693]: I0517 00:23:53.188114 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/98ad4d68-e439-4901-b67d-44deaba6e629-flexvol-driver-host\") pod \"calico-node-shnsn\" (UID: \"98ad4d68-e439-4901-b67d-44deaba6e629\") " pod="calico-system/calico-node-shnsn" May 17 00:23:53.191281 kubelet[2693]: I0517 00:23:53.189915 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/98ad4d68-e439-4901-b67d-44deaba6e629-node-certs\") pod \"calico-node-shnsn\" (UID: \"98ad4d68-e439-4901-b67d-44deaba6e629\") " pod="calico-system/calico-node-shnsn" May 17 00:23:53.191281 kubelet[2693]: I0517 00:23:53.189961 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/98ad4d68-e439-4901-b67d-44deaba6e629-cni-log-dir\") pod \"calico-node-shnsn\" (UID: \"98ad4d68-e439-4901-b67d-44deaba6e629\") " pod="calico-system/calico-node-shnsn" May 17 00:23:53.191281 kubelet[2693]: I0517 00:23:53.189989 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/98ad4d68-e439-4901-b67d-44deaba6e629-var-lib-calico\") pod \"calico-node-shnsn\" (UID: \"98ad4d68-e439-4901-b67d-44deaba6e629\") " pod="calico-system/calico-node-shnsn" May 17 00:23:53.191281 kubelet[2693]: I0517 00:23:53.190010 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/98ad4d68-e439-4901-b67d-44deaba6e629-tigera-ca-bundle\") pod \"calico-node-shnsn\" (UID: \"98ad4d68-e439-4901-b67d-44deaba6e629\") " pod="calico-system/calico-node-shnsn" May 17 00:23:53.191281 kubelet[2693]: I0517 00:23:53.190035 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/98ad4d68-e439-4901-b67d-44deaba6e629-xtables-lock\") pod \"calico-node-shnsn\" (UID: \"98ad4d68-e439-4901-b67d-44deaba6e629\") " pod="calico-system/calico-node-shnsn" May 17 00:23:53.191469 kubelet[2693]: I0517 00:23:53.190060 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/98ad4d68-e439-4901-b67d-44deaba6e629-cni-bin-dir\") pod \"calico-node-shnsn\" (UID: \"98ad4d68-e439-4901-b67d-44deaba6e629\") " pod="calico-system/calico-node-shnsn" May 17 00:23:53.191469 kubelet[2693]: I0517 00:23:53.190082 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/98ad4d68-e439-4901-b67d-44deaba6e629-var-run-calico\") pod \"calico-node-shnsn\" (UID: \"98ad4d68-e439-4901-b67d-44deaba6e629\") " pod="calico-system/calico-node-shnsn" May 17 00:23:53.191469 kubelet[2693]: I0517 00:23:53.190102 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdjww\" (UniqueName: \"kubernetes.io/projected/98ad4d68-e439-4901-b67d-44deaba6e629-kube-api-access-gdjww\") pod \"calico-node-shnsn\" (UID: \"98ad4d68-e439-4901-b67d-44deaba6e629\") " pod="calico-system/calico-node-shnsn" May 17 00:23:53.191469 kubelet[2693]: I0517 00:23:53.190122 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/98ad4d68-e439-4901-b67d-44deaba6e629-lib-modules\") pod \"calico-node-shnsn\" (UID: \"98ad4d68-e439-4901-b67d-44deaba6e629\") " pod="calico-system/calico-node-shnsn" May 17 00:23:53.191469 kubelet[2693]: I0517 00:23:53.190140 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/98ad4d68-e439-4901-b67d-44deaba6e629-policysync\") pod \"calico-node-shnsn\" (UID: \"98ad4d68-e439-4901-b67d-44deaba6e629\") " pod="calico-system/calico-node-shnsn" May 17 00:23:53.191552 kubelet[2693]: I0517 00:23:53.190165 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/98ad4d68-e439-4901-b67d-44deaba6e629-cni-net-dir\") pod \"calico-node-shnsn\" (UID: \"98ad4d68-e439-4901-b67d-44deaba6e629\") " pod="calico-system/calico-node-shnsn" May 17 00:23:53.209678 systemd[1]: Started cri-containerd-d96df9ae5b1d1169e6b5803ab4602328a814cbf035c97fda9b0df40e87264fd9.scope - libcontainer container d96df9ae5b1d1169e6b5803ab4602328a814cbf035c97fda9b0df40e87264fd9. May 17 00:23:53.250316 containerd[1504]: time="2025-05-17T00:23:53.250205587Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-8d98699c-sf2j6,Uid:00f2e325-093a-45e2-a2a2-1152fe50c018,Namespace:calico-system,Attempt:0,} returns sandbox id \"d96df9ae5b1d1169e6b5803ab4602328a814cbf035c97fda9b0df40e87264fd9\"" May 17 00:23:53.267214 containerd[1504]: time="2025-05-17T00:23:53.267174252Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.0\"" May 17 00:23:53.302257 kubelet[2693]: E0517 00:23:53.301903 2693 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:53.302257 kubelet[2693]: W0517 00:23:53.301931 2693 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:53.302520 kubelet[2693]: E0517 00:23:53.302489 2693 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:23:53.337383 kubelet[2693]: E0517 00:23:53.336927 2693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ht64s" podUID="79f08894-50f6-4f34-ab06-f713767f2567" May 17 00:23:53.368200 kubelet[2693]: E0517 00:23:53.368069 2693 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:53.368200 kubelet[2693]: W0517 00:23:53.368123 2693 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:53.368200 kubelet[2693]: E0517 00:23:53.368146 2693 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:23:53.368797 kubelet[2693]: E0517 00:23:53.368327 2693 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:53.368797 kubelet[2693]: W0517 00:23:53.368348 2693 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:53.368797 kubelet[2693]: E0517 00:23:53.368356 2693 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:23:53.369179 kubelet[2693]: E0517 00:23:53.369165 2693 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:53.369179 kubelet[2693]: W0517 00:23:53.369177 2693 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:53.369410 kubelet[2693]: E0517 00:23:53.369186 2693 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:23:53.369443 kubelet[2693]: E0517 00:23:53.369434 2693 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:53.369584 kubelet[2693]: W0517 00:23:53.369442 2693 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:53.369584 kubelet[2693]: E0517 00:23:53.369450 2693 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:23:53.369897 kubelet[2693]: E0517 00:23:53.369860 2693 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:53.369952 kubelet[2693]: W0517 00:23:53.369899 2693 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:53.369952 kubelet[2693]: E0517 00:23:53.369908 2693 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:23:53.370419 kubelet[2693]: E0517 00:23:53.370106 2693 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:53.370419 kubelet[2693]: W0517 00:23:53.370113 2693 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:53.370419 kubelet[2693]: E0517 00:23:53.370121 2693 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:23:53.370419 kubelet[2693]: E0517 00:23:53.370317 2693 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:53.370419 kubelet[2693]: W0517 00:23:53.370357 2693 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:53.370419 kubelet[2693]: E0517 00:23:53.370366 2693 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:23:53.370518 kubelet[2693]: E0517 00:23:53.370508 2693 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:53.370518 kubelet[2693]: W0517 00:23:53.370516 2693 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:53.370552 kubelet[2693]: E0517 00:23:53.370523 2693 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:23:53.370711 kubelet[2693]: E0517 00:23:53.370667 2693 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:53.370711 kubelet[2693]: W0517 00:23:53.370678 2693 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:53.370711 kubelet[2693]: E0517 00:23:53.370686 2693 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
May 17 00:23:53.371402 kubelet[2693]: E0517 00:23:53.370857 2693 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 17 00:23:53.371402 kubelet[2693]: W0517 00:23:53.370865 2693 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 17 00:23:53.371402 kubelet[2693]: E0517 00:23:53.370871 2693 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 17 00:23:53.392125 kubelet[2693]: I0517 00:23:53.391929 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/79f08894-50f6-4f34-ab06-f713767f2567-varrun\") pod \"csi-node-driver-ht64s\" (UID: \"79f08894-50f6-4f34-ab06-f713767f2567\") " pod="calico-system/csi-node-driver-ht64s"
May 17 00:23:53.392200 kubelet[2693]: I0517 00:23:53.392126 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/79f08894-50f6-4f34-ab06-f713767f2567-kubelet-dir\") pod \"csi-node-driver-ht64s\" (UID: \"79f08894-50f6-4f34-ab06-f713767f2567\") " pod="calico-system/csi-node-driver-ht64s"
May 17 00:23:53.392529 kubelet[2693]: I0517 00:23:53.392372 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/79f08894-50f6-4f34-ab06-f713767f2567-registration-dir\") pod \"csi-node-driver-ht64s\" (UID: \"79f08894-50f6-4f34-ab06-f713767f2567\") " pod="calico-system/csi-node-driver-ht64s"
May 17 00:23:53.393285 kubelet[2693]: I0517 00:23:53.393258 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/79f08894-50f6-4f34-ab06-f713767f2567-socket-dir\") pod \"csi-node-driver-ht64s\" (UID: \"79f08894-50f6-4f34-ab06-f713767f2567\") " pod="calico-system/csi-node-driver-ht64s"
May 17 00:23:53.393522 kubelet[2693]: I0517 00:23:53.393463 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbpw8\" (UniqueName: \"kubernetes.io/projected/79f08894-50f6-4f34-ab06-f713767f2567-kube-api-access-wbpw8\") pod \"csi-node-driver-ht64s\" (UID: \"79f08894-50f6-4f34-ab06-f713767f2567\") " pod="calico-system/csi-node-driver-ht64s"
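The driver-call failures above all share one mechanism: the kubelet probes its FlexVolume plugin directory, tries to exec nodeagent~uds/uds with the init verb, gets empty output because the executable does not exist, and then fails to decode that empty output as JSON. A minimal sketch of the decoding step (plain Go, not kubelet source) reproduces the exact error string:

```go
// A minimal sketch reproducing the kubelet's unmarshal failure: when the
// FlexVolume driver binary is missing, exec yields empty output, and
// decoding "" with encoding/json fails with exactly this message.
package main

import (
	"encoding/json"
	"fmt"
)

// driverStatus mirrors the shape of a FlexVolume driver reply; the field
// names follow the documented FlexVolume contract ("status", "capabilities"),
// but this struct is illustrative, not kubelet source.
type driverStatus struct {
	Status       string          `json:"status"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	var st driverStatus
	// Empty output, as when the driver executable is not found in $PATH.
	err := json.Unmarshal([]byte(""), &st)
	fmt.Println(err) // prints: unexpected end of JSON input
}
```

Because plugin probing re-runs on volume events, the same three-line triplet recurs throughout this log for as long as the executable is missing.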
May 17 00:23:53.407275 containerd[1504]: time="2025-05-17T00:23:53.407216743Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-shnsn,Uid:98ad4d68-e439-4901-b67d-44deaba6e629,Namespace:calico-system,Attempt:0,}"
May 17 00:23:53.431066 containerd[1504]: time="2025-05-17T00:23:53.430975487Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 17 00:23:53.433476 containerd[1504]: time="2025-05-17T00:23:53.433282488Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 17 00:23:53.433476 containerd[1504]: time="2025-05-17T00:23:53.433300612Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:23:53.433476 containerd[1504]: time="2025-05-17T00:23:53.433394286Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:23:53.451398 systemd[1]: Started cri-containerd-2186c3b0d493fdcb833e738efe867bab62816ecb2773d0523b700c8601122b31.scope - libcontainer container 2186c3b0d493fdcb833e738efe867bab62816ecb2773d0523b700c8601122b31.
May 17 00:23:53.485425 containerd[1504]: time="2025-05-17T00:23:53.485235718Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-shnsn,Uid:98ad4d68-e439-4901-b67d-44deaba6e629,Namespace:calico-system,Attempt:0,} returns sandbox id \"2186c3b0d493fdcb833e738efe867bab62816ecb2773d0523b700c8601122b31\""
May 17 00:23:53.494387 kubelet[2693]: E0517 00:23:53.494149 2693 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 17 00:23:53.494387 kubelet[2693]: W0517 00:23:53.494173 2693 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 17 00:23:53.494387 kubelet[2693]: E0517 00:23:53.494199 2693 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
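For reference, a FlexVolume driver is just an executable that answers verbs such as init with a one-line JSON reply on stdout; the kubelet derives the expected binary name from the vendor~driver directory, which is why it looks for nodeagent~uds/uds. A minimal init handler, sketched in Go under the documented FlexVolume conventions (illustrative only, not Calico's actual agent), would look like:

```go
// A minimal sketch of a FlexVolume driver entry point, assuming the standard
// FlexVolume calling convention: the kubelet execs the driver with a verb
// ("init", "mount", ...) and parses a JSON reply from stdout. A binary like
// this installed as .../volume/exec/nodeagent~uds/uds would satisfy the probe
// failing above.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

type reply struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func emit(r reply) {
	b, _ := json.Marshal(r)
	fmt.Println(string(b))
}

func main() {
	if len(os.Args) < 2 {
		emit(reply{Status: "Failure", Message: "no command given"})
		os.Exit(1)
	}
	switch os.Args[1] {
	case "init":
		// "attach": false tells the kubelet this driver has no attach step.
		emit(reply{Status: "Success", Capabilities: map[string]bool{"attach": false}})
	default:
		// Unimplemented verbs report "Not supported" per the FlexVolume contract.
		emit(reply{Status: "Not supported"})
	}
}
```

With such an executable present under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/, the probe errors above would be expected to stop.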
May 17 00:23:55.112982 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3335335648.mount: Deactivated successfully.
May 17 00:23:55.377170 kubelet[2693]: E0517 00:23:55.375501 2693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ht64s" podUID="79f08894-50f6-4f34-ab06-f713767f2567"
May 17 00:23:56.190060 containerd[1504]: time="2025-05-17T00:23:56.189975415Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:23:56.191793 containerd[1504]: time="2025-05-17T00:23:56.191722750Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.0: active requests=0, bytes read=35158669"
May 17 00:23:56.193328 containerd[1504]: time="2025-05-17T00:23:56.193236861Z" level=info msg="ImageCreate event name:\"sha256:71be0570e8645ac646675719e0da6ac33a05810991b31aecc303e7add70933be\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:23:56.201705 containerd[1504]: time="2025-05-17T00:23:56.201391995Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.0\" with image id \"sha256:71be0570e8645ac646675719e0da6ac33a05810991b31aecc303e7add70933be\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d282f6c773c4631b9dc8379eb093c54ca34c7728d55d6509cb45da5e1f5baf8f\", size \"35158523\" in 2.933948072s"
May 17 00:23:56.201705 containerd[1504]: time="2025-05-17T00:23:56.201468277Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.0\" returns image reference \"sha256:71be0570e8645ac646675719e0da6ac33a05810991b31aecc303e7add70933be\""
May 17 00:23:56.202110 containerd[1504]: time="2025-05-17T00:23:56.202060066Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d282f6c773c4631b9dc8379eb093c54ca34c7728d55d6509cb45da5e1f5baf8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:23:56.205123 containerd[1504]: time="2025-05-17T00:23:56.205093669Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\""
May 17 00:23:56.243964 containerd[1504]: time="2025-05-17T00:23:56.243908546Z" level=info msg="CreateContainer within sandbox \"d96df9ae5b1d1169e6b5803ab4602328a814cbf035c97fda9b0df40e87264fd9\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
May 17 00:23:56.258228 containerd[1504]: time="2025-05-17T00:23:56.258158267Z" level=info msg="CreateContainer within sandbox \"d96df9ae5b1d1169e6b5803ab4602328a814cbf035c97fda9b0df40e87264fd9\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"207d36fbd209fe3e125ef3d40ae31c34972d7ffafe7abcce5884421a202bd6fa\""
May 17 00:23:56.262861 containerd[1504]: time="2025-05-17T00:23:56.259952410Z" level=info msg="StartContainer for \"207d36fbd209fe3e125ef3d40ae31c34972d7ffafe7abcce5884421a202bd6fa\""
May 17 00:23:56.300446 systemd[1]: Started cri-containerd-207d36fbd209fe3e125ef3d40ae31c34972d7ffafe7abcce5884421a202bd6fa.scope - libcontainer container 207d36fbd209fe3e125ef3d40ae31c34972d7ffafe7abcce5884421a202bd6fa.
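Two details worth pulling out of the block above. First, the typha pull moved 35,158,523 bytes in 2.933948072 s, just under 12 MB/s. Second, the csi-node-driver pod keeps failing to sync with "cni plugin not initialized" because no CNI network config exists yet; containerd flips NetworkReady to true once a config file appears in its CNI conf directory (/etc/cni/net.d is the conventional default, assumed here since conf_dir is configurable). A small diagnostic sketch:

```go
// A small diagnostic sketch for the "cni plugin not initialized" condition
// above: containerd marks the node network ready once a CNI config file
// exists. The directory below is the conventional default; adjust it if
// containerd's cri CNI conf_dir is set elsewhere.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	confDir := "/etc/cni/net.d"
	// Matches both *.conf and *.conflist files.
	matches, _ := filepath.Glob(filepath.Join(confDir, "*.conf*"))
	if len(matches) == 0 {
		fmt.Printf("no CNI config in %s yet; NetworkReady stays false\n", confDir)
		os.Exit(1)
	}
	for _, m := range matches {
		fmt.Println("found CNI config:", m)
	}
}
```

On this node the config is expected to appear once the calico-node pod (whose sandbox was just created) writes it, at which point the "network is not ready" errors cease.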
May 17 00:23:56.348696 containerd[1504]: time="2025-05-17T00:23:56.348628928Z" level=info msg="StartContainer for \"207d36fbd209fe3e125ef3d40ae31c34972d7ffafe7abcce5884421a202bd6fa\" returns successfully"
May 17 00:23:56.494923 kubelet[2693]: E0517 00:23:56.494770 2693 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 17 00:23:56.494923 kubelet[2693]: W0517 00:23:56.494815 2693 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 17 00:23:56.494923 kubelet[2693]: E0517 00:23:56.494843 2693 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 17 00:23:56.504698 kubelet[2693]: I0517 00:23:56.504635 2693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-8d98699c-sf2j6" podStartSLOduration=1.5654643190000002 podStartE2EDuration="4.504621029s" podCreationTimestamp="2025-05-17 00:23:52 +0000 UTC" firstStartedPulling="2025-05-17 00:23:53.265686082 +0000 UTC m=+18.999237965" lastFinishedPulling="2025-05-17 00:23:56.204842793 +0000 UTC m=+21.938394675" observedRunningTime="2025-05-17 00:23:56.504385291 +0000 UTC m=+22.237937175" watchObservedRunningTime="2025-05-17 00:23:56.504621029 +0000 UTC m=+22.238172911"
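The startup-latency record above is internally consistent and shows how the two durations relate: the end-to-end figure is observed running time minus pod creation time, while the SLO figure additionally excludes the image-pull window (the m=+ offsets are seconds since kubelet start):

$$
\begin{aligned}
t_{\text{e2e}}  &= 00{:}23{:}56.504621029 - 00{:}23{:}52 = 4.504621029\,\text{s},\\
t_{\text{pull}} &= 21.938394675 - 18.999237965 = 2.939156710\,\text{s},\\
t_{\text{SLO}}  &= 4.504621029 - 2.939156710 = 1.565464319\,\text{s},
\end{aligned}
$$

which matches podStartSLOduration=1.5654643190000002 up to floating-point noise.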
Error: unexpected end of JSON input" May 17 00:23:57.375025 kubelet[2693]: E0517 00:23:57.374959 2693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ht64s" podUID="79f08894-50f6-4f34-ab06-f713767f2567" May 17 00:23:57.454215 kubelet[2693]: I0517 00:23:57.454172 2693 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 00:23:57.505123 kubelet[2693]: E0517 00:23:57.505065 2693 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:57.505123 kubelet[2693]: W0517 00:23:57.505098 2693 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:57.505123 kubelet[2693]: E0517 00:23:57.505125 2693 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:23:57.505822 kubelet[2693]: E0517 00:23:57.505354 2693 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:57.505822 kubelet[2693]: W0517 00:23:57.505364 2693 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:57.505822 kubelet[2693]: E0517 00:23:57.505375 2693 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:23:57.505822 kubelet[2693]: E0517 00:23:57.505548 2693 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:57.505822 kubelet[2693]: W0517 00:23:57.505557 2693 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:57.505822 kubelet[2693]: E0517 00:23:57.505566 2693 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:23:57.505822 kubelet[2693]: E0517 00:23:57.505724 2693 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:57.505822 kubelet[2693]: W0517 00:23:57.505734 2693 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:57.505822 kubelet[2693]: E0517 00:23:57.505744 2693 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:23:57.506205 kubelet[2693]: E0517 00:23:57.505919 2693 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:57.506205 kubelet[2693]: W0517 00:23:57.505930 2693 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:57.506205 kubelet[2693]: E0517 00:23:57.505940 2693 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:23:57.506205 kubelet[2693]: E0517 00:23:57.506096 2693 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:57.506205 kubelet[2693]: W0517 00:23:57.506105 2693 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:57.506205 kubelet[2693]: E0517 00:23:57.506114 2693 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:23:57.506480 kubelet[2693]: E0517 00:23:57.506316 2693 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:57.506480 kubelet[2693]: W0517 00:23:57.506325 2693 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:57.506480 kubelet[2693]: E0517 00:23:57.506335 2693 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:23:57.506587 kubelet[2693]: E0517 00:23:57.506491 2693 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:57.506587 kubelet[2693]: W0517 00:23:57.506499 2693 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:57.506587 kubelet[2693]: E0517 00:23:57.506507 2693 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:23:57.506692 kubelet[2693]: E0517 00:23:57.506668 2693 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:57.506692 kubelet[2693]: W0517 00:23:57.506676 2693 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:57.506692 kubelet[2693]: E0517 00:23:57.506684 2693 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:23:57.506843 kubelet[2693]: E0517 00:23:57.506823 2693 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:57.506843 kubelet[2693]: W0517 00:23:57.506832 2693 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:57.506906 kubelet[2693]: E0517 00:23:57.506847 2693 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:23:57.506985 kubelet[2693]: E0517 00:23:57.506974 2693 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:57.506985 kubelet[2693]: W0517 00:23:57.506984 2693 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:57.507072 kubelet[2693]: E0517 00:23:57.506993 2693 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:23:57.507138 kubelet[2693]: E0517 00:23:57.507128 2693 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:57.507176 kubelet[2693]: W0517 00:23:57.507139 2693 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:57.507176 kubelet[2693]: E0517 00:23:57.507147 2693 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:23:57.507367 kubelet[2693]: E0517 00:23:57.507349 2693 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:57.507367 kubelet[2693]: W0517 00:23:57.507361 2693 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:57.507458 kubelet[2693]: E0517 00:23:57.507369 2693 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:23:57.507539 kubelet[2693]: E0517 00:23:57.507518 2693 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:57.507539 kubelet[2693]: W0517 00:23:57.507531 2693 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:57.507740 kubelet[2693]: E0517 00:23:57.507539 2693 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:23:57.507740 kubelet[2693]: E0517 00:23:57.507695 2693 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:57.507740 kubelet[2693]: W0517 00:23:57.507706 2693 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:57.507740 kubelet[2693]: E0517 00:23:57.507715 2693 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:23:57.536111 kubelet[2693]: E0517 00:23:57.536063 2693 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:57.536111 kubelet[2693]: W0517 00:23:57.536098 2693 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:57.536367 kubelet[2693]: E0517 00:23:57.536126 2693 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:23:57.536590 kubelet[2693]: E0517 00:23:57.536556 2693 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:57.536590 kubelet[2693]: W0517 00:23:57.536572 2693 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:57.536671 kubelet[2693]: E0517 00:23:57.536602 2693 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:23:57.536900 kubelet[2693]: E0517 00:23:57.536879 2693 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:57.536900 kubelet[2693]: W0517 00:23:57.536894 2693 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:57.536984 kubelet[2693]: E0517 00:23:57.536910 2693 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:23:57.537234 kubelet[2693]: E0517 00:23:57.537218 2693 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:57.537234 kubelet[2693]: W0517 00:23:57.537231 2693 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:57.537398 kubelet[2693]: E0517 00:23:57.537298 2693 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:23:57.537644 kubelet[2693]: E0517 00:23:57.537557 2693 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:57.537644 kubelet[2693]: W0517 00:23:57.537570 2693 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:57.537768 kubelet[2693]: E0517 00:23:57.537733 2693 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:23:57.537840 kubelet[2693]: E0517 00:23:57.537802 2693 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:57.537840 kubelet[2693]: W0517 00:23:57.537817 2693 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:57.537937 kubelet[2693]: E0517 00:23:57.537908 2693 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:23:57.538160 kubelet[2693]: E0517 00:23:57.538139 2693 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:57.538160 kubelet[2693]: W0517 00:23:57.538153 2693 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:57.538310 kubelet[2693]: E0517 00:23:57.538229 2693 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:23:57.538846 kubelet[2693]: E0517 00:23:57.538710 2693 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:57.538846 kubelet[2693]: W0517 00:23:57.538726 2693 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:57.538846 kubelet[2693]: E0517 00:23:57.538738 2693 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:23:57.539085 kubelet[2693]: E0517 00:23:57.539073 2693 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:57.539277 kubelet[2693]: W0517 00:23:57.539143 2693 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:57.539596 kubelet[2693]: E0517 00:23:57.539530 2693 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:57.539596 kubelet[2693]: W0517 00:23:57.539542 2693 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:57.539967 kubelet[2693]: E0517 00:23:57.539838 2693 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:57.539967 kubelet[2693]: W0517 00:23:57.539851 2693 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:57.539967 kubelet[2693]: E0517 00:23:57.539886 2693 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:23:57.540446 kubelet[2693]: E0517 00:23:57.540268 2693 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:57.540775 kubelet[2693]: E0517 00:23:57.540645 2693 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:23:57.541048 kubelet[2693]: W0517 00:23:57.540292 2693 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:57.541048 kubelet[2693]: E0517 00:23:57.541043 2693 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:23:57.541312 kubelet[2693]: E0517 00:23:57.541297 2693 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:23:57.541720 kubelet[2693]: E0517 00:23:57.541370 2693 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:57.541960 kubelet[2693]: W0517 00:23:57.541806 2693 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:57.541960 kubelet[2693]: E0517 00:23:57.541832 2693 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:23:57.542546 kubelet[2693]: E0517 00:23:57.542478 2693 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:57.542546 kubelet[2693]: W0517 00:23:57.542491 2693 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:57.542546 kubelet[2693]: E0517 00:23:57.542503 2693 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:23:57.543638 kubelet[2693]: E0517 00:23:57.543528 2693 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:57.543638 kubelet[2693]: W0517 00:23:57.543559 2693 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:57.543638 kubelet[2693]: E0517 00:23:57.543571 2693 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:23:57.544045 kubelet[2693]: E0517 00:23:57.543922 2693 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:57.544045 kubelet[2693]: W0517 00:23:57.543934 2693 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:57.544045 kubelet[2693]: E0517 00:23:57.543965 2693 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:23:57.544403 kubelet[2693]: E0517 00:23:57.544335 2693 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:57.544403 kubelet[2693]: W0517 00:23:57.544347 2693 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:57.544403 kubelet[2693]: E0517 00:23:57.544358 2693 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:23:57.546908 kubelet[2693]: E0517 00:23:57.546400 2693 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:57.546908 kubelet[2693]: W0517 00:23:57.546413 2693 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:57.546908 kubelet[2693]: E0517 00:23:57.546436 2693 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:23:58.028814 containerd[1504]: time="2025-05-17T00:23:58.028546022Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:23:58.030125 containerd[1504]: time="2025-05-17T00:23:58.029561208Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0: active requests=0, bytes read=4441619" May 17 00:23:58.034038 containerd[1504]: time="2025-05-17T00:23:58.034006537Z" level=info msg="ImageCreate event name:\"sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:23:58.037315 containerd[1504]: time="2025-05-17T00:23:58.037143736Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:ce76dd87f11d3fd0054c35ad2e0e9f833748d007f77a9bfe859d0ddcb66fcb2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:23:58.037839 containerd[1504]: time="2025-05-17T00:23:58.037796278Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" with image id \"sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:ce76dd87f11d3fd0054c35ad2e0e9f833748d007f77a9bfe859d0ddcb66fcb2c\", size \"5934282\" in 1.832427077s" May 17 00:23:58.037988 containerd[1504]: time="2025-05-17T00:23:58.037913806Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" returns image reference \"sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676\"" May 17 00:23:58.042403 containerd[1504]: time="2025-05-17T00:23:58.042374223Z" level=info msg="CreateContainer within sandbox \"2186c3b0d493fdcb833e738efe867bab62816ecb2773d0523b700c8601122b31\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 17 00:23:58.058663 containerd[1504]: time="2025-05-17T00:23:58.058443017Z" level=info msg="CreateContainer within sandbox \"2186c3b0d493fdcb833e738efe867bab62816ecb2773d0523b700c8601122b31\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"74d63bcacb834bfd0ea5689ffbcfc0af70f95ad6e8e5bfa52ff9f81bfddd485b\"" May 17 00:23:58.059381 containerd[1504]: time="2025-05-17T00:23:58.059336096Z" level=info msg="StartContainer for \"74d63bcacb834bfd0ea5689ffbcfc0af70f95ad6e8e5bfa52ff9f81bfddd485b\"" May 17 00:23:58.120866 systemd[1]: Started cri-containerd-74d63bcacb834bfd0ea5689ffbcfc0af70f95ad6e8e5bfa52ff9f81bfddd485b.scope - libcontainer container 74d63bcacb834bfd0ea5689ffbcfc0af70f95ad6e8e5bfa52ff9f81bfddd485b. May 17 00:23:58.152545 containerd[1504]: time="2025-05-17T00:23:58.152473211Z" level=info msg="StartContainer for \"74d63bcacb834bfd0ea5689ffbcfc0af70f95ad6e8e5bfa52ff9f81bfddd485b\" returns successfully" May 17 00:23:58.170135 systemd[1]: cri-containerd-74d63bcacb834bfd0ea5689ffbcfc0af70f95ad6e8e5bfa52ff9f81bfddd485b.scope: Deactivated successfully. May 17 00:23:58.197557 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-74d63bcacb834bfd0ea5689ffbcfc0af70f95ad6e8e5bfa52ff9f81bfddd485b-rootfs.mount: Deactivated successfully. 
May 17 00:23:58.228061 containerd[1504]: time="2025-05-17T00:23:58.205016568Z" level=info msg="shim disconnected" id=74d63bcacb834bfd0ea5689ffbcfc0af70f95ad6e8e5bfa52ff9f81bfddd485b namespace=k8s.io May 17 00:23:58.228218 containerd[1504]: time="2025-05-17T00:23:58.228070990Z" level=warning msg="cleaning up after shim disconnected" id=74d63bcacb834bfd0ea5689ffbcfc0af70f95ad6e8e5bfa52ff9f81bfddd485b namespace=k8s.io May 17 00:23:58.228218 containerd[1504]: time="2025-05-17T00:23:58.228088402Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 17 00:23:58.462787 containerd[1504]: time="2025-05-17T00:23:58.462723409Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.0\"" May 17 00:23:59.374999 kubelet[2693]: E0517 00:23:59.374908 2693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ht64s" podUID="79f08894-50f6-4f34-ab06-f713767f2567" May 17 00:24:00.861780 kubelet[2693]: I0517 00:24:00.861377 2693 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 00:24:01.374592 kubelet[2693]: E0517 00:24:01.374536 2693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ht64s" podUID="79f08894-50f6-4f34-ab06-f713767f2567" May 17 00:24:02.610375 containerd[1504]: time="2025-05-17T00:24:02.610315086Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:24:02.611386 containerd[1504]: time="2025-05-17T00:24:02.611345331Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.0: active requests=0, bytes read=70300568" May 17 00:24:02.612259 containerd[1504]: time="2025-05-17T00:24:02.612206643Z" level=info msg="ImageCreate event name:\"sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:24:02.614164 containerd[1504]: time="2025-05-17T00:24:02.614128477Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:3dd06656abdc03fbd51782d5f6fe4d70e6825a1c0c5bce2a165bbd2ff9e0f7df\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:24:02.614954 containerd[1504]: time="2025-05-17T00:24:02.614503294Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.0\" with image id \"sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:3dd06656abdc03fbd51782d5f6fe4d70e6825a1c0c5bce2a165bbd2ff9e0f7df\", size \"71793271\" in 4.151738107s" May 17 00:24:02.614954 containerd[1504]: time="2025-05-17T00:24:02.614526196Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.0\" returns image reference \"sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185\"" May 17 00:24:02.617730 containerd[1504]: time="2025-05-17T00:24:02.617701701Z" level=info msg="CreateContainer within sandbox \"2186c3b0d493fdcb833e738efe867bab62816ecb2773d0523b700c8601122b31\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 17 00:24:02.668639 containerd[1504]: 
time="2025-05-17T00:24:02.668577744Z" level=info msg="CreateContainer within sandbox \"2186c3b0d493fdcb833e738efe867bab62816ecb2773d0523b700c8601122b31\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"a92422ee6bd63a54ecd96f296e4ee32ea121b2678ec22cd82627a62b112750d8\"" May 17 00:24:02.669657 containerd[1504]: time="2025-05-17T00:24:02.669442291Z" level=info msg="StartContainer for \"a92422ee6bd63a54ecd96f296e4ee32ea121b2678ec22cd82627a62b112750d8\"" May 17 00:24:02.702421 systemd[1]: Started cri-containerd-a92422ee6bd63a54ecd96f296e4ee32ea121b2678ec22cd82627a62b112750d8.scope - libcontainer container a92422ee6bd63a54ecd96f296e4ee32ea121b2678ec22cd82627a62b112750d8. May 17 00:24:02.734136 containerd[1504]: time="2025-05-17T00:24:02.733490428Z" level=info msg="StartContainer for \"a92422ee6bd63a54ecd96f296e4ee32ea121b2678ec22cd82627a62b112750d8\" returns successfully" May 17 00:24:03.152892 systemd[1]: cri-containerd-a92422ee6bd63a54ecd96f296e4ee32ea121b2678ec22cd82627a62b112750d8.scope: Deactivated successfully. May 17 00:24:03.178221 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a92422ee6bd63a54ecd96f296e4ee32ea121b2678ec22cd82627a62b112750d8-rootfs.mount: Deactivated successfully. May 17 00:24:03.184270 containerd[1504]: time="2025-05-17T00:24:03.184168129Z" level=info msg="shim disconnected" id=a92422ee6bd63a54ecd96f296e4ee32ea121b2678ec22cd82627a62b112750d8 namespace=k8s.io May 17 00:24:03.184380 containerd[1504]: time="2025-05-17T00:24:03.184276692Z" level=warning msg="cleaning up after shim disconnected" id=a92422ee6bd63a54ecd96f296e4ee32ea121b2678ec22cd82627a62b112750d8 namespace=k8s.io May 17 00:24:03.184380 containerd[1504]: time="2025-05-17T00:24:03.184287611Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 17 00:24:03.237266 kubelet[2693]: I0517 00:24:03.237218 2693 kubelet_node_status.go:501] "Fast updating node status as it just became ready" May 17 00:24:03.277348 systemd[1]: Created slice kubepods-burstable-pod141e29e6_7c60_4ef0_8843_86313045c72f.slice - libcontainer container kubepods-burstable-pod141e29e6_7c60_4ef0_8843_86313045c72f.slice. May 17 00:24:03.292702 systemd[1]: Created slice kubepods-besteffort-pod0e2dc52f_271c_43c5_9af2_6be78554f3c4.slice - libcontainer container kubepods-besteffort-pod0e2dc52f_271c_43c5_9af2_6be78554f3c4.slice. May 17 00:24:03.303338 systemd[1]: Created slice kubepods-besteffort-pod740cc25f_7011_4caa_a72b_ea2dc86d85ee.slice - libcontainer container kubepods-besteffort-pod740cc25f_7011_4caa_a72b_ea2dc86d85ee.slice. May 17 00:24:03.308952 systemd[1]: Created slice kubepods-besteffort-pod1bde9b24_cd69_4946_af9c_950fec8a6c4b.slice - libcontainer container kubepods-besteffort-pod1bde9b24_cd69_4946_af9c_950fec8a6c4b.slice. May 17 00:24:03.316726 systemd[1]: Created slice kubepods-besteffort-pod428ec0d8_1aeb_46da_b77e_f1dfa05702b0.slice - libcontainer container kubepods-besteffort-pod428ec0d8_1aeb_46da_b77e_f1dfa05702b0.slice. May 17 00:24:03.324665 systemd[1]: Created slice kubepods-burstable-pod30984a25_4953_4a27_9699_f4c7434a26ed.slice - libcontainer container kubepods-burstable-pod30984a25_4953_4a27_9699_f4c7434a26ed.slice. May 17 00:24:03.329271 systemd[1]: Created slice kubepods-besteffort-pod114d0358_ddcf_4c04_bb03_89411102b031.slice - libcontainer container kubepods-besteffort-pod114d0358_ddcf_4c04_bb03_89411102b031.slice. 
May 17 00:24:03.379405 systemd[1]: Created slice kubepods-besteffort-pod79f08894_50f6_4f34_ab06_f713767f2567.slice - libcontainer container kubepods-besteffort-pod79f08894_50f6_4f34_ab06_f713767f2567.slice. May 17 00:24:03.381848 containerd[1504]: time="2025-05-17T00:24:03.381820844Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ht64s,Uid:79f08894-50f6-4f34-ab06-f713767f2567,Namespace:calico-system,Attempt:0,}" May 17 00:24:03.384403 kubelet[2693]: I0517 00:24:03.384209 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wsfv7\" (UniqueName: \"kubernetes.io/projected/428ec0d8-1aeb-46da-b77e-f1dfa05702b0-kube-api-access-wsfv7\") pod \"calico-kube-controllers-8658f94dbd-xvh74\" (UID: \"428ec0d8-1aeb-46da-b77e-f1dfa05702b0\") " pod="calico-system/calico-kube-controllers-8658f94dbd-xvh74" May 17 00:24:03.384531 kubelet[2693]: I0517 00:24:03.384421 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bde9b24-cd69-4946-af9c-950fec8a6c4b-config\") pod \"goldmane-78d55f7ddc-9tht4\" (UID: \"1bde9b24-cd69-4946-af9c-950fec8a6c4b\") " pod="calico-system/goldmane-78d55f7ddc-9tht4" May 17 00:24:03.384631 kubelet[2693]: I0517 00:24:03.384604 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bde9b24-cd69-4946-af9c-950fec8a6c4b-goldmane-ca-bundle\") pod \"goldmane-78d55f7ddc-9tht4\" (UID: \"1bde9b24-cd69-4946-af9c-950fec8a6c4b\") " pod="calico-system/goldmane-78d55f7ddc-9tht4" May 17 00:24:03.384689 kubelet[2693]: I0517 00:24:03.384635 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hc5qt\" (UniqueName: \"kubernetes.io/projected/1bde9b24-cd69-4946-af9c-950fec8a6c4b-kube-api-access-hc5qt\") pod \"goldmane-78d55f7ddc-9tht4\" (UID: \"1bde9b24-cd69-4946-af9c-950fec8a6c4b\") " pod="calico-system/goldmane-78d55f7ddc-9tht4" May 17 00:24:03.384689 kubelet[2693]: I0517 00:24:03.384675 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djlth\" (UniqueName: \"kubernetes.io/projected/740cc25f-7011-4caa-a72b-ea2dc86d85ee-kube-api-access-djlth\") pod \"whisker-955f4745b-jt295\" (UID: \"740cc25f-7011-4caa-a72b-ea2dc86d85ee\") " pod="calico-system/whisker-955f4745b-jt295" May 17 00:24:03.384793 kubelet[2693]: I0517 00:24:03.384693 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/0e2dc52f-271c-43c5-9af2-6be78554f3c4-calico-apiserver-certs\") pod \"calico-apiserver-555577f7d7-4kjkb\" (UID: \"0e2dc52f-271c-43c5-9af2-6be78554f3c4\") " pod="calico-apiserver/calico-apiserver-555577f7d7-4kjkb" May 17 00:24:03.384951 kubelet[2693]: I0517 00:24:03.384932 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/740cc25f-7011-4caa-a72b-ea2dc86d85ee-whisker-ca-bundle\") pod \"whisker-955f4745b-jt295\" (UID: \"740cc25f-7011-4caa-a72b-ea2dc86d85ee\") " pod="calico-system/whisker-955f4745b-jt295" May 17 00:24:03.384988 kubelet[2693]: I0517 00:24:03.384971 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kmdq\" 
(UniqueName: \"kubernetes.io/projected/141e29e6-7c60-4ef0-8843-86313045c72f-kube-api-access-8kmdq\") pod \"coredns-668d6bf9bc-8bdt9\" (UID: \"141e29e6-7c60-4ef0-8843-86313045c72f\") " pod="kube-system/coredns-668d6bf9bc-8bdt9" May 17 00:24:03.385025 kubelet[2693]: I0517 00:24:03.384998 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/30984a25-4953-4a27-9699-f4c7434a26ed-config-volume\") pod \"coredns-668d6bf9bc-2n42x\" (UID: \"30984a25-4953-4a27-9699-f4c7434a26ed\") " pod="kube-system/coredns-668d6bf9bc-2n42x" May 17 00:24:03.385215 kubelet[2693]: I0517 00:24:03.385188 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/141e29e6-7c60-4ef0-8843-86313045c72f-config-volume\") pod \"coredns-668d6bf9bc-8bdt9\" (UID: \"141e29e6-7c60-4ef0-8843-86313045c72f\") " pod="kube-system/coredns-668d6bf9bc-8bdt9" May 17 00:24:03.385215 kubelet[2693]: I0517 00:24:03.385217 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lnc5\" (UniqueName: \"kubernetes.io/projected/0e2dc52f-271c-43c5-9af2-6be78554f3c4-kube-api-access-4lnc5\") pod \"calico-apiserver-555577f7d7-4kjkb\" (UID: \"0e2dc52f-271c-43c5-9af2-6be78554f3c4\") " pod="calico-apiserver/calico-apiserver-555577f7d7-4kjkb" May 17 00:24:03.385309 kubelet[2693]: I0517 00:24:03.385252 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/1bde9b24-cd69-4946-af9c-950fec8a6c4b-goldmane-key-pair\") pod \"goldmane-78d55f7ddc-9tht4\" (UID: \"1bde9b24-cd69-4946-af9c-950fec8a6c4b\") " pod="calico-system/goldmane-78d55f7ddc-9tht4" May 17 00:24:03.385409 kubelet[2693]: I0517 00:24:03.385386 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8k5t\" (UniqueName: \"kubernetes.io/projected/30984a25-4953-4a27-9699-f4c7434a26ed-kube-api-access-q8k5t\") pod \"coredns-668d6bf9bc-2n42x\" (UID: \"30984a25-4953-4a27-9699-f4c7434a26ed\") " pod="kube-system/coredns-668d6bf9bc-2n42x" May 17 00:24:03.385409 kubelet[2693]: I0517 00:24:03.385415 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/740cc25f-7011-4caa-a72b-ea2dc86d85ee-whisker-backend-key-pair\") pod \"whisker-955f4745b-jt295\" (UID: \"740cc25f-7011-4caa-a72b-ea2dc86d85ee\") " pod="calico-system/whisker-955f4745b-jt295" May 17 00:24:03.385887 kubelet[2693]: I0517 00:24:03.385431 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/428ec0d8-1aeb-46da-b77e-f1dfa05702b0-tigera-ca-bundle\") pod \"calico-kube-controllers-8658f94dbd-xvh74\" (UID: \"428ec0d8-1aeb-46da-b77e-f1dfa05702b0\") " pod="calico-system/calico-kube-controllers-8658f94dbd-xvh74" May 17 00:24:03.385887 kubelet[2693]: I0517 00:24:03.385571 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/114d0358-ddcf-4c04-bb03-89411102b031-calico-apiserver-certs\") pod \"calico-apiserver-555577f7d7-qfnvl\" (UID: \"114d0358-ddcf-4c04-bb03-89411102b031\") " 
pod="calico-apiserver/calico-apiserver-555577f7d7-qfnvl" May 17 00:24:03.385887 kubelet[2693]: I0517 00:24:03.385591 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glfp6\" (UniqueName: \"kubernetes.io/projected/114d0358-ddcf-4c04-bb03-89411102b031-kube-api-access-glfp6\") pod \"calico-apiserver-555577f7d7-qfnvl\" (UID: \"114d0358-ddcf-4c04-bb03-89411102b031\") " pod="calico-apiserver/calico-apiserver-555577f7d7-qfnvl" May 17 00:24:03.477782 containerd[1504]: time="2025-05-17T00:24:03.477439230Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.0\"" May 17 00:24:03.583473 containerd[1504]: time="2025-05-17T00:24:03.583437270Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-8bdt9,Uid:141e29e6-7c60-4ef0-8843-86313045c72f,Namespace:kube-system,Attempt:0,}" May 17 00:24:03.589674 containerd[1504]: time="2025-05-17T00:24:03.589124958Z" level=error msg="Failed to destroy network for sandbox \"3e68f8552d81667047e7740ed81b005eb562625e975c37459feb2cf593b58316\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:24:03.593261 containerd[1504]: time="2025-05-17T00:24:03.593199417Z" level=error msg="encountered an error cleaning up failed sandbox \"3e68f8552d81667047e7740ed81b005eb562625e975c37459feb2cf593b58316\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:24:03.593346 containerd[1504]: time="2025-05-17T00:24:03.593294452Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ht64s,Uid:79f08894-50f6-4f34-ab06-f713767f2567,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3e68f8552d81667047e7740ed81b005eb562625e975c37459feb2cf593b58316\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:24:03.595363 kubelet[2693]: E0517 00:24:03.594877 2693 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e68f8552d81667047e7740ed81b005eb562625e975c37459feb2cf593b58316\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:24:03.595363 kubelet[2693]: E0517 00:24:03.594938 2693 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e68f8552d81667047e7740ed81b005eb562625e975c37459feb2cf593b58316\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ht64s" May 17 00:24:03.595363 kubelet[2693]: E0517 00:24:03.594958 2693 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e68f8552d81667047e7740ed81b005eb562625e975c37459feb2cf593b58316\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ht64s" May 17 00:24:03.595673 kubelet[2693]: E0517 00:24:03.595000 2693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-ht64s_calico-system(79f08894-50f6-4f34-ab06-f713767f2567)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-ht64s_calico-system(79f08894-50f6-4f34-ab06-f713767f2567)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3e68f8552d81667047e7740ed81b005eb562625e975c37459feb2cf593b58316\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-ht64s" podUID="79f08894-50f6-4f34-ab06-f713767f2567" May 17 00:24:03.599111 containerd[1504]: time="2025-05-17T00:24:03.599088570Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-555577f7d7-4kjkb,Uid:0e2dc52f-271c-43c5-9af2-6be78554f3c4,Namespace:calico-apiserver,Attempt:0,}" May 17 00:24:03.612915 containerd[1504]: time="2025-05-17T00:24:03.612690136Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-955f4745b-jt295,Uid:740cc25f-7011-4caa-a72b-ea2dc86d85ee,Namespace:calico-system,Attempt:0,}" May 17 00:24:03.614413 containerd[1504]: time="2025-05-17T00:24:03.614394295Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-9tht4,Uid:1bde9b24-cd69-4946-af9c-950fec8a6c4b,Namespace:calico-system,Attempt:0,}" May 17 00:24:03.622302 containerd[1504]: time="2025-05-17T00:24:03.622257320Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8658f94dbd-xvh74,Uid:428ec0d8-1aeb-46da-b77e-f1dfa05702b0,Namespace:calico-system,Attempt:0,}" May 17 00:24:03.634728 containerd[1504]: time="2025-05-17T00:24:03.634467077Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2n42x,Uid:30984a25-4953-4a27-9699-f4c7434a26ed,Namespace:kube-system,Attempt:0,}" May 17 00:24:03.637513 containerd[1504]: time="2025-05-17T00:24:03.637348166Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-555577f7d7-qfnvl,Uid:114d0358-ddcf-4c04-bb03-89411102b031,Namespace:calico-apiserver,Attempt:0,}" May 17 00:24:03.694951 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3e68f8552d81667047e7740ed81b005eb562625e975c37459feb2cf593b58316-shm.mount: Deactivated successfully. 
May 17 00:24:03.735358 containerd[1504]: time="2025-05-17T00:24:03.734725233Z" level=error msg="Failed to destroy network for sandbox \"8b9b7e04ad58ffc6fb0e0ef161ea7cf912bb65a8b7e2d84210e35f65b2c46b0e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:24:03.737112 containerd[1504]: time="2025-05-17T00:24:03.736977271Z" level=error msg="encountered an error cleaning up failed sandbox \"8b9b7e04ad58ffc6fb0e0ef161ea7cf912bb65a8b7e2d84210e35f65b2c46b0e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:24:03.738389 containerd[1504]: time="2025-05-17T00:24:03.738283079Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-8bdt9,Uid:141e29e6-7c60-4ef0-8843-86313045c72f,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8b9b7e04ad58ffc6fb0e0ef161ea7cf912bb65a8b7e2d84210e35f65b2c46b0e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:24:03.739384 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8b9b7e04ad58ffc6fb0e0ef161ea7cf912bb65a8b7e2d84210e35f65b2c46b0e-shm.mount: Deactivated successfully. May 17 00:24:03.744472 kubelet[2693]: E0517 00:24:03.744080 2693 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b9b7e04ad58ffc6fb0e0ef161ea7cf912bb65a8b7e2d84210e35f65b2c46b0e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:24:03.744472 kubelet[2693]: E0517 00:24:03.744190 2693 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b9b7e04ad58ffc6fb0e0ef161ea7cf912bb65a8b7e2d84210e35f65b2c46b0e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-8bdt9" May 17 00:24:03.744472 kubelet[2693]: E0517 00:24:03.744222 2693 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b9b7e04ad58ffc6fb0e0ef161ea7cf912bb65a8b7e2d84210e35f65b2c46b0e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-8bdt9" May 17 00:24:03.745983 kubelet[2693]: E0517 00:24:03.744436 2693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-8bdt9_kube-system(141e29e6-7c60-4ef0-8843-86313045c72f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-8bdt9_kube-system(141e29e6-7c60-4ef0-8843-86313045c72f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8b9b7e04ad58ffc6fb0e0ef161ea7cf912bb65a8b7e2d84210e35f65b2c46b0e\\\": plugin type=\\\"calico\\\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-8bdt9" podUID="141e29e6-7c60-4ef0-8843-86313045c72f" May 17 00:24:03.811306 containerd[1504]: time="2025-05-17T00:24:03.811204683Z" level=error msg="Failed to destroy network for sandbox \"97063cbd442df36c0cd1e6efe9a76cb57e78f8994f035651205385670295847b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:24:03.812439 containerd[1504]: time="2025-05-17T00:24:03.812016332Z" level=error msg="encountered an error cleaning up failed sandbox \"97063cbd442df36c0cd1e6efe9a76cb57e78f8994f035651205385670295847b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:24:03.812439 containerd[1504]: time="2025-05-17T00:24:03.812070703Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-955f4745b-jt295,Uid:740cc25f-7011-4caa-a72b-ea2dc86d85ee,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"97063cbd442df36c0cd1e6efe9a76cb57e78f8994f035651205385670295847b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:24:03.812529 kubelet[2693]: E0517 00:24:03.812314 2693 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"97063cbd442df36c0cd1e6efe9a76cb57e78f8994f035651205385670295847b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:24:03.812529 kubelet[2693]: E0517 00:24:03.812396 2693 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"97063cbd442df36c0cd1e6efe9a76cb57e78f8994f035651205385670295847b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-955f4745b-jt295" May 17 00:24:03.812794 kubelet[2693]: E0517 00:24:03.812419 2693 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"97063cbd442df36c0cd1e6efe9a76cb57e78f8994f035651205385670295847b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-955f4745b-jt295" May 17 00:24:03.812794 kubelet[2693]: E0517 00:24:03.812720 2693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-955f4745b-jt295_calico-system(740cc25f-7011-4caa-a72b-ea2dc86d85ee)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-955f4745b-jt295_calico-system(740cc25f-7011-4caa-a72b-ea2dc86d85ee)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"97063cbd442df36c0cd1e6efe9a76cb57e78f8994f035651205385670295847b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-955f4745b-jt295" podUID="740cc25f-7011-4caa-a72b-ea2dc86d85ee" May 17 00:24:03.822522 containerd[1504]: time="2025-05-17T00:24:03.822218327Z" level=error msg="Failed to destroy network for sandbox \"bfebe88ae68f70d0c1629289739522a76e4ce066bf44937c5f8b06fe77478e49\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:24:03.822646 containerd[1504]: time="2025-05-17T00:24:03.822567876Z" level=error msg="encountered an error cleaning up failed sandbox \"bfebe88ae68f70d0c1629289739522a76e4ce066bf44937c5f8b06fe77478e49\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:24:03.822646 containerd[1504]: time="2025-05-17T00:24:03.822634660Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-555577f7d7-4kjkb,Uid:0e2dc52f-271c-43c5-9af2-6be78554f3c4,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"bfebe88ae68f70d0c1629289739522a76e4ce066bf44937c5f8b06fe77478e49\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:24:03.823275 kubelet[2693]: E0517 00:24:03.823165 2693 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bfebe88ae68f70d0c1629289739522a76e4ce066bf44937c5f8b06fe77478e49\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:24:03.823402 kubelet[2693]: E0517 00:24:03.823234 2693 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bfebe88ae68f70d0c1629289739522a76e4ce066bf44937c5f8b06fe77478e49\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-555577f7d7-4kjkb" May 17 00:24:03.823402 kubelet[2693]: E0517 00:24:03.823364 2693 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bfebe88ae68f70d0c1629289739522a76e4ce066bf44937c5f8b06fe77478e49\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-555577f7d7-4kjkb" May 17 00:24:03.824550 kubelet[2693]: E0517 00:24:03.823711 2693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-555577f7d7-4kjkb_calico-apiserver(0e2dc52f-271c-43c5-9af2-6be78554f3c4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-apiserver-555577f7d7-4kjkb_calico-apiserver(0e2dc52f-271c-43c5-9af2-6be78554f3c4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bfebe88ae68f70d0c1629289739522a76e4ce066bf44937c5f8b06fe77478e49\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-555577f7d7-4kjkb" podUID="0e2dc52f-271c-43c5-9af2-6be78554f3c4" May 17 00:24:03.844726 containerd[1504]: time="2025-05-17T00:24:03.844358654Z" level=error msg="Failed to destroy network for sandbox \"c3d3ab3e66e1b19d5d0a6670d5349331d2218d6c3775034b1f23ff75e9eed5d8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:24:03.844726 containerd[1504]: time="2025-05-17T00:24:03.844657600Z" level=error msg="encountered an error cleaning up failed sandbox \"c3d3ab3e66e1b19d5d0a6670d5349331d2218d6c3775034b1f23ff75e9eed5d8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:24:03.844726 containerd[1504]: time="2025-05-17T00:24:03.844728302Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2n42x,Uid:30984a25-4953-4a27-9699-f4c7434a26ed,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c3d3ab3e66e1b19d5d0a6670d5349331d2218d6c3775034b1f23ff75e9eed5d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:24:03.845006 kubelet[2693]: E0517 00:24:03.844947 2693 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c3d3ab3e66e1b19d5d0a6670d5349331d2218d6c3775034b1f23ff75e9eed5d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:24:03.845133 kubelet[2693]: E0517 00:24:03.845012 2693 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c3d3ab3e66e1b19d5d0a6670d5349331d2218d6c3775034b1f23ff75e9eed5d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-2n42x" May 17 00:24:03.845133 kubelet[2693]: E0517 00:24:03.845039 2693 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c3d3ab3e66e1b19d5d0a6670d5349331d2218d6c3775034b1f23ff75e9eed5d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-2n42x" May 17 00:24:03.846809 kubelet[2693]: E0517 00:24:03.845118 2693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"coredns-668d6bf9bc-2n42x_kube-system(30984a25-4953-4a27-9699-f4c7434a26ed)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-2n42x_kube-system(30984a25-4953-4a27-9699-f4c7434a26ed)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c3d3ab3e66e1b19d5d0a6670d5349331d2218d6c3775034b1f23ff75e9eed5d8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-2n42x" podUID="30984a25-4953-4a27-9699-f4c7434a26ed" May 17 00:24:03.869063 containerd[1504]: time="2025-05-17T00:24:03.869018528Z" level=error msg="Failed to destroy network for sandbox \"dc50902ba7be5eadfbc70969cbb7076d80d327f57115b638c06941e26fa3a0ac\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:24:03.870897 containerd[1504]: time="2025-05-17T00:24:03.870866044Z" level=error msg="encountered an error cleaning up failed sandbox \"dc50902ba7be5eadfbc70969cbb7076d80d327f57115b638c06941e26fa3a0ac\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:24:03.871142 containerd[1504]: time="2025-05-17T00:24:03.871036832Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-9tht4,Uid:1bde9b24-cd69-4946-af9c-950fec8a6c4b,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"dc50902ba7be5eadfbc70969cbb7076d80d327f57115b638c06941e26fa3a0ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:24:03.871409 kubelet[2693]: E0517 00:24:03.871342 2693 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc50902ba7be5eadfbc70969cbb7076d80d327f57115b638c06941e26fa3a0ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:24:03.872004 kubelet[2693]: E0517 00:24:03.871448 2693 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc50902ba7be5eadfbc70969cbb7076d80d327f57115b638c06941e26fa3a0ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-78d55f7ddc-9tht4" May 17 00:24:03.872004 kubelet[2693]: E0517 00:24:03.871475 2693 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc50902ba7be5eadfbc70969cbb7076d80d327f57115b638c06941e26fa3a0ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-78d55f7ddc-9tht4" May 17 00:24:03.872004 kubelet[2693]: E0517 00:24:03.871528 2693 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-78d55f7ddc-9tht4_calico-system(1bde9b24-cd69-4946-af9c-950fec8a6c4b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-78d55f7ddc-9tht4_calico-system(1bde9b24-cd69-4946-af9c-950fec8a6c4b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dc50902ba7be5eadfbc70969cbb7076d80d327f57115b638c06941e26fa3a0ac\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-78d55f7ddc-9tht4" podUID="1bde9b24-cd69-4946-af9c-950fec8a6c4b" May 17 00:24:03.890727 containerd[1504]: time="2025-05-17T00:24:03.890670678Z" level=error msg="Failed to destroy network for sandbox \"e7e838287fc450f89fda5980aae76692390f23cf4a4607a91c3c87d773b3b812\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:24:03.892845 containerd[1504]: time="2025-05-17T00:24:03.891492726Z" level=error msg="encountered an error cleaning up failed sandbox \"e7e838287fc450f89fda5980aae76692390f23cf4a4607a91c3c87d773b3b812\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:24:03.892845 containerd[1504]: time="2025-05-17T00:24:03.891561695Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8658f94dbd-xvh74,Uid:428ec0d8-1aeb-46da-b77e-f1dfa05702b0,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e7e838287fc450f89fda5980aae76692390f23cf4a4607a91c3c87d773b3b812\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:24:03.892995 kubelet[2693]: E0517 00:24:03.891916 2693 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7e838287fc450f89fda5980aae76692390f23cf4a4607a91c3c87d773b3b812\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:24:03.892995 kubelet[2693]: E0517 00:24:03.891983 2693 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7e838287fc450f89fda5980aae76692390f23cf4a4607a91c3c87d773b3b812\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-8658f94dbd-xvh74" May 17 00:24:03.892995 kubelet[2693]: E0517 00:24:03.892000 2693 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7e838287fc450f89fda5980aae76692390f23cf4a4607a91c3c87d773b3b812\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-8658f94dbd-xvh74" May 17 00:24:03.893067 kubelet[2693]: E0517 00:24:03.892034 2693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-8658f94dbd-xvh74_calico-system(428ec0d8-1aeb-46da-b77e-f1dfa05702b0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-8658f94dbd-xvh74_calico-system(428ec0d8-1aeb-46da-b77e-f1dfa05702b0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e7e838287fc450f89fda5980aae76692390f23cf4a4607a91c3c87d773b3b812\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-8658f94dbd-xvh74" podUID="428ec0d8-1aeb-46da-b77e-f1dfa05702b0" May 17 00:24:03.898567 containerd[1504]: time="2025-05-17T00:24:03.898523603Z" level=error msg="Failed to destroy network for sandbox \"fa936c9511dea8f9a9bc6a8e691cdca43c7192f5314991837c033de699e371aa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:24:03.898867 containerd[1504]: time="2025-05-17T00:24:03.898821015Z" level=error msg="encountered an error cleaning up failed sandbox \"fa936c9511dea8f9a9bc6a8e691cdca43c7192f5314991837c033de699e371aa\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:24:03.898906 containerd[1504]: time="2025-05-17T00:24:03.898871629Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-555577f7d7-qfnvl,Uid:114d0358-ddcf-4c04-bb03-89411102b031,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fa936c9511dea8f9a9bc6a8e691cdca43c7192f5314991837c033de699e371aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:24:03.899503 kubelet[2693]: E0517 00:24:03.899079 2693 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa936c9511dea8f9a9bc6a8e691cdca43c7192f5314991837c033de699e371aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:24:03.899503 kubelet[2693]: E0517 00:24:03.899134 2693 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa936c9511dea8f9a9bc6a8e691cdca43c7192f5314991837c033de699e371aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-555577f7d7-qfnvl" May 17 00:24:03.899503 kubelet[2693]: E0517 00:24:03.899167 2693 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa936c9511dea8f9a9bc6a8e691cdca43c7192f5314991837c033de699e371aa\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-555577f7d7-qfnvl" May 17 00:24:03.899677 kubelet[2693]: E0517 00:24:03.899217 2693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-555577f7d7-qfnvl_calico-apiserver(114d0358-ddcf-4c04-bb03-89411102b031)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-555577f7d7-qfnvl_calico-apiserver(114d0358-ddcf-4c04-bb03-89411102b031)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fa936c9511dea8f9a9bc6a8e691cdca43c7192f5314991837c033de699e371aa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-555577f7d7-qfnvl" podUID="114d0358-ddcf-4c04-bb03-89411102b031" May 17 00:24:04.480398 kubelet[2693]: I0517 00:24:04.480329 2693 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8b9b7e04ad58ffc6fb0e0ef161ea7cf912bb65a8b7e2d84210e35f65b2c46b0e" May 17 00:24:04.485671 kubelet[2693]: I0517 00:24:04.483613 2693 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3e68f8552d81667047e7740ed81b005eb562625e975c37459feb2cf593b58316" May 17 00:24:04.490083 containerd[1504]: time="2025-05-17T00:24:04.489920589Z" level=info msg="StopPodSandbox for \"8b9b7e04ad58ffc6fb0e0ef161ea7cf912bb65a8b7e2d84210e35f65b2c46b0e\"" May 17 00:24:04.492395 containerd[1504]: time="2025-05-17T00:24:04.490795286Z" level=info msg="StopPodSandbox for \"3e68f8552d81667047e7740ed81b005eb562625e975c37459feb2cf593b58316\"" May 17 00:24:04.492395 containerd[1504]: time="2025-05-17T00:24:04.492312819Z" level=info msg="Ensure that sandbox 3e68f8552d81667047e7740ed81b005eb562625e975c37459feb2cf593b58316 in task-service has been cleanup successfully" May 17 00:24:04.492740 containerd[1504]: time="2025-05-17T00:24:04.492595495Z" level=info msg="Ensure that sandbox 8b9b7e04ad58ffc6fb0e0ef161ea7cf912bb65a8b7e2d84210e35f65b2c46b0e in task-service has been cleanup successfully" May 17 00:24:04.501920 kubelet[2693]: I0517 00:24:04.501744 2693 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c3d3ab3e66e1b19d5d0a6670d5349331d2218d6c3775034b1f23ff75e9eed5d8" May 17 00:24:04.506296 containerd[1504]: time="2025-05-17T00:24:04.506223676Z" level=info msg="StopPodSandbox for \"c3d3ab3e66e1b19d5d0a6670d5349331d2218d6c3775034b1f23ff75e9eed5d8\"" May 17 00:24:04.508534 containerd[1504]: time="2025-05-17T00:24:04.508497466Z" level=info msg="Ensure that sandbox c3d3ab3e66e1b19d5d0a6670d5349331d2218d6c3775034b1f23ff75e9eed5d8 in task-service has been cleanup successfully" May 17 00:24:04.513111 kubelet[2693]: I0517 00:24:04.513026 2693 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="97063cbd442df36c0cd1e6efe9a76cb57e78f8994f035651205385670295847b" May 17 00:24:04.514533 containerd[1504]: time="2025-05-17T00:24:04.514312343Z" level=info msg="StopPodSandbox for \"97063cbd442df36c0cd1e6efe9a76cb57e78f8994f035651205385670295847b\"" May 17 00:24:04.514533 containerd[1504]: time="2025-05-17T00:24:04.514487959Z" level=info msg="Ensure that sandbox 97063cbd442df36c0cd1e6efe9a76cb57e78f8994f035651205385670295847b in task-service has been cleanup successfully" May 17 
00:24:04.519011 kubelet[2693]: I0517 00:24:04.518600 2693 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e7e838287fc450f89fda5980aae76692390f23cf4a4607a91c3c87d773b3b812" May 17 00:24:04.519654 containerd[1504]: time="2025-05-17T00:24:04.519629173Z" level=info msg="StopPodSandbox for \"e7e838287fc450f89fda5980aae76692390f23cf4a4607a91c3c87d773b3b812\"" May 17 00:24:04.519783 containerd[1504]: time="2025-05-17T00:24:04.519759245Z" level=info msg="Ensure that sandbox e7e838287fc450f89fda5980aae76692390f23cf4a4607a91c3c87d773b3b812 in task-service has been cleanup successfully" May 17 00:24:04.523799 kubelet[2693]: I0517 00:24:04.523781 2693 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dc50902ba7be5eadfbc70969cbb7076d80d327f57115b638c06941e26fa3a0ac" May 17 00:24:04.525553 containerd[1504]: time="2025-05-17T00:24:04.524932469Z" level=info msg="StopPodSandbox for \"dc50902ba7be5eadfbc70969cbb7076d80d327f57115b638c06941e26fa3a0ac\"" May 17 00:24:04.525983 containerd[1504]: time="2025-05-17T00:24:04.525507137Z" level=info msg="Ensure that sandbox dc50902ba7be5eadfbc70969cbb7076d80d327f57115b638c06941e26fa3a0ac in task-service has been cleanup successfully" May 17 00:24:04.526665 kubelet[2693]: I0517 00:24:04.526319 2693 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fa936c9511dea8f9a9bc6a8e691cdca43c7192f5314991837c033de699e371aa" May 17 00:24:04.526860 containerd[1504]: time="2025-05-17T00:24:04.526837081Z" level=info msg="StopPodSandbox for \"fa936c9511dea8f9a9bc6a8e691cdca43c7192f5314991837c033de699e371aa\"" May 17 00:24:04.529271 containerd[1504]: time="2025-05-17T00:24:04.529223491Z" level=info msg="Ensure that sandbox fa936c9511dea8f9a9bc6a8e691cdca43c7192f5314991837c033de699e371aa in task-service has been cleanup successfully" May 17 00:24:04.534956 kubelet[2693]: I0517 00:24:04.534617 2693 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bfebe88ae68f70d0c1629289739522a76e4ce066bf44937c5f8b06fe77478e49" May 17 00:24:04.535757 containerd[1504]: time="2025-05-17T00:24:04.535417803Z" level=info msg="StopPodSandbox for \"bfebe88ae68f70d0c1629289739522a76e4ce066bf44937c5f8b06fe77478e49\"" May 17 00:24:04.535757 containerd[1504]: time="2025-05-17T00:24:04.535563524Z" level=info msg="Ensure that sandbox bfebe88ae68f70d0c1629289739522a76e4ce066bf44937c5f8b06fe77478e49 in task-service has been cleanup successfully" May 17 00:24:04.616515 containerd[1504]: time="2025-05-17T00:24:04.616470919Z" level=error msg="StopPodSandbox for \"3e68f8552d81667047e7740ed81b005eb562625e975c37459feb2cf593b58316\" failed" error="failed to destroy network for sandbox \"3e68f8552d81667047e7740ed81b005eb562625e975c37459feb2cf593b58316\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:24:04.617150 containerd[1504]: time="2025-05-17T00:24:04.616966131Z" level=error msg="StopPodSandbox for \"8b9b7e04ad58ffc6fb0e0ef161ea7cf912bb65a8b7e2d84210e35f65b2c46b0e\" failed" error="failed to destroy network for sandbox \"8b9b7e04ad58ffc6fb0e0ef161ea7cf912bb65a8b7e2d84210e35f65b2c46b0e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:24:04.617720 kubelet[2693]: E0517 00:24:04.617652 2693 log.go:32] 
"StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8b9b7e04ad58ffc6fb0e0ef161ea7cf912bb65a8b7e2d84210e35f65b2c46b0e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8b9b7e04ad58ffc6fb0e0ef161ea7cf912bb65a8b7e2d84210e35f65b2c46b0e" May 17 00:24:04.617808 kubelet[2693]: E0517 00:24:04.617748 2693 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3e68f8552d81667047e7740ed81b005eb562625e975c37459feb2cf593b58316\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3e68f8552d81667047e7740ed81b005eb562625e975c37459feb2cf593b58316" May 17 00:24:04.617834 kubelet[2693]: E0517 00:24:04.617771 2693 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3e68f8552d81667047e7740ed81b005eb562625e975c37459feb2cf593b58316"} May 17 00:24:04.617938 kubelet[2693]: E0517 00:24:04.617852 2693 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"79f08894-50f6-4f34-ab06-f713767f2567\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3e68f8552d81667047e7740ed81b005eb562625e975c37459feb2cf593b58316\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:24:04.617938 kubelet[2693]: E0517 00:24:04.617877 2693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"79f08894-50f6-4f34-ab06-f713767f2567\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3e68f8552d81667047e7740ed81b005eb562625e975c37459feb2cf593b58316\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-ht64s" podUID="79f08894-50f6-4f34-ab06-f713767f2567" May 17 00:24:04.617938 kubelet[2693]: E0517 00:24:04.617906 2693 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8b9b7e04ad58ffc6fb0e0ef161ea7cf912bb65a8b7e2d84210e35f65b2c46b0e"} May 17 00:24:04.617938 kubelet[2693]: E0517 00:24:04.617924 2693 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"141e29e6-7c60-4ef0-8843-86313045c72f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8b9b7e04ad58ffc6fb0e0ef161ea7cf912bb65a8b7e2d84210e35f65b2c46b0e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:24:04.618119 kubelet[2693]: E0517 00:24:04.617938 2693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"141e29e6-7c60-4ef0-8843-86313045c72f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"8b9b7e04ad58ffc6fb0e0ef161ea7cf912bb65a8b7e2d84210e35f65b2c46b0e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-8bdt9" podUID="141e29e6-7c60-4ef0-8843-86313045c72f" May 17 00:24:04.620548 containerd[1504]: time="2025-05-17T00:24:04.620501147Z" level=error msg="StopPodSandbox for \"c3d3ab3e66e1b19d5d0a6670d5349331d2218d6c3775034b1f23ff75e9eed5d8\" failed" error="failed to destroy network for sandbox \"c3d3ab3e66e1b19d5d0a6670d5349331d2218d6c3775034b1f23ff75e9eed5d8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:24:04.620788 kubelet[2693]: E0517 00:24:04.620689 2693 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c3d3ab3e66e1b19d5d0a6670d5349331d2218d6c3775034b1f23ff75e9eed5d8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c3d3ab3e66e1b19d5d0a6670d5349331d2218d6c3775034b1f23ff75e9eed5d8" May 17 00:24:04.620788 kubelet[2693]: E0517 00:24:04.620715 2693 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c3d3ab3e66e1b19d5d0a6670d5349331d2218d6c3775034b1f23ff75e9eed5d8"} May 17 00:24:04.620788 kubelet[2693]: E0517 00:24:04.620736 2693 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"30984a25-4953-4a27-9699-f4c7434a26ed\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c3d3ab3e66e1b19d5d0a6670d5349331d2218d6c3775034b1f23ff75e9eed5d8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:24:04.620788 kubelet[2693]: E0517 00:24:04.620765 2693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"30984a25-4953-4a27-9699-f4c7434a26ed\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c3d3ab3e66e1b19d5d0a6670d5349331d2218d6c3775034b1f23ff75e9eed5d8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-2n42x" podUID="30984a25-4953-4a27-9699-f4c7434a26ed" May 17 00:24:04.621122 containerd[1504]: time="2025-05-17T00:24:04.620987752Z" level=error msg="StopPodSandbox for \"97063cbd442df36c0cd1e6efe9a76cb57e78f8994f035651205385670295847b\" failed" error="failed to destroy network for sandbox \"97063cbd442df36c0cd1e6efe9a76cb57e78f8994f035651205385670295847b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:24:04.621291 kubelet[2693]: E0517 00:24:04.621208 2693 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"97063cbd442df36c0cd1e6efe9a76cb57e78f8994f035651205385670295847b\": 
plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="97063cbd442df36c0cd1e6efe9a76cb57e78f8994f035651205385670295847b" May 17 00:24:04.622355 kubelet[2693]: E0517 00:24:04.621232 2693 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"97063cbd442df36c0cd1e6efe9a76cb57e78f8994f035651205385670295847b"} May 17 00:24:04.622355 kubelet[2693]: E0517 00:24:04.622308 2693 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"740cc25f-7011-4caa-a72b-ea2dc86d85ee\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"97063cbd442df36c0cd1e6efe9a76cb57e78f8994f035651205385670295847b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:24:04.622355 kubelet[2693]: E0517 00:24:04.622326 2693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"740cc25f-7011-4caa-a72b-ea2dc86d85ee\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"97063cbd442df36c0cd1e6efe9a76cb57e78f8994f035651205385670295847b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-955f4745b-jt295" podUID="740cc25f-7011-4caa-a72b-ea2dc86d85ee" May 17 00:24:04.626956 containerd[1504]: time="2025-05-17T00:24:04.626561580Z" level=error msg="StopPodSandbox for \"dc50902ba7be5eadfbc70969cbb7076d80d327f57115b638c06941e26fa3a0ac\" failed" error="failed to destroy network for sandbox \"dc50902ba7be5eadfbc70969cbb7076d80d327f57115b638c06941e26fa3a0ac\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:24:04.626956 containerd[1504]: time="2025-05-17T00:24:04.626593099Z" level=error msg="StopPodSandbox for \"bfebe88ae68f70d0c1629289739522a76e4ce066bf44937c5f8b06fe77478e49\" failed" error="failed to destroy network for sandbox \"bfebe88ae68f70d0c1629289739522a76e4ce066bf44937c5f8b06fe77478e49\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:24:04.627037 kubelet[2693]: E0517 00:24:04.626704 2693 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"bfebe88ae68f70d0c1629289739522a76e4ce066bf44937c5f8b06fe77478e49\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="bfebe88ae68f70d0c1629289739522a76e4ce066bf44937c5f8b06fe77478e49" May 17 00:24:04.627037 kubelet[2693]: E0517 00:24:04.626740 2693 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"bfebe88ae68f70d0c1629289739522a76e4ce066bf44937c5f8b06fe77478e49"} May 17 00:24:04.627037 kubelet[2693]: E0517 00:24:04.626763 2693 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" 
err="failed to \"KillPodSandbox\" for \"0e2dc52f-271c-43c5-9af2-6be78554f3c4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bfebe88ae68f70d0c1629289739522a76e4ce066bf44937c5f8b06fe77478e49\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:24:04.627037 kubelet[2693]: E0517 00:24:04.626782 2693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0e2dc52f-271c-43c5-9af2-6be78554f3c4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bfebe88ae68f70d0c1629289739522a76e4ce066bf44937c5f8b06fe77478e49\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-555577f7d7-4kjkb" podUID="0e2dc52f-271c-43c5-9af2-6be78554f3c4" May 17 00:24:04.627166 kubelet[2693]: E0517 00:24:04.626803 2693 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"dc50902ba7be5eadfbc70969cbb7076d80d327f57115b638c06941e26fa3a0ac\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="dc50902ba7be5eadfbc70969cbb7076d80d327f57115b638c06941e26fa3a0ac" May 17 00:24:04.627166 kubelet[2693]: E0517 00:24:04.626817 2693 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"dc50902ba7be5eadfbc70969cbb7076d80d327f57115b638c06941e26fa3a0ac"} May 17 00:24:04.627166 kubelet[2693]: E0517 00:24:04.626832 2693 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1bde9b24-cd69-4946-af9c-950fec8a6c4b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dc50902ba7be5eadfbc70969cbb7076d80d327f57115b638c06941e26fa3a0ac\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:24:04.627166 kubelet[2693]: E0517 00:24:04.626846 2693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1bde9b24-cd69-4946-af9c-950fec8a6c4b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dc50902ba7be5eadfbc70969cbb7076d80d327f57115b638c06941e26fa3a0ac\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-78d55f7ddc-9tht4" podUID="1bde9b24-cd69-4946-af9c-950fec8a6c4b" May 17 00:24:04.628038 containerd[1504]: time="2025-05-17T00:24:04.628018331Z" level=error msg="StopPodSandbox for \"fa936c9511dea8f9a9bc6a8e691cdca43c7192f5314991837c033de699e371aa\" failed" error="failed to destroy network for sandbox \"fa936c9511dea8f9a9bc6a8e691cdca43c7192f5314991837c033de699e371aa\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 
00:24:04.628335 kubelet[2693]: E0517 00:24:04.628185 2693 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"fa936c9511dea8f9a9bc6a8e691cdca43c7192f5314991837c033de699e371aa\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="fa936c9511dea8f9a9bc6a8e691cdca43c7192f5314991837c033de699e371aa" May 17 00:24:04.628335 kubelet[2693]: E0517 00:24:04.628211 2693 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"fa936c9511dea8f9a9bc6a8e691cdca43c7192f5314991837c033de699e371aa"} May 17 00:24:04.628335 kubelet[2693]: E0517 00:24:04.628231 2693 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"114d0358-ddcf-4c04-bb03-89411102b031\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fa936c9511dea8f9a9bc6a8e691cdca43c7192f5314991837c033de699e371aa\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:24:04.628335 kubelet[2693]: E0517 00:24:04.628315 2693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"114d0358-ddcf-4c04-bb03-89411102b031\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fa936c9511dea8f9a9bc6a8e691cdca43c7192f5314991837c033de699e371aa\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-555577f7d7-qfnvl" podUID="114d0358-ddcf-4c04-bb03-89411102b031" May 17 00:24:04.630212 containerd[1504]: time="2025-05-17T00:24:04.630179540Z" level=error msg="StopPodSandbox for \"e7e838287fc450f89fda5980aae76692390f23cf4a4607a91c3c87d773b3b812\" failed" error="failed to destroy network for sandbox \"e7e838287fc450f89fda5980aae76692390f23cf4a4607a91c3c87d773b3b812\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:24:04.630357 kubelet[2693]: E0517 00:24:04.630326 2693 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e7e838287fc450f89fda5980aae76692390f23cf4a4607a91c3c87d773b3b812\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e7e838287fc450f89fda5980aae76692390f23cf4a4607a91c3c87d773b3b812" May 17 00:24:04.630400 kubelet[2693]: E0517 00:24:04.630356 2693 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e7e838287fc450f89fda5980aae76692390f23cf4a4607a91c3c87d773b3b812"} May 17 00:24:04.630400 kubelet[2693]: E0517 00:24:04.630379 2693 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"428ec0d8-1aeb-46da-b77e-f1dfa05702b0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"e7e838287fc450f89fda5980aae76692390f23cf4a4607a91c3c87d773b3b812\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:24:04.630468 kubelet[2693]: E0517 00:24:04.630396 2693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"428ec0d8-1aeb-46da-b77e-f1dfa05702b0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e7e838287fc450f89fda5980aae76692390f23cf4a4607a91c3c87d773b3b812\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-8658f94dbd-xvh74" podUID="428ec0d8-1aeb-46da-b77e-f1dfa05702b0" May 17 00:24:04.664882 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c3d3ab3e66e1b19d5d0a6670d5349331d2218d6c3775034b1f23ff75e9eed5d8-shm.mount: Deactivated successfully. May 17 00:24:04.664969 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e7e838287fc450f89fda5980aae76692390f23cf4a4607a91c3c87d773b3b812-shm.mount: Deactivated successfully. May 17 00:24:04.665018 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-dc50902ba7be5eadfbc70969cbb7076d80d327f57115b638c06941e26fa3a0ac-shm.mount: Deactivated successfully. May 17 00:24:04.665083 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-97063cbd442df36c0cd1e6efe9a76cb57e78f8994f035651205385670295847b-shm.mount: Deactivated successfully. May 17 00:24:04.665177 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bfebe88ae68f70d0c1629289739522a76e4ce066bf44937c5f8b06fe77478e49-shm.mount: Deactivated successfully. May 17 00:24:11.038268 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4051095578.mount: Deactivated successfully. 
May 17 00:24:11.111263 containerd[1504]: time="2025-05-17T00:24:11.107316621Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:24:11.111263 containerd[1504]: time="2025-05-17T00:24:11.107263611Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.0\" with image id \"sha256:d12dae9bc0999225efe30fd5618bcf2195709d54ed2840234f5006aab5f7d721\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:7cb61ea47ca0a8e6d0526a42da4f1e399b37ccd13339d0776d272465cb7ee012\", size \"156396234\" in 7.629751938s" May 17 00:24:11.111263 containerd[1504]: time="2025-05-17T00:24:11.110499328Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.0\" returns image reference \"sha256:d12dae9bc0999225efe30fd5618bcf2195709d54ed2840234f5006aab5f7d721\"" May 17 00:24:11.111263 containerd[1504]: time="2025-05-17T00:24:11.110596038Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.0: active requests=0, bytes read=156396372" May 17 00:24:11.138877 containerd[1504]: time="2025-05-17T00:24:11.138807341Z" level=info msg="ImageCreate event name:\"sha256:d12dae9bc0999225efe30fd5618bcf2195709d54ed2840234f5006aab5f7d721\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:24:11.139493 containerd[1504]: time="2025-05-17T00:24:11.139463553Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:7cb61ea47ca0a8e6d0526a42da4f1e399b37ccd13339d0776d272465cb7ee012\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:24:11.185092 containerd[1504]: time="2025-05-17T00:24:11.185040498Z" level=info msg="CreateContainer within sandbox \"2186c3b0d493fdcb833e738efe867bab62816ecb2773d0523b700c8601122b31\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 17 00:24:11.298696 containerd[1504]: time="2025-05-17T00:24:11.298579970Z" level=info msg="CreateContainer within sandbox \"2186c3b0d493fdcb833e738efe867bab62816ecb2773d0523b700c8601122b31\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"d366aad508f442c8e3e79b8161aface43f02c9bafe41c3c1b5d851afec0f2770\"" May 17 00:24:11.304018 containerd[1504]: time="2025-05-17T00:24:11.303867788Z" level=info msg="StartContainer for \"d366aad508f442c8e3e79b8161aface43f02c9bafe41c3c1b5d851afec0f2770\"" May 17 00:24:11.476385 systemd[1]: Started cri-containerd-d366aad508f442c8e3e79b8161aface43f02c9bafe41c3c1b5d851afec0f2770.scope - libcontainer container d366aad508f442c8e3e79b8161aface43f02c9bafe41c3c1b5d851afec0f2770. May 17 00:24:11.514068 containerd[1504]: time="2025-05-17T00:24:11.514000009Z" level=info msg="StartContainer for \"d366aad508f442c8e3e79b8161aface43f02c9bafe41c3c1b5d851afec0f2770\" returns successfully" May 17 00:24:11.627053 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 17 00:24:11.630099 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
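
The deadlock breaks here: the calico/node image finishes pulling and the container starts. The pull entry reports size "156396234" bytes in 7.629751938s, roughly 20.5 MB/s (19.5 MiB/s); the WireGuard module load right after is most likely calico-node probing the kernel for WireGuard support (it is only used when encryption is enabled). A quick stand-alone Go check of that transfer rate:

package main

import "fmt"

func main() {
	// Figures taken from the "Pulled image" entry above.
	const bytes = 156396234.0
	const seconds = 7.629751938
	bps := bytes / seconds
	fmt.Printf("%.1f MB/s (%.1f MiB/s)\n", bps/1e6, bps/(1<<20)) // 20.5 MB/s (19.5 MiB/s)
}
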
May 17 00:24:11.665183 kubelet[2693]: I0517 00:24:11.656822 2693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-shnsn" podStartSLOduration=0.993079529 podStartE2EDuration="18.644967009s" podCreationTimestamp="2025-05-17 00:23:53 +0000 UTC" firstStartedPulling="2025-05-17 00:23:53.487178222 +0000 UTC m=+19.220730105" lastFinishedPulling="2025-05-17 00:24:11.139065702 +0000 UTC m=+36.872617585" observedRunningTime="2025-05-17 00:24:11.64441991 +0000 UTC m=+37.377971793" watchObservedRunningTime="2025-05-17 00:24:11.644967009 +0000 UTC m=+37.378518892" May 17 00:24:11.787818 containerd[1504]: time="2025-05-17T00:24:11.787782857Z" level=info msg="StopPodSandbox for \"97063cbd442df36c0cd1e6efe9a76cb57e78f8994f035651205385670295847b\"" May 17 00:24:12.047315 containerd[1504]: 2025-05-17 00:24:11.862 [INFO][3932] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="97063cbd442df36c0cd1e6efe9a76cb57e78f8994f035651205385670295847b" May 17 00:24:12.047315 containerd[1504]: 2025-05-17 00:24:11.863 [INFO][3932] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="97063cbd442df36c0cd1e6efe9a76cb57e78f8994f035651205385670295847b" iface="eth0" netns="/var/run/netns/cni-cb042265-7a10-9569-df45-95587c6051f6" May 17 00:24:12.047315 containerd[1504]: 2025-05-17 00:24:11.863 [INFO][3932] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="97063cbd442df36c0cd1e6efe9a76cb57e78f8994f035651205385670295847b" iface="eth0" netns="/var/run/netns/cni-cb042265-7a10-9569-df45-95587c6051f6" May 17 00:24:12.047315 containerd[1504]: 2025-05-17 00:24:11.864 [INFO][3932] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="97063cbd442df36c0cd1e6efe9a76cb57e78f8994f035651205385670295847b" iface="eth0" netns="/var/run/netns/cni-cb042265-7a10-9569-df45-95587c6051f6" May 17 00:24:12.047315 containerd[1504]: 2025-05-17 00:24:11.864 [INFO][3932] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="97063cbd442df36c0cd1e6efe9a76cb57e78f8994f035651205385670295847b" May 17 00:24:12.047315 containerd[1504]: 2025-05-17 00:24:11.864 [INFO][3932] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="97063cbd442df36c0cd1e6efe9a76cb57e78f8994f035651205385670295847b" May 17 00:24:12.047315 containerd[1504]: 2025-05-17 00:24:12.024 [INFO][3939] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="97063cbd442df36c0cd1e6efe9a76cb57e78f8994f035651205385670295847b" HandleID="k8s-pod-network.97063cbd442df36c0cd1e6efe9a76cb57e78f8994f035651205385670295847b" Workload="ci--4081--3--3--n--82e895e080-k8s-whisker--955f4745b--jt295-eth0" May 17 00:24:12.047315 containerd[1504]: 2025-05-17 00:24:12.028 [INFO][3939] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:24:12.047315 containerd[1504]: 2025-05-17 00:24:12.029 [INFO][3939] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:24:12.047315 containerd[1504]: 2025-05-17 00:24:12.041 [WARNING][3939] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="97063cbd442df36c0cd1e6efe9a76cb57e78f8994f035651205385670295847b" HandleID="k8s-pod-network.97063cbd442df36c0cd1e6efe9a76cb57e78f8994f035651205385670295847b" Workload="ci--4081--3--3--n--82e895e080-k8s-whisker--955f4745b--jt295-eth0" May 17 00:24:12.047315 containerd[1504]: 2025-05-17 00:24:12.041 [INFO][3939] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="97063cbd442df36c0cd1e6efe9a76cb57e78f8994f035651205385670295847b" HandleID="k8s-pod-network.97063cbd442df36c0cd1e6efe9a76cb57e78f8994f035651205385670295847b" Workload="ci--4081--3--3--n--82e895e080-k8s-whisker--955f4745b--jt295-eth0" May 17 00:24:12.047315 containerd[1504]: 2025-05-17 00:24:12.042 [INFO][3939] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:24:12.047315 containerd[1504]: 2025-05-17 00:24:12.045 [INFO][3932] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="97063cbd442df36c0cd1e6efe9a76cb57e78f8994f035651205385670295847b" May 17 00:24:12.051415 systemd[1]: run-netns-cni\x2dcb042265\x2d7a10\x2d9569\x2ddf45\x2d95587c6051f6.mount: Deactivated successfully. May 17 00:24:12.055439 containerd[1504]: time="2025-05-17T00:24:12.055410045Z" level=info msg="TearDown network for sandbox \"97063cbd442df36c0cd1e6efe9a76cb57e78f8994f035651205385670295847b\" successfully" May 17 00:24:12.055503 containerd[1504]: time="2025-05-17T00:24:12.055491387Z" level=info msg="StopPodSandbox for \"97063cbd442df36c0cd1e6efe9a76cb57e78f8994f035651205385670295847b\" returns successfully" May 17 00:24:12.183178 kubelet[2693]: I0517 00:24:12.182832 2693 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/740cc25f-7011-4caa-a72b-ea2dc86d85ee-whisker-ca-bundle\") pod \"740cc25f-7011-4caa-a72b-ea2dc86d85ee\" (UID: \"740cc25f-7011-4caa-a72b-ea2dc86d85ee\") " May 17 00:24:12.183178 kubelet[2693]: I0517 00:24:12.182928 2693 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-djlth\" (UniqueName: \"kubernetes.io/projected/740cc25f-7011-4caa-a72b-ea2dc86d85ee-kube-api-access-djlth\") pod \"740cc25f-7011-4caa-a72b-ea2dc86d85ee\" (UID: \"740cc25f-7011-4caa-a72b-ea2dc86d85ee\") " May 17 00:24:12.183178 kubelet[2693]: I0517 00:24:12.182957 2693 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/740cc25f-7011-4caa-a72b-ea2dc86d85ee-whisker-backend-key-pair\") pod \"740cc25f-7011-4caa-a72b-ea2dc86d85ee\" (UID: \"740cc25f-7011-4caa-a72b-ea2dc86d85ee\") " May 17 00:24:12.194961 kubelet[2693]: I0517 00:24:12.193798 2693 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/740cc25f-7011-4caa-a72b-ea2dc86d85ee-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "740cc25f-7011-4caa-a72b-ea2dc86d85ee" (UID: "740cc25f-7011-4caa-a72b-ea2dc86d85ee"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 17 00:24:12.202331 kubelet[2693]: I0517 00:24:12.202023 2693 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/740cc25f-7011-4caa-a72b-ea2dc86d85ee-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "740cc25f-7011-4caa-a72b-ea2dc86d85ee" (UID: "740cc25f-7011-4caa-a72b-ea2dc86d85ee"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" May 17 00:24:12.202536 kubelet[2693]: I0517 00:24:12.202494 2693 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/740cc25f-7011-4caa-a72b-ea2dc86d85ee-kube-api-access-djlth" (OuterVolumeSpecName: "kube-api-access-djlth") pod "740cc25f-7011-4caa-a72b-ea2dc86d85ee" (UID: "740cc25f-7011-4caa-a72b-ea2dc86d85ee"). InnerVolumeSpecName "kube-api-access-djlth". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 17 00:24:12.204355 systemd[1]: var-lib-kubelet-pods-740cc25f\x2d7011\x2d4caa\x2da72b\x2dea2dc86d85ee-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. May 17 00:24:12.207369 systemd[1]: var-lib-kubelet-pods-740cc25f\x2d7011\x2d4caa\x2da72b\x2dea2dc86d85ee-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddjlth.mount: Deactivated successfully. May 17 00:24:12.283631 kubelet[2693]: I0517 00:24:12.283574 2693 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-djlth\" (UniqueName: \"kubernetes.io/projected/740cc25f-7011-4caa-a72b-ea2dc86d85ee-kube-api-access-djlth\") on node \"ci-4081-3-3-n-82e895e080\" DevicePath \"\"" May 17 00:24:12.283631 kubelet[2693]: I0517 00:24:12.283631 2693 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/740cc25f-7011-4caa-a72b-ea2dc86d85ee-whisker-backend-key-pair\") on node \"ci-4081-3-3-n-82e895e080\" DevicePath \"\"" May 17 00:24:12.283825 kubelet[2693]: I0517 00:24:12.283653 2693 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/740cc25f-7011-4caa-a72b-ea2dc86d85ee-whisker-ca-bundle\") on node \"ci-4081-3-3-n-82e895e080\" DevicePath \"\"" May 17 00:24:12.390671 systemd[1]: Removed slice kubepods-besteffort-pod740cc25f_7011_4caa_a72b_ea2dc86d85ee.slice - libcontainer container kubepods-besteffort-pod740cc25f_7011_4caa_a72b_ea2dc86d85ee.slice. May 17 00:24:12.602380 kubelet[2693]: I0517 00:24:12.601905 2693 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 00:24:12.719089 systemd[1]: Created slice kubepods-besteffort-pod067226bb_cfc6_4f82_99de_aac7391d466d.slice - libcontainer container kubepods-besteffort-pod067226bb_cfc6_4f82_99de_aac7391d466d.slice. 
May 17 00:24:12.786539 kubelet[2693]: I0517 00:24:12.786468 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/067226bb-cfc6-4f82-99de-aac7391d466d-whisker-ca-bundle\") pod \"whisker-f599d797d-pw8hv\" (UID: \"067226bb-cfc6-4f82-99de-aac7391d466d\") " pod="calico-system/whisker-f599d797d-pw8hv" May 17 00:24:12.786539 kubelet[2693]: I0517 00:24:12.786526 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6htsc\" (UniqueName: \"kubernetes.io/projected/067226bb-cfc6-4f82-99de-aac7391d466d-kube-api-access-6htsc\") pod \"whisker-f599d797d-pw8hv\" (UID: \"067226bb-cfc6-4f82-99de-aac7391d466d\") " pod="calico-system/whisker-f599d797d-pw8hv" May 17 00:24:12.786539 kubelet[2693]: I0517 00:24:12.786549 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/067226bb-cfc6-4f82-99de-aac7391d466d-whisker-backend-key-pair\") pod \"whisker-f599d797d-pw8hv\" (UID: \"067226bb-cfc6-4f82-99de-aac7391d466d\") " pod="calico-system/whisker-f599d797d-pw8hv" May 17 00:24:13.023845 containerd[1504]: time="2025-05-17T00:24:13.023462443Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-f599d797d-pw8hv,Uid:067226bb-cfc6-4f82-99de-aac7391d466d,Namespace:calico-system,Attempt:0,}" May 17 00:24:13.246657 systemd-networkd[1393]: calib0415cf304f: Link UP May 17 00:24:13.246877 systemd-networkd[1393]: calib0415cf304f: Gained carrier May 17 00:24:13.256114 containerd[1504]: 2025-05-17 00:24:13.098 [INFO][3964] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 17 00:24:13.256114 containerd[1504]: 2025-05-17 00:24:13.117 [INFO][3964] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--n--82e895e080-k8s-whisker--f599d797d--pw8hv-eth0 whisker-f599d797d- calico-system 067226bb-cfc6-4f82-99de-aac7391d466d 872 0 2025-05-17 00:24:12 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:f599d797d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081-3-3-n-82e895e080 whisker-f599d797d-pw8hv eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calib0415cf304f [] [] }} ContainerID="fb0459ae183f57c6114ba0f117b55e9b1732463a8c286b46750fadaf48a73bdb" Namespace="calico-system" Pod="whisker-f599d797d-pw8hv" WorkloadEndpoint="ci--4081--3--3--n--82e895e080-k8s-whisker--f599d797d--pw8hv-" May 17 00:24:13.256114 containerd[1504]: 2025-05-17 00:24:13.117 [INFO][3964] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fb0459ae183f57c6114ba0f117b55e9b1732463a8c286b46750fadaf48a73bdb" Namespace="calico-system" Pod="whisker-f599d797d-pw8hv" WorkloadEndpoint="ci--4081--3--3--n--82e895e080-k8s-whisker--f599d797d--pw8hv-eth0" May 17 00:24:13.256114 containerd[1504]: 2025-05-17 00:24:13.169 [INFO][4014] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fb0459ae183f57c6114ba0f117b55e9b1732463a8c286b46750fadaf48a73bdb" HandleID="k8s-pod-network.fb0459ae183f57c6114ba0f117b55e9b1732463a8c286b46750fadaf48a73bdb" Workload="ci--4081--3--3--n--82e895e080-k8s-whisker--f599d797d--pw8hv-eth0" May 17 00:24:13.256114 containerd[1504]: 2025-05-17 00:24:13.170 [INFO][4014] ipam/ipam_plugin.go 265: Auto 
assigning IP ContainerID="fb0459ae183f57c6114ba0f117b55e9b1732463a8c286b46750fadaf48a73bdb" HandleID="k8s-pod-network.fb0459ae183f57c6114ba0f117b55e9b1732463a8c286b46750fadaf48a73bdb" Workload="ci--4081--3--3--n--82e895e080-k8s-whisker--f599d797d--pw8hv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00032a7b0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-3-n-82e895e080", "pod":"whisker-f599d797d-pw8hv", "timestamp":"2025-05-17 00:24:13.169675269 +0000 UTC"}, Hostname:"ci-4081-3-3-n-82e895e080", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:24:13.256114 containerd[1504]: 2025-05-17 00:24:13.170 [INFO][4014] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:24:13.256114 containerd[1504]: 2025-05-17 00:24:13.171 [INFO][4014] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:24:13.256114 containerd[1504]: 2025-05-17 00:24:13.171 [INFO][4014] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-n-82e895e080' May 17 00:24:13.256114 containerd[1504]: 2025-05-17 00:24:13.181 [INFO][4014] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fb0459ae183f57c6114ba0f117b55e9b1732463a8c286b46750fadaf48a73bdb" host="ci-4081-3-3-n-82e895e080" May 17 00:24:13.256114 containerd[1504]: 2025-05-17 00:24:13.193 [INFO][4014] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-3-n-82e895e080" May 17 00:24:13.256114 containerd[1504]: 2025-05-17 00:24:13.199 [INFO][4014] ipam/ipam.go 511: Trying affinity for 192.168.0.0/26 host="ci-4081-3-3-n-82e895e080" May 17 00:24:13.256114 containerd[1504]: 2025-05-17 00:24:13.201 [INFO][4014] ipam/ipam.go 158: Attempting to load block cidr=192.168.0.0/26 host="ci-4081-3-3-n-82e895e080" May 17 00:24:13.256114 containerd[1504]: 2025-05-17 00:24:13.204 [INFO][4014] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.0.0/26 host="ci-4081-3-3-n-82e895e080" May 17 00:24:13.256114 containerd[1504]: 2025-05-17 00:24:13.204 [INFO][4014] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.0.0/26 handle="k8s-pod-network.fb0459ae183f57c6114ba0f117b55e9b1732463a8c286b46750fadaf48a73bdb" host="ci-4081-3-3-n-82e895e080" May 17 00:24:13.256114 containerd[1504]: 2025-05-17 00:24:13.206 [INFO][4014] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.fb0459ae183f57c6114ba0f117b55e9b1732463a8c286b46750fadaf48a73bdb May 17 00:24:13.256114 containerd[1504]: 2025-05-17 00:24:13.211 [INFO][4014] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.0.0/26 handle="k8s-pod-network.fb0459ae183f57c6114ba0f117b55e9b1732463a8c286b46750fadaf48a73bdb" host="ci-4081-3-3-n-82e895e080" May 17 00:24:13.256114 containerd[1504]: 2025-05-17 00:24:13.217 [INFO][4014] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.0.1/26] block=192.168.0.0/26 handle="k8s-pod-network.fb0459ae183f57c6114ba0f117b55e9b1732463a8c286b46750fadaf48a73bdb" host="ci-4081-3-3-n-82e895e080" May 17 00:24:13.256114 containerd[1504]: 2025-05-17 00:24:13.217 [INFO][4014] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.0.1/26] handle="k8s-pod-network.fb0459ae183f57c6114ba0f117b55e9b1732463a8c286b46750fadaf48a73bdb" host="ci-4081-3-3-n-82e895e080" May 17 00:24:13.256114 containerd[1504]: 2025-05-17 00:24:13.217 [INFO][4014] 
ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:24:13.256114 containerd[1504]: 2025-05-17 00:24:13.217 [INFO][4014] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.0.1/26] IPv6=[] ContainerID="fb0459ae183f57c6114ba0f117b55e9b1732463a8c286b46750fadaf48a73bdb" HandleID="k8s-pod-network.fb0459ae183f57c6114ba0f117b55e9b1732463a8c286b46750fadaf48a73bdb" Workload="ci--4081--3--3--n--82e895e080-k8s-whisker--f599d797d--pw8hv-eth0" May 17 00:24:13.258880 containerd[1504]: 2025-05-17 00:24:13.223 [INFO][3964] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fb0459ae183f57c6114ba0f117b55e9b1732463a8c286b46750fadaf48a73bdb" Namespace="calico-system" Pod="whisker-f599d797d-pw8hv" WorkloadEndpoint="ci--4081--3--3--n--82e895e080-k8s-whisker--f599d797d--pw8hv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--82e895e080-k8s-whisker--f599d797d--pw8hv-eth0", GenerateName:"whisker-f599d797d-", Namespace:"calico-system", SelfLink:"", UID:"067226bb-cfc6-4f82-99de-aac7391d466d", ResourceVersion:"872", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 24, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"f599d797d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-82e895e080", ContainerID:"", Pod:"whisker-f599d797d-pw8hv", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.0.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calib0415cf304f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:24:13.258880 containerd[1504]: 2025-05-17 00:24:13.224 [INFO][3964] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.0.1/32] ContainerID="fb0459ae183f57c6114ba0f117b55e9b1732463a8c286b46750fadaf48a73bdb" Namespace="calico-system" Pod="whisker-f599d797d-pw8hv" WorkloadEndpoint="ci--4081--3--3--n--82e895e080-k8s-whisker--f599d797d--pw8hv-eth0" May 17 00:24:13.258880 containerd[1504]: 2025-05-17 00:24:13.224 [INFO][3964] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib0415cf304f ContainerID="fb0459ae183f57c6114ba0f117b55e9b1732463a8c286b46750fadaf48a73bdb" Namespace="calico-system" Pod="whisker-f599d797d-pw8hv" WorkloadEndpoint="ci--4081--3--3--n--82e895e080-k8s-whisker--f599d797d--pw8hv-eth0" May 17 00:24:13.258880 containerd[1504]: 2025-05-17 00:24:13.234 [INFO][3964] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fb0459ae183f57c6114ba0f117b55e9b1732463a8c286b46750fadaf48a73bdb" Namespace="calico-system" Pod="whisker-f599d797d-pw8hv" WorkloadEndpoint="ci--4081--3--3--n--82e895e080-k8s-whisker--f599d797d--pw8hv-eth0" May 17 00:24:13.258880 containerd[1504]: 2025-05-17 00:24:13.235 [INFO][3964] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="fb0459ae183f57c6114ba0f117b55e9b1732463a8c286b46750fadaf48a73bdb" Namespace="calico-system" Pod="whisker-f599d797d-pw8hv" WorkloadEndpoint="ci--4081--3--3--n--82e895e080-k8s-whisker--f599d797d--pw8hv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--82e895e080-k8s-whisker--f599d797d--pw8hv-eth0", GenerateName:"whisker-f599d797d-", Namespace:"calico-system", SelfLink:"", UID:"067226bb-cfc6-4f82-99de-aac7391d466d", ResourceVersion:"872", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 24, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"f599d797d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-82e895e080", ContainerID:"fb0459ae183f57c6114ba0f117b55e9b1732463a8c286b46750fadaf48a73bdb", Pod:"whisker-f599d797d-pw8hv", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.0.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calib0415cf304f", MAC:"de:21:77:62:ef:44", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:24:13.258880 containerd[1504]: 2025-05-17 00:24:13.245 [INFO][3964] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fb0459ae183f57c6114ba0f117b55e9b1732463a8c286b46750fadaf48a73bdb" Namespace="calico-system" Pod="whisker-f599d797d-pw8hv" WorkloadEndpoint="ci--4081--3--3--n--82e895e080-k8s-whisker--f599d797d--pw8hv-eth0" May 17 00:24:13.300935 containerd[1504]: time="2025-05-17T00:24:13.300566410Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:24:13.300935 containerd[1504]: time="2025-05-17T00:24:13.300830061Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:24:13.301452 containerd[1504]: time="2025-05-17T00:24:13.301165467Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:24:13.301657 containerd[1504]: time="2025-05-17T00:24:13.301438544Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:24:13.352924 systemd[1]: Started cri-containerd-fb0459ae183f57c6114ba0f117b55e9b1732463a8c286b46750fadaf48a73bdb.scope - libcontainer container fb0459ae183f57c6114ba0f117b55e9b1732463a8c286b46750fadaf48a73bdb. 
May 17 00:24:13.427559 containerd[1504]: time="2025-05-17T00:24:13.427508594Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-f599d797d-pw8hv,Uid:067226bb-cfc6-4f82-99de-aac7391d466d,Namespace:calico-system,Attempt:0,} returns sandbox id \"fb0459ae183f57c6114ba0f117b55e9b1732463a8c286b46750fadaf48a73bdb\"" May 17 00:24:13.430922 containerd[1504]: time="2025-05-17T00:24:13.430887720Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 17 00:24:13.574294 kernel: bpftool[4144]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set May 17 00:24:13.734728 containerd[1504]: time="2025-05-17T00:24:13.734295801Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:24:13.736836 containerd[1504]: time="2025-05-17T00:24:13.736285968Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:24:13.736836 containerd[1504]: time="2025-05-17T00:24:13.736781863Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 17 00:24:13.741821 kubelet[2693]: E0517 00:24:13.736690 2693 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:24:13.742433 kubelet[2693]: E0517 00:24:13.742358 2693 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:24:13.751963 kubelet[2693]: E0517 00:24:13.751897 2693 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:f857f946af1e45ef9789d38b050b55ff,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6htsc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-f599d797d-pw8hv_calico-system(067226bb-cfc6-4f82-99de-aac7391d466d): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:24:13.756084 containerd[1504]: time="2025-05-17T00:24:13.756052974Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 17 00:24:13.809065 systemd-networkd[1393]: vxlan.calico: Link UP May 17 00:24:13.809073 systemd-networkd[1393]: vxlan.calico: Gained carrier May 17 00:24:14.083084 containerd[1504]: time="2025-05-17T00:24:14.082799737Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:24:14.085266 containerd[1504]: time="2025-05-17T00:24:14.085082480Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:24:14.085266 containerd[1504]: time="2025-05-17T00:24:14.085179651Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 17 00:24:14.085456 kubelet[2693]: E0517 00:24:14.085421 2693 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": 
failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:24:14.085741 kubelet[2693]: E0517 00:24:14.085465 2693 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:24:14.085813 kubelet[2693]: E0517 00:24:14.085567 2693 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6htsc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-f599d797d-pw8hv_calico-system(067226bb-cfc6-4f82-99de-aac7391d466d): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:24:14.087093 kubelet[2693]: E0517 00:24:14.087050 2693 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-f599d797d-pw8hv" podUID="067226bb-cfc6-4f82-99de-aac7391d466d" May 17 00:24:14.378268 kubelet[2693]: I0517 00:24:14.377747 2693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="740cc25f-7011-4caa-a72b-ea2dc86d85ee" path="/var/lib/kubelet/pods/740cc25f-7011-4caa-a72b-ea2dc86d85ee/volumes" May 17 00:24:14.471633 systemd-networkd[1393]: calib0415cf304f: Gained IPv6LL May 17 00:24:14.603693 kubelet[2693]: E0517 00:24:14.603629 2693 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-f599d797d-pw8hv" podUID="067226bb-cfc6-4f82-99de-aac7391d466d" May 17 00:24:15.375303 containerd[1504]: time="2025-05-17T00:24:15.375095008Z" level=info msg="StopPodSandbox for \"dc50902ba7be5eadfbc70969cbb7076d80d327f57115b638c06941e26fa3a0ac\"" May 17 00:24:15.376155 containerd[1504]: time="2025-05-17T00:24:15.375831601Z" level=info msg="StopPodSandbox for \"3e68f8552d81667047e7740ed81b005eb562625e975c37459feb2cf593b58316\"" May 17 00:24:15.378565 containerd[1504]: time="2025-05-17T00:24:15.378449459Z" level=info msg="StopPodSandbox for \"e7e838287fc450f89fda5980aae76692390f23cf4a4607a91c3c87d773b3b812\"" May 17 00:24:15.500731 containerd[1504]: 2025-05-17 00:24:15.448 [INFO][4243] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3e68f8552d81667047e7740ed81b005eb562625e975c37459feb2cf593b58316" May 17 00:24:15.500731 containerd[1504]: 2025-05-17 00:24:15.448 [INFO][4243] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="3e68f8552d81667047e7740ed81b005eb562625e975c37459feb2cf593b58316" iface="eth0" netns="/var/run/netns/cni-192c8f43-adc7-6cd0-fb4e-fe3770c53b44" May 17 00:24:15.500731 containerd[1504]: 2025-05-17 00:24:15.448 [INFO][4243] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3e68f8552d81667047e7740ed81b005eb562625e975c37459feb2cf593b58316" iface="eth0" netns="/var/run/netns/cni-192c8f43-adc7-6cd0-fb4e-fe3770c53b44" May 17 00:24:15.500731 containerd[1504]: 2025-05-17 00:24:15.449 [INFO][4243] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="3e68f8552d81667047e7740ed81b005eb562625e975c37459feb2cf593b58316" iface="eth0" netns="/var/run/netns/cni-192c8f43-adc7-6cd0-fb4e-fe3770c53b44" May 17 00:24:15.500731 containerd[1504]: 2025-05-17 00:24:15.449 [INFO][4243] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3e68f8552d81667047e7740ed81b005eb562625e975c37459feb2cf593b58316" May 17 00:24:15.500731 containerd[1504]: 2025-05-17 00:24:15.449 [INFO][4243] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3e68f8552d81667047e7740ed81b005eb562625e975c37459feb2cf593b58316" May 17 00:24:15.500731 containerd[1504]: 2025-05-17 00:24:15.479 [INFO][4276] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3e68f8552d81667047e7740ed81b005eb562625e975c37459feb2cf593b58316" HandleID="k8s-pod-network.3e68f8552d81667047e7740ed81b005eb562625e975c37459feb2cf593b58316" Workload="ci--4081--3--3--n--82e895e080-k8s-csi--node--driver--ht64s-eth0" May 17 00:24:15.500731 containerd[1504]: 2025-05-17 00:24:15.479 [INFO][4276] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:24:15.500731 containerd[1504]: 2025-05-17 00:24:15.480 [INFO][4276] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:24:15.500731 containerd[1504]: 2025-05-17 00:24:15.488 [WARNING][4276] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3e68f8552d81667047e7740ed81b005eb562625e975c37459feb2cf593b58316" HandleID="k8s-pod-network.3e68f8552d81667047e7740ed81b005eb562625e975c37459feb2cf593b58316" Workload="ci--4081--3--3--n--82e895e080-k8s-csi--node--driver--ht64s-eth0" May 17 00:24:15.500731 containerd[1504]: 2025-05-17 00:24:15.488 [INFO][4276] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3e68f8552d81667047e7740ed81b005eb562625e975c37459feb2cf593b58316" HandleID="k8s-pod-network.3e68f8552d81667047e7740ed81b005eb562625e975c37459feb2cf593b58316" Workload="ci--4081--3--3--n--82e895e080-k8s-csi--node--driver--ht64s-eth0" May 17 00:24:15.500731 containerd[1504]: 2025-05-17 00:24:15.489 [INFO][4276] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:24:15.500731 containerd[1504]: 2025-05-17 00:24:15.491 [INFO][4243] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3e68f8552d81667047e7740ed81b005eb562625e975c37459feb2cf593b58316" May 17 00:24:15.504181 containerd[1504]: time="2025-05-17T00:24:15.503306974Z" level=info msg="TearDown network for sandbox \"3e68f8552d81667047e7740ed81b005eb562625e975c37459feb2cf593b58316\" successfully" May 17 00:24:15.504181 containerd[1504]: time="2025-05-17T00:24:15.503347320Z" level=info msg="StopPodSandbox for \"3e68f8552d81667047e7740ed81b005eb562625e975c37459feb2cf593b58316\" returns successfully" May 17 00:24:15.504851 systemd[1]: run-netns-cni\x2d192c8f43\x2dadc7\x2d6cd0\x2dfb4e\x2dfe3770c53b44.mount: Deactivated successfully. 
May 17 00:24:15.507206 containerd[1504]: time="2025-05-17T00:24:15.506850197Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ht64s,Uid:79f08894-50f6-4f34-ab06-f713767f2567,Namespace:calico-system,Attempt:1,}" May 17 00:24:15.510700 containerd[1504]: 2025-05-17 00:24:15.441 [INFO][4256] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="dc50902ba7be5eadfbc70969cbb7076d80d327f57115b638c06941e26fa3a0ac" May 17 00:24:15.510700 containerd[1504]: 2025-05-17 00:24:15.441 [INFO][4256] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="dc50902ba7be5eadfbc70969cbb7076d80d327f57115b638c06941e26fa3a0ac" iface="eth0" netns="/var/run/netns/cni-d7ef1e2c-be37-89a6-4257-d9c8eae0258d" May 17 00:24:15.510700 containerd[1504]: 2025-05-17 00:24:15.441 [INFO][4256] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="dc50902ba7be5eadfbc70969cbb7076d80d327f57115b638c06941e26fa3a0ac" iface="eth0" netns="/var/run/netns/cni-d7ef1e2c-be37-89a6-4257-d9c8eae0258d" May 17 00:24:15.510700 containerd[1504]: 2025-05-17 00:24:15.442 [INFO][4256] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="dc50902ba7be5eadfbc70969cbb7076d80d327f57115b638c06941e26fa3a0ac" iface="eth0" netns="/var/run/netns/cni-d7ef1e2c-be37-89a6-4257-d9c8eae0258d" May 17 00:24:15.510700 containerd[1504]: 2025-05-17 00:24:15.442 [INFO][4256] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="dc50902ba7be5eadfbc70969cbb7076d80d327f57115b638c06941e26fa3a0ac" May 17 00:24:15.510700 containerd[1504]: 2025-05-17 00:24:15.442 [INFO][4256] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dc50902ba7be5eadfbc70969cbb7076d80d327f57115b638c06941e26fa3a0ac" May 17 00:24:15.510700 containerd[1504]: 2025-05-17 00:24:15.489 [INFO][4271] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dc50902ba7be5eadfbc70969cbb7076d80d327f57115b638c06941e26fa3a0ac" HandleID="k8s-pod-network.dc50902ba7be5eadfbc70969cbb7076d80d327f57115b638c06941e26fa3a0ac" Workload="ci--4081--3--3--n--82e895e080-k8s-goldmane--78d55f7ddc--9tht4-eth0" May 17 00:24:15.510700 containerd[1504]: 2025-05-17 00:24:15.489 [INFO][4271] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:24:15.510700 containerd[1504]: 2025-05-17 00:24:15.490 [INFO][4271] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:24:15.510700 containerd[1504]: 2025-05-17 00:24:15.499 [WARNING][4271] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="dc50902ba7be5eadfbc70969cbb7076d80d327f57115b638c06941e26fa3a0ac" HandleID="k8s-pod-network.dc50902ba7be5eadfbc70969cbb7076d80d327f57115b638c06941e26fa3a0ac" Workload="ci--4081--3--3--n--82e895e080-k8s-goldmane--78d55f7ddc--9tht4-eth0" May 17 00:24:15.510700 containerd[1504]: 2025-05-17 00:24:15.499 [INFO][4271] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dc50902ba7be5eadfbc70969cbb7076d80d327f57115b638c06941e26fa3a0ac" HandleID="k8s-pod-network.dc50902ba7be5eadfbc70969cbb7076d80d327f57115b638c06941e26fa3a0ac" Workload="ci--4081--3--3--n--82e895e080-k8s-goldmane--78d55f7ddc--9tht4-eth0" May 17 00:24:15.510700 containerd[1504]: 2025-05-17 00:24:15.501 [INFO][4271] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:24:15.510700 containerd[1504]: 2025-05-17 00:24:15.504 [INFO][4256] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="dc50902ba7be5eadfbc70969cbb7076d80d327f57115b638c06941e26fa3a0ac" May 17 00:24:15.512694 containerd[1504]: time="2025-05-17T00:24:15.511477391Z" level=info msg="TearDown network for sandbox \"dc50902ba7be5eadfbc70969cbb7076d80d327f57115b638c06941e26fa3a0ac\" successfully" May 17 00:24:15.512694 containerd[1504]: time="2025-05-17T00:24:15.511609076Z" level=info msg="StopPodSandbox for \"dc50902ba7be5eadfbc70969cbb7076d80d327f57115b638c06941e26fa3a0ac\" returns successfully" May 17 00:24:15.515595 systemd[1]: run-netns-cni\x2dd7ef1e2c\x2dbe37\x2d89a6\x2d4257\x2dd9c8eae0258d.mount: Deactivated successfully. May 17 00:24:15.518388 containerd[1504]: time="2025-05-17T00:24:15.518355879Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-9tht4,Uid:1bde9b24-cd69-4946-af9c-950fec8a6c4b,Namespace:calico-system,Attempt:1,}" May 17 00:24:15.522514 containerd[1504]: 2025-05-17 00:24:15.459 [INFO][4255] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e7e838287fc450f89fda5980aae76692390f23cf4a4607a91c3c87d773b3b812" May 17 00:24:15.522514 containerd[1504]: 2025-05-17 00:24:15.459 [INFO][4255] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e7e838287fc450f89fda5980aae76692390f23cf4a4607a91c3c87d773b3b812" iface="eth0" netns="/var/run/netns/cni-ce4e4321-f72c-d9cb-0f88-4c7928bbf291" May 17 00:24:15.522514 containerd[1504]: 2025-05-17 00:24:15.459 [INFO][4255] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e7e838287fc450f89fda5980aae76692390f23cf4a4607a91c3c87d773b3b812" iface="eth0" netns="/var/run/netns/cni-ce4e4321-f72c-d9cb-0f88-4c7928bbf291" May 17 00:24:15.522514 containerd[1504]: 2025-05-17 00:24:15.460 [INFO][4255] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e7e838287fc450f89fda5980aae76692390f23cf4a4607a91c3c87d773b3b812" iface="eth0" netns="/var/run/netns/cni-ce4e4321-f72c-d9cb-0f88-4c7928bbf291" May 17 00:24:15.522514 containerd[1504]: 2025-05-17 00:24:15.460 [INFO][4255] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e7e838287fc450f89fda5980aae76692390f23cf4a4607a91c3c87d773b3b812" May 17 00:24:15.522514 containerd[1504]: 2025-05-17 00:24:15.460 [INFO][4255] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e7e838287fc450f89fda5980aae76692390f23cf4a4607a91c3c87d773b3b812" May 17 00:24:15.522514 containerd[1504]: 2025-05-17 00:24:15.506 [INFO][4281] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e7e838287fc450f89fda5980aae76692390f23cf4a4607a91c3c87d773b3b812" HandleID="k8s-pod-network.e7e838287fc450f89fda5980aae76692390f23cf4a4607a91c3c87d773b3b812" Workload="ci--4081--3--3--n--82e895e080-k8s-calico--kube--controllers--8658f94dbd--xvh74-eth0" May 17 00:24:15.522514 containerd[1504]: 2025-05-17 00:24:15.506 [INFO][4281] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:24:15.522514 containerd[1504]: 2025-05-17 00:24:15.507 [INFO][4281] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:24:15.522514 containerd[1504]: 2025-05-17 00:24:15.515 [WARNING][4281] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e7e838287fc450f89fda5980aae76692390f23cf4a4607a91c3c87d773b3b812" HandleID="k8s-pod-network.e7e838287fc450f89fda5980aae76692390f23cf4a4607a91c3c87d773b3b812" Workload="ci--4081--3--3--n--82e895e080-k8s-calico--kube--controllers--8658f94dbd--xvh74-eth0" May 17 00:24:15.522514 containerd[1504]: 2025-05-17 00:24:15.515 [INFO][4281] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e7e838287fc450f89fda5980aae76692390f23cf4a4607a91c3c87d773b3b812" HandleID="k8s-pod-network.e7e838287fc450f89fda5980aae76692390f23cf4a4607a91c3c87d773b3b812" Workload="ci--4081--3--3--n--82e895e080-k8s-calico--kube--controllers--8658f94dbd--xvh74-eth0" May 17 00:24:15.522514 containerd[1504]: 2025-05-17 00:24:15.518 [INFO][4281] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:24:15.522514 containerd[1504]: 2025-05-17 00:24:15.520 [INFO][4255] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e7e838287fc450f89fda5980aae76692390f23cf4a4607a91c3c87d773b3b812" May 17 00:24:15.523235 containerd[1504]: time="2025-05-17T00:24:15.522609685Z" level=info msg="TearDown network for sandbox \"e7e838287fc450f89fda5980aae76692390f23cf4a4607a91c3c87d773b3b812\" successfully" May 17 00:24:15.523235 containerd[1504]: time="2025-05-17T00:24:15.522628651Z" level=info msg="StopPodSandbox for \"e7e838287fc450f89fda5980aae76692390f23cf4a4607a91c3c87d773b3b812\" returns successfully" May 17 00:24:15.526159 containerd[1504]: time="2025-05-17T00:24:15.523727558Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8658f94dbd-xvh74,Uid:428ec0d8-1aeb-46da-b77e-f1dfa05702b0,Namespace:calico-system,Attempt:1,}" May 17 00:24:15.525732 systemd[1]: run-netns-cni\x2dce4e4321\x2df72c\x2dd9cb\x2d0f88\x2d4c7928bbf291.mount: Deactivated successfully. 
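All three StopPodSandbox flows above follow the same CNI DEL shape: enter the netns (the veth is already gone), release the IP by handle ID, fall back to the workload ID, and downgrade "Asked to release address but it doesn't exist" to a warning — DEL must be idempotent, so a missing allocation is treated as already released rather than as an error that could wedge the pod. (The run-netns-cni\x2d... mount units are just systemd-escaped names for the /var/run/netns/cni-... bind mounts; "-" escapes to \x2d.) A sketch of that idempotent release, with a plain map standing in for the datastore:

// release_sketch.go — toy model of the ipam_plugin.go 412/429/440 sequence.
package main

import (
	"errors"
	"fmt"
)

var errNotFound = errors.New("allocation not found")

type store map[string]string // handle or workload ID -> IP (datastore stand-in)

func (s store) release(key string) error {
	if _, ok := s[key]; !ok {
		return errNotFound
	}
	delete(s, key)
	return nil
}

// releaseAll mirrors the log: handle first, then workload ID, never failing
// just because the allocation is already gone.
func releaseAll(s store, handleID, workloadID string) {
	if err := s.release(handleID); errors.Is(err, errNotFound) {
		fmt.Println("WARNING: asked to release address but it doesn't exist. Ignoring")
	}
	_ = s.release(workloadID) // same idempotent treatment
}

func main() {
	releaseAll(store{}, "k8s-pod-network.3e68f855...", "csi-node-driver-ht64s")
	fmt.Println("teardown processing complete")
}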
May 17 00:24:15.560616 systemd-networkd[1393]: vxlan.calico: Gained IPv6LL May 17 00:24:15.678541 systemd-networkd[1393]: cali1de8e1db570: Link UP May 17 00:24:15.679862 systemd-networkd[1393]: cali1de8e1db570: Gained carrier May 17 00:24:15.698472 containerd[1504]: 2025-05-17 00:24:15.615 [INFO][4310] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--n--82e895e080-k8s-calico--kube--controllers--8658f94dbd--xvh74-eth0 calico-kube-controllers-8658f94dbd- calico-system 428ec0d8-1aeb-46da-b77e-f1dfa05702b0 900 0 2025-05-17 00:23:53 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:8658f94dbd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-3-3-n-82e895e080 calico-kube-controllers-8658f94dbd-xvh74 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali1de8e1db570 [] [] }} ContainerID="dcacecad2cec7ac134a2d170ef4a79f8ce615eaeeb04c050333b33d41d06358b" Namespace="calico-system" Pod="calico-kube-controllers-8658f94dbd-xvh74" WorkloadEndpoint="ci--4081--3--3--n--82e895e080-k8s-calico--kube--controllers--8658f94dbd--xvh74-" May 17 00:24:15.698472 containerd[1504]: 2025-05-17 00:24:15.615 [INFO][4310] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="dcacecad2cec7ac134a2d170ef4a79f8ce615eaeeb04c050333b33d41d06358b" Namespace="calico-system" Pod="calico-kube-controllers-8658f94dbd-xvh74" WorkloadEndpoint="ci--4081--3--3--n--82e895e080-k8s-calico--kube--controllers--8658f94dbd--xvh74-eth0" May 17 00:24:15.698472 containerd[1504]: 2025-05-17 00:24:15.641 [INFO][4339] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dcacecad2cec7ac134a2d170ef4a79f8ce615eaeeb04c050333b33d41d06358b" HandleID="k8s-pod-network.dcacecad2cec7ac134a2d170ef4a79f8ce615eaeeb04c050333b33d41d06358b" Workload="ci--4081--3--3--n--82e895e080-k8s-calico--kube--controllers--8658f94dbd--xvh74-eth0" May 17 00:24:15.698472 containerd[1504]: 2025-05-17 00:24:15.642 [INFO][4339] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="dcacecad2cec7ac134a2d170ef4a79f8ce615eaeeb04c050333b33d41d06358b" HandleID="k8s-pod-network.dcacecad2cec7ac134a2d170ef4a79f8ce615eaeeb04c050333b33d41d06358b" Workload="ci--4081--3--3--n--82e895e080-k8s-calico--kube--controllers--8658f94dbd--xvh74-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d9020), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-3-n-82e895e080", "pod":"calico-kube-controllers-8658f94dbd-xvh74", "timestamp":"2025-05-17 00:24:15.641915876 +0000 UTC"}, Hostname:"ci-4081-3-3-n-82e895e080", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:24:15.698472 containerd[1504]: 2025-05-17 00:24:15.642 [INFO][4339] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:24:15.698472 containerd[1504]: 2025-05-17 00:24:15.642 [INFO][4339] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:24:15.698472 containerd[1504]: 2025-05-17 00:24:15.642 [INFO][4339] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-n-82e895e080' May 17 00:24:15.698472 containerd[1504]: 2025-05-17 00:24:15.649 [INFO][4339] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.dcacecad2cec7ac134a2d170ef4a79f8ce615eaeeb04c050333b33d41d06358b" host="ci-4081-3-3-n-82e895e080" May 17 00:24:15.698472 containerd[1504]: 2025-05-17 00:24:15.655 [INFO][4339] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-3-n-82e895e080" May 17 00:24:15.698472 containerd[1504]: 2025-05-17 00:24:15.659 [INFO][4339] ipam/ipam.go 511: Trying affinity for 192.168.0.0/26 host="ci-4081-3-3-n-82e895e080" May 17 00:24:15.698472 containerd[1504]: 2025-05-17 00:24:15.661 [INFO][4339] ipam/ipam.go 158: Attempting to load block cidr=192.168.0.0/26 host="ci-4081-3-3-n-82e895e080" May 17 00:24:15.698472 containerd[1504]: 2025-05-17 00:24:15.662 [INFO][4339] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.0.0/26 host="ci-4081-3-3-n-82e895e080" May 17 00:24:15.698472 containerd[1504]: 2025-05-17 00:24:15.662 [INFO][4339] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.0.0/26 handle="k8s-pod-network.dcacecad2cec7ac134a2d170ef4a79f8ce615eaeeb04c050333b33d41d06358b" host="ci-4081-3-3-n-82e895e080" May 17 00:24:15.698472 containerd[1504]: 2025-05-17 00:24:15.664 [INFO][4339] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.dcacecad2cec7ac134a2d170ef4a79f8ce615eaeeb04c050333b33d41d06358b May 17 00:24:15.698472 containerd[1504]: 2025-05-17 00:24:15.668 [INFO][4339] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.0.0/26 handle="k8s-pod-network.dcacecad2cec7ac134a2d170ef4a79f8ce615eaeeb04c050333b33d41d06358b" host="ci-4081-3-3-n-82e895e080" May 17 00:24:15.698472 containerd[1504]: 2025-05-17 00:24:15.673 [INFO][4339] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.0.2/26] block=192.168.0.0/26 handle="k8s-pod-network.dcacecad2cec7ac134a2d170ef4a79f8ce615eaeeb04c050333b33d41d06358b" host="ci-4081-3-3-n-82e895e080" May 17 00:24:15.698472 containerd[1504]: 2025-05-17 00:24:15.673 [INFO][4339] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.0.2/26] handle="k8s-pod-network.dcacecad2cec7ac134a2d170ef4a79f8ce615eaeeb04c050333b33d41d06358b" host="ci-4081-3-3-n-82e895e080" May 17 00:24:15.698472 containerd[1504]: 2025-05-17 00:24:15.673 [INFO][4339] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 17 00:24:15.698472 containerd[1504]: 2025-05-17 00:24:15.673 [INFO][4339] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.0.2/26] IPv6=[] ContainerID="dcacecad2cec7ac134a2d170ef4a79f8ce615eaeeb04c050333b33d41d06358b" HandleID="k8s-pod-network.dcacecad2cec7ac134a2d170ef4a79f8ce615eaeeb04c050333b33d41d06358b" Workload="ci--4081--3--3--n--82e895e080-k8s-calico--kube--controllers--8658f94dbd--xvh74-eth0" May 17 00:24:15.699919 containerd[1504]: 2025-05-17 00:24:15.675 [INFO][4310] cni-plugin/k8s.go 418: Populated endpoint ContainerID="dcacecad2cec7ac134a2d170ef4a79f8ce615eaeeb04c050333b33d41d06358b" Namespace="calico-system" Pod="calico-kube-controllers-8658f94dbd-xvh74" WorkloadEndpoint="ci--4081--3--3--n--82e895e080-k8s-calico--kube--controllers--8658f94dbd--xvh74-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--82e895e080-k8s-calico--kube--controllers--8658f94dbd--xvh74-eth0", GenerateName:"calico-kube-controllers-8658f94dbd-", Namespace:"calico-system", SelfLink:"", UID:"428ec0d8-1aeb-46da-b77e-f1dfa05702b0", ResourceVersion:"900", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 23, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"8658f94dbd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-82e895e080", ContainerID:"", Pod:"calico-kube-controllers-8658f94dbd-xvh74", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.0.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1de8e1db570", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:24:15.699919 containerd[1504]: 2025-05-17 00:24:15.675 [INFO][4310] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.0.2/32] ContainerID="dcacecad2cec7ac134a2d170ef4a79f8ce615eaeeb04c050333b33d41d06358b" Namespace="calico-system" Pod="calico-kube-controllers-8658f94dbd-xvh74" WorkloadEndpoint="ci--4081--3--3--n--82e895e080-k8s-calico--kube--controllers--8658f94dbd--xvh74-eth0" May 17 00:24:15.699919 containerd[1504]: 2025-05-17 00:24:15.675 [INFO][4310] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1de8e1db570 ContainerID="dcacecad2cec7ac134a2d170ef4a79f8ce615eaeeb04c050333b33d41d06358b" Namespace="calico-system" Pod="calico-kube-controllers-8658f94dbd-xvh74" WorkloadEndpoint="ci--4081--3--3--n--82e895e080-k8s-calico--kube--controllers--8658f94dbd--xvh74-eth0" May 17 00:24:15.699919 containerd[1504]: 2025-05-17 00:24:15.681 [INFO][4310] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="dcacecad2cec7ac134a2d170ef4a79f8ce615eaeeb04c050333b33d41d06358b" Namespace="calico-system" Pod="calico-kube-controllers-8658f94dbd-xvh74" 
WorkloadEndpoint="ci--4081--3--3--n--82e895e080-k8s-calico--kube--controllers--8658f94dbd--xvh74-eth0" May 17 00:24:15.699919 containerd[1504]: 2025-05-17 00:24:15.683 [INFO][4310] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="dcacecad2cec7ac134a2d170ef4a79f8ce615eaeeb04c050333b33d41d06358b" Namespace="calico-system" Pod="calico-kube-controllers-8658f94dbd-xvh74" WorkloadEndpoint="ci--4081--3--3--n--82e895e080-k8s-calico--kube--controllers--8658f94dbd--xvh74-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--82e895e080-k8s-calico--kube--controllers--8658f94dbd--xvh74-eth0", GenerateName:"calico-kube-controllers-8658f94dbd-", Namespace:"calico-system", SelfLink:"", UID:"428ec0d8-1aeb-46da-b77e-f1dfa05702b0", ResourceVersion:"900", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 23, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"8658f94dbd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-82e895e080", ContainerID:"dcacecad2cec7ac134a2d170ef4a79f8ce615eaeeb04c050333b33d41d06358b", Pod:"calico-kube-controllers-8658f94dbd-xvh74", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.0.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1de8e1db570", MAC:"e2:a4:39:42:d6:bc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:24:15.699919 containerd[1504]: 2025-05-17 00:24:15.693 [INFO][4310] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="dcacecad2cec7ac134a2d170ef4a79f8ce615eaeeb04c050333b33d41d06358b" Namespace="calico-system" Pod="calico-kube-controllers-8658f94dbd-xvh74" WorkloadEndpoint="ci--4081--3--3--n--82e895e080-k8s-calico--kube--controllers--8658f94dbd--xvh74-eth0" May 17 00:24:15.715439 containerd[1504]: time="2025-05-17T00:24:15.715154467Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:24:15.715439 containerd[1504]: time="2025-05-17T00:24:15.715238233Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:24:15.715439 containerd[1504]: time="2025-05-17T00:24:15.715274892Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:24:15.715439 containerd[1504]: time="2025-05-17T00:24:15.715357866Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:24:15.731374 systemd[1]: Started cri-containerd-dcacecad2cec7ac134a2d170ef4a79f8ce615eaeeb04c050333b33d41d06358b.scope - libcontainer container dcacecad2cec7ac134a2d170ef4a79f8ce615eaeeb04c050333b33d41d06358b. May 17 00:24:15.769248 containerd[1504]: time="2025-05-17T00:24:15.769178633Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8658f94dbd-xvh74,Uid:428ec0d8-1aeb-46da-b77e-f1dfa05702b0,Namespace:calico-system,Attempt:1,} returns sandbox id \"dcacecad2cec7ac134a2d170ef4a79f8ce615eaeeb04c050333b33d41d06358b\"" May 17 00:24:15.771467 containerd[1504]: time="2025-05-17T00:24:15.771404220Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\"" May 17 00:24:15.792035 systemd-networkd[1393]: cali597f6480dfa: Link UP May 17 00:24:15.793407 systemd-networkd[1393]: cali597f6480dfa: Gained carrier May 17 00:24:15.811817 containerd[1504]: 2025-05-17 00:24:15.589 [INFO][4292] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--n--82e895e080-k8s-csi--node--driver--ht64s-eth0 csi-node-driver- calico-system 79f08894-50f6-4f34-ab06-f713767f2567 899 0 2025-05-17 00:23:53 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:78f6f74485 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-3-3-n-82e895e080 csi-node-driver-ht64s eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali597f6480dfa [] [] }} ContainerID="966dbb7262a3c9aed2d9902b8ad215fb557e804c1bf2dd91e57d27d3b301a328" Namespace="calico-system" Pod="csi-node-driver-ht64s" WorkloadEndpoint="ci--4081--3--3--n--82e895e080-k8s-csi--node--driver--ht64s-" May 17 00:24:15.811817 containerd[1504]: 2025-05-17 00:24:15.589 [INFO][4292] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="966dbb7262a3c9aed2d9902b8ad215fb557e804c1bf2dd91e57d27d3b301a328" Namespace="calico-system" Pod="csi-node-driver-ht64s" WorkloadEndpoint="ci--4081--3--3--n--82e895e080-k8s-csi--node--driver--ht64s-eth0" May 17 00:24:15.811817 containerd[1504]: 2025-05-17 00:24:15.645 [INFO][4326] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="966dbb7262a3c9aed2d9902b8ad215fb557e804c1bf2dd91e57d27d3b301a328" HandleID="k8s-pod-network.966dbb7262a3c9aed2d9902b8ad215fb557e804c1bf2dd91e57d27d3b301a328" Workload="ci--4081--3--3--n--82e895e080-k8s-csi--node--driver--ht64s-eth0" May 17 00:24:15.811817 containerd[1504]: 2025-05-17 00:24:15.646 [INFO][4326] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="966dbb7262a3c9aed2d9902b8ad215fb557e804c1bf2dd91e57d27d3b301a328" HandleID="k8s-pod-network.966dbb7262a3c9aed2d9902b8ad215fb557e804c1bf2dd91e57d27d3b301a328" Workload="ci--4081--3--3--n--82e895e080-k8s-csi--node--driver--ht64s-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d9630), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-3-n-82e895e080", "pod":"csi-node-driver-ht64s", "timestamp":"2025-05-17 00:24:15.64593256 +0000 UTC"}, Hostname:"ci-4081-3-3-n-82e895e080", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 
17 00:24:15.811817 containerd[1504]: 2025-05-17 00:24:15.646 [INFO][4326] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:24:15.811817 containerd[1504]: 2025-05-17 00:24:15.673 [INFO][4326] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:24:15.811817 containerd[1504]: 2025-05-17 00:24:15.673 [INFO][4326] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-n-82e895e080' May 17 00:24:15.811817 containerd[1504]: 2025-05-17 00:24:15.750 [INFO][4326] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.966dbb7262a3c9aed2d9902b8ad215fb557e804c1bf2dd91e57d27d3b301a328" host="ci-4081-3-3-n-82e895e080" May 17 00:24:15.811817 containerd[1504]: 2025-05-17 00:24:15.756 [INFO][4326] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-3-n-82e895e080" May 17 00:24:15.811817 containerd[1504]: 2025-05-17 00:24:15.765 [INFO][4326] ipam/ipam.go 511: Trying affinity for 192.168.0.0/26 host="ci-4081-3-3-n-82e895e080" May 17 00:24:15.811817 containerd[1504]: 2025-05-17 00:24:15.767 [INFO][4326] ipam/ipam.go 158: Attempting to load block cidr=192.168.0.0/26 host="ci-4081-3-3-n-82e895e080" May 17 00:24:15.811817 containerd[1504]: 2025-05-17 00:24:15.771 [INFO][4326] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.0.0/26 host="ci-4081-3-3-n-82e895e080" May 17 00:24:15.811817 containerd[1504]: 2025-05-17 00:24:15.772 [INFO][4326] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.0.0/26 handle="k8s-pod-network.966dbb7262a3c9aed2d9902b8ad215fb557e804c1bf2dd91e57d27d3b301a328" host="ci-4081-3-3-n-82e895e080" May 17 00:24:15.811817 containerd[1504]: 2025-05-17 00:24:15.774 [INFO][4326] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.966dbb7262a3c9aed2d9902b8ad215fb557e804c1bf2dd91e57d27d3b301a328 May 17 00:24:15.811817 containerd[1504]: 2025-05-17 00:24:15.779 [INFO][4326] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.0.0/26 handle="k8s-pod-network.966dbb7262a3c9aed2d9902b8ad215fb557e804c1bf2dd91e57d27d3b301a328" host="ci-4081-3-3-n-82e895e080" May 17 00:24:15.811817 containerd[1504]: 2025-05-17 00:24:15.784 [INFO][4326] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.0.3/26] block=192.168.0.0/26 handle="k8s-pod-network.966dbb7262a3c9aed2d9902b8ad215fb557e804c1bf2dd91e57d27d3b301a328" host="ci-4081-3-3-n-82e895e080" May 17 00:24:15.811817 containerd[1504]: 2025-05-17 00:24:15.785 [INFO][4326] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.0.3/26] handle="k8s-pod-network.966dbb7262a3c9aed2d9902b8ad215fb557e804c1bf2dd91e57d27d3b301a328" host="ci-4081-3-3-n-82e895e080" May 17 00:24:15.811817 containerd[1504]: 2025-05-17 00:24:15.785 [INFO][4326] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 17 00:24:15.811817 containerd[1504]: 2025-05-17 00:24:15.785 [INFO][4326] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.0.3/26] IPv6=[] ContainerID="966dbb7262a3c9aed2d9902b8ad215fb557e804c1bf2dd91e57d27d3b301a328" HandleID="k8s-pod-network.966dbb7262a3c9aed2d9902b8ad215fb557e804c1bf2dd91e57d27d3b301a328" Workload="ci--4081--3--3--n--82e895e080-k8s-csi--node--driver--ht64s-eth0" May 17 00:24:15.812823 containerd[1504]: 2025-05-17 00:24:15.787 [INFO][4292] cni-plugin/k8s.go 418: Populated endpoint ContainerID="966dbb7262a3c9aed2d9902b8ad215fb557e804c1bf2dd91e57d27d3b301a328" Namespace="calico-system" Pod="csi-node-driver-ht64s" WorkloadEndpoint="ci--4081--3--3--n--82e895e080-k8s-csi--node--driver--ht64s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--82e895e080-k8s-csi--node--driver--ht64s-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"79f08894-50f6-4f34-ab06-f713767f2567", ResourceVersion:"899", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 23, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78f6f74485", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-82e895e080", ContainerID:"", Pod:"csi-node-driver-ht64s", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.0.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali597f6480dfa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:24:15.812823 containerd[1504]: 2025-05-17 00:24:15.787 [INFO][4292] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.0.3/32] ContainerID="966dbb7262a3c9aed2d9902b8ad215fb557e804c1bf2dd91e57d27d3b301a328" Namespace="calico-system" Pod="csi-node-driver-ht64s" WorkloadEndpoint="ci--4081--3--3--n--82e895e080-k8s-csi--node--driver--ht64s-eth0" May 17 00:24:15.812823 containerd[1504]: 2025-05-17 00:24:15.787 [INFO][4292] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali597f6480dfa ContainerID="966dbb7262a3c9aed2d9902b8ad215fb557e804c1bf2dd91e57d27d3b301a328" Namespace="calico-system" Pod="csi-node-driver-ht64s" WorkloadEndpoint="ci--4081--3--3--n--82e895e080-k8s-csi--node--driver--ht64s-eth0" May 17 00:24:15.812823 containerd[1504]: 2025-05-17 00:24:15.794 [INFO][4292] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="966dbb7262a3c9aed2d9902b8ad215fb557e804c1bf2dd91e57d27d3b301a328" Namespace="calico-system" Pod="csi-node-driver-ht64s" WorkloadEndpoint="ci--4081--3--3--n--82e895e080-k8s-csi--node--driver--ht64s-eth0" May 17 00:24:15.812823 containerd[1504]: 2025-05-17 00:24:15.795 [INFO][4292] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="966dbb7262a3c9aed2d9902b8ad215fb557e804c1bf2dd91e57d27d3b301a328" Namespace="calico-system" Pod="csi-node-driver-ht64s" WorkloadEndpoint="ci--4081--3--3--n--82e895e080-k8s-csi--node--driver--ht64s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--82e895e080-k8s-csi--node--driver--ht64s-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"79f08894-50f6-4f34-ab06-f713767f2567", ResourceVersion:"899", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 23, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78f6f74485", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-82e895e080", ContainerID:"966dbb7262a3c9aed2d9902b8ad215fb557e804c1bf2dd91e57d27d3b301a328", Pod:"csi-node-driver-ht64s", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.0.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali597f6480dfa", MAC:"c2:99:19:e6:12:30", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:24:15.812823 containerd[1504]: 2025-05-17 00:24:15.808 [INFO][4292] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="966dbb7262a3c9aed2d9902b8ad215fb557e804c1bf2dd91e57d27d3b301a328" Namespace="calico-system" Pod="csi-node-driver-ht64s" WorkloadEndpoint="ci--4081--3--3--n--82e895e080-k8s-csi--node--driver--ht64s-eth0" May 17 00:24:15.827769 containerd[1504]: time="2025-05-17T00:24:15.827668110Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:24:15.827769 containerd[1504]: time="2025-05-17T00:24:15.827732490Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:24:15.827769 containerd[1504]: time="2025-05-17T00:24:15.827745124Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:24:15.827964 containerd[1504]: time="2025-05-17T00:24:15.827815194Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:24:15.844389 systemd[1]: Started cri-containerd-966dbb7262a3c9aed2d9902b8ad215fb557e804c1bf2dd91e57d27d3b301a328.scope - libcontainer container 966dbb7262a3c9aed2d9902b8ad215fb557e804c1bf2dd91e57d27d3b301a328. 
May 17 00:24:15.867849 containerd[1504]: time="2025-05-17T00:24:15.867778135Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ht64s,Uid:79f08894-50f6-4f34-ab06-f713767f2567,Namespace:calico-system,Attempt:1,} returns sandbox id \"966dbb7262a3c9aed2d9902b8ad215fb557e804c1bf2dd91e57d27d3b301a328\"" May 17 00:24:15.890977 systemd-networkd[1393]: cali40d0116a55a: Link UP May 17 00:24:15.893373 systemd-networkd[1393]: cali40d0116a55a: Gained carrier May 17 00:24:15.917603 containerd[1504]: 2025-05-17 00:24:15.604 [INFO][4301] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--n--82e895e080-k8s-goldmane--78d55f7ddc--9tht4-eth0 goldmane-78d55f7ddc- calico-system 1bde9b24-cd69-4946-af9c-950fec8a6c4b 898 0 2025-05-17 00:23:52 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:78d55f7ddc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081-3-3-n-82e895e080 goldmane-78d55f7ddc-9tht4 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali40d0116a55a [] [] }} ContainerID="8590339d02798dcfe23985dfc160b421c473b04bc845ada125df701071f1d8d0" Namespace="calico-system" Pod="goldmane-78d55f7ddc-9tht4" WorkloadEndpoint="ci--4081--3--3--n--82e895e080-k8s-goldmane--78d55f7ddc--9tht4-" May 17 00:24:15.917603 containerd[1504]: 2025-05-17 00:24:15.605 [INFO][4301] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8590339d02798dcfe23985dfc160b421c473b04bc845ada125df701071f1d8d0" Namespace="calico-system" Pod="goldmane-78d55f7ddc-9tht4" WorkloadEndpoint="ci--4081--3--3--n--82e895e080-k8s-goldmane--78d55f7ddc--9tht4-eth0" May 17 00:24:15.917603 containerd[1504]: 2025-05-17 00:24:15.653 [INFO][4333] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8590339d02798dcfe23985dfc160b421c473b04bc845ada125df701071f1d8d0" HandleID="k8s-pod-network.8590339d02798dcfe23985dfc160b421c473b04bc845ada125df701071f1d8d0" Workload="ci--4081--3--3--n--82e895e080-k8s-goldmane--78d55f7ddc--9tht4-eth0" May 17 00:24:15.917603 containerd[1504]: 2025-05-17 00:24:15.653 [INFO][4333] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8590339d02798dcfe23985dfc160b421c473b04bc845ada125df701071f1d8d0" HandleID="k8s-pod-network.8590339d02798dcfe23985dfc160b421c473b04bc845ada125df701071f1d8d0" Workload="ci--4081--3--3--n--82e895e080-k8s-goldmane--78d55f7ddc--9tht4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00023b240), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-3-n-82e895e080", "pod":"goldmane-78d55f7ddc-9tht4", "timestamp":"2025-05-17 00:24:15.653075952 +0000 UTC"}, Hostname:"ci-4081-3-3-n-82e895e080", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:24:15.917603 containerd[1504]: 2025-05-17 00:24:15.653 [INFO][4333] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:24:15.917603 containerd[1504]: 2025-05-17 00:24:15.785 [INFO][4333] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:24:15.917603 containerd[1504]: 2025-05-17 00:24:15.785 [INFO][4333] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-n-82e895e080' May 17 00:24:15.917603 containerd[1504]: 2025-05-17 00:24:15.851 [INFO][4333] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8590339d02798dcfe23985dfc160b421c473b04bc845ada125df701071f1d8d0" host="ci-4081-3-3-n-82e895e080" May 17 00:24:15.917603 containerd[1504]: 2025-05-17 00:24:15.856 [INFO][4333] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-3-n-82e895e080" May 17 00:24:15.917603 containerd[1504]: 2025-05-17 00:24:15.863 [INFO][4333] ipam/ipam.go 511: Trying affinity for 192.168.0.0/26 host="ci-4081-3-3-n-82e895e080" May 17 00:24:15.917603 containerd[1504]: 2025-05-17 00:24:15.865 [INFO][4333] ipam/ipam.go 158: Attempting to load block cidr=192.168.0.0/26 host="ci-4081-3-3-n-82e895e080" May 17 00:24:15.917603 containerd[1504]: 2025-05-17 00:24:15.869 [INFO][4333] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.0.0/26 host="ci-4081-3-3-n-82e895e080" May 17 00:24:15.917603 containerd[1504]: 2025-05-17 00:24:15.869 [INFO][4333] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.0.0/26 handle="k8s-pod-network.8590339d02798dcfe23985dfc160b421c473b04bc845ada125df701071f1d8d0" host="ci-4081-3-3-n-82e895e080" May 17 00:24:15.917603 containerd[1504]: 2025-05-17 00:24:15.874 [INFO][4333] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.8590339d02798dcfe23985dfc160b421c473b04bc845ada125df701071f1d8d0 May 17 00:24:15.917603 containerd[1504]: 2025-05-17 00:24:15.879 [INFO][4333] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.0.0/26 handle="k8s-pod-network.8590339d02798dcfe23985dfc160b421c473b04bc845ada125df701071f1d8d0" host="ci-4081-3-3-n-82e895e080" May 17 00:24:15.917603 containerd[1504]: 2025-05-17 00:24:15.885 [INFO][4333] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.0.4/26] block=192.168.0.0/26 handle="k8s-pod-network.8590339d02798dcfe23985dfc160b421c473b04bc845ada125df701071f1d8d0" host="ci-4081-3-3-n-82e895e080" May 17 00:24:15.917603 containerd[1504]: 2025-05-17 00:24:15.885 [INFO][4333] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.0.4/26] handle="k8s-pod-network.8590339d02798dcfe23985dfc160b421c473b04bc845ada125df701071f1d8d0" host="ci-4081-3-3-n-82e895e080" May 17 00:24:15.917603 containerd[1504]: 2025-05-17 00:24:15.885 [INFO][4333] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
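The [4333] goroutine above walks Calico's standard IPAM path: acquire the host-wide lock, look up the node's affine block (192.168.0.0/26 here), load it, claim the lowest free address, and record a handle so the allocation can later be released by ID. A toy allocator under those assumptions follows; a /26 is just 64 slots plus an owning handle per slot, whereas the real, transactional version lives in ipam.go and compares-and-swaps the block against the datastore.

package main

import (
	"errors"
	"fmt"
	"net"
)

// block is a toy stand-in for a Calico /26 affinity block: 64 addresses,
// each slot either free ("") or owned by an allocation handle.
type block struct {
	base  net.IP     // network address of the block, e.g. 192.168.0.0
	slots [64]string // owning handle per address; "" means free
}

// assign claims the lowest free address for the given handle, mirroring
// "Attempting to assign 1 addresses from block" followed by
// "Successfully claimed IPs" in the log above.
func (b *block) assign(handle string) (net.IP, error) {
	for i := range b.slots {
		if b.slots[i] == "" {
			b.slots[i] = handle
			ip := make(net.IP, len(b.base))
			copy(ip, b.base)
			ip[len(ip)-1] += byte(i)
			return ip, nil
		}
	}
	return nil, errors.New("block exhausted; a real allocator would try another block")
}

func main() {
	b := &block{base: net.ParseIP("192.168.0.0").To4()}
	// Toy state: .0-.2 were taken earlier in the boot and .3 just went to
	// csi-node-driver-ht64s, so the next claim must yield .4.
	b.slots[0], b.slots[1], b.slots[2], b.slots[3] = "net", "gw", "pod", "csi"
	ip, _ := b.assign("k8s-pod-network.8590339d0279...") // goldmane's handle, truncated
	fmt.Println(ip) // 192.168.0.4, matching the claim for goldmane-78d55f7ddc-9tht4
}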
May 17 00:24:15.917603 containerd[1504]: 2025-05-17 00:24:15.885 [INFO][4333] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.0.4/26] IPv6=[] ContainerID="8590339d02798dcfe23985dfc160b421c473b04bc845ada125df701071f1d8d0" HandleID="k8s-pod-network.8590339d02798dcfe23985dfc160b421c473b04bc845ada125df701071f1d8d0" Workload="ci--4081--3--3--n--82e895e080-k8s-goldmane--78d55f7ddc--9tht4-eth0" May 17 00:24:15.920530 containerd[1504]: 2025-05-17 00:24:15.887 [INFO][4301] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8590339d02798dcfe23985dfc160b421c473b04bc845ada125df701071f1d8d0" Namespace="calico-system" Pod="goldmane-78d55f7ddc-9tht4" WorkloadEndpoint="ci--4081--3--3--n--82e895e080-k8s-goldmane--78d55f7ddc--9tht4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--82e895e080-k8s-goldmane--78d55f7ddc--9tht4-eth0", GenerateName:"goldmane-78d55f7ddc-", Namespace:"calico-system", SelfLink:"", UID:"1bde9b24-cd69-4946-af9c-950fec8a6c4b", ResourceVersion:"898", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 23, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"78d55f7ddc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-82e895e080", ContainerID:"", Pod:"goldmane-78d55f7ddc-9tht4", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.0.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali40d0116a55a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:24:15.920530 containerd[1504]: 2025-05-17 00:24:15.887 [INFO][4301] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.0.4/32] ContainerID="8590339d02798dcfe23985dfc160b421c473b04bc845ada125df701071f1d8d0" Namespace="calico-system" Pod="goldmane-78d55f7ddc-9tht4" WorkloadEndpoint="ci--4081--3--3--n--82e895e080-k8s-goldmane--78d55f7ddc--9tht4-eth0" May 17 00:24:15.920530 containerd[1504]: 2025-05-17 00:24:15.887 [INFO][4301] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali40d0116a55a ContainerID="8590339d02798dcfe23985dfc160b421c473b04bc845ada125df701071f1d8d0" Namespace="calico-system" Pod="goldmane-78d55f7ddc-9tht4" WorkloadEndpoint="ci--4081--3--3--n--82e895e080-k8s-goldmane--78d55f7ddc--9tht4-eth0" May 17 00:24:15.920530 containerd[1504]: 2025-05-17 00:24:15.896 [INFO][4301] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8590339d02798dcfe23985dfc160b421c473b04bc845ada125df701071f1d8d0" Namespace="calico-system" Pod="goldmane-78d55f7ddc-9tht4" WorkloadEndpoint="ci--4081--3--3--n--82e895e080-k8s-goldmane--78d55f7ddc--9tht4-eth0" May 17 00:24:15.920530 containerd[1504]: 2025-05-17 00:24:15.900 [INFO][4301] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8590339d02798dcfe23985dfc160b421c473b04bc845ada125df701071f1d8d0" 
Namespace="calico-system" Pod="goldmane-78d55f7ddc-9tht4" WorkloadEndpoint="ci--4081--3--3--n--82e895e080-k8s-goldmane--78d55f7ddc--9tht4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--82e895e080-k8s-goldmane--78d55f7ddc--9tht4-eth0", GenerateName:"goldmane-78d55f7ddc-", Namespace:"calico-system", SelfLink:"", UID:"1bde9b24-cd69-4946-af9c-950fec8a6c4b", ResourceVersion:"898", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 23, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"78d55f7ddc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-82e895e080", ContainerID:"8590339d02798dcfe23985dfc160b421c473b04bc845ada125df701071f1d8d0", Pod:"goldmane-78d55f7ddc-9tht4", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.0.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali40d0116a55a", MAC:"ee:53:70:ac:0e:74", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:24:15.920530 containerd[1504]: 2025-05-17 00:24:15.914 [INFO][4301] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8590339d02798dcfe23985dfc160b421c473b04bc845ada125df701071f1d8d0" Namespace="calico-system" Pod="goldmane-78d55f7ddc-9tht4" WorkloadEndpoint="ci--4081--3--3--n--82e895e080-k8s-goldmane--78d55f7ddc--9tht4-eth0" May 17 00:24:15.938173 containerd[1504]: time="2025-05-17T00:24:15.938007259Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:24:15.938299 containerd[1504]: time="2025-05-17T00:24:15.938088752Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:24:15.938299 containerd[1504]: time="2025-05-17T00:24:15.938119549Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:24:15.938299 containerd[1504]: time="2025-05-17T00:24:15.938187025Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:24:15.956384 systemd[1]: Started cri-containerd-8590339d02798dcfe23985dfc160b421c473b04bc845ada125df701071f1d8d0.scope - libcontainer container 8590339d02798dcfe23985dfc160b421c473b04bc845ada125df701071f1d8d0. 
May 17 00:24:15.990996 containerd[1504]: time="2025-05-17T00:24:15.990948387Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-9tht4,Uid:1bde9b24-cd69-4946-af9c-950fec8a6c4b,Namespace:calico-system,Attempt:1,} returns sandbox id \"8590339d02798dcfe23985dfc160b421c473b04bc845ada125df701071f1d8d0\"" May 17 00:24:16.967509 systemd-networkd[1393]: cali40d0116a55a: Gained IPv6LL May 17 00:24:17.160395 systemd-networkd[1393]: cali597f6480dfa: Gained IPv6LL May 17 00:24:17.161924 systemd-networkd[1393]: cali1de8e1db570: Gained IPv6LL May 17 00:24:17.376100 containerd[1504]: time="2025-05-17T00:24:17.375742137Z" level=info msg="StopPodSandbox for \"fa936c9511dea8f9a9bc6a8e691cdca43c7192f5314991837c033de699e371aa\"" May 17 00:24:17.377325 containerd[1504]: time="2025-05-17T00:24:17.376700613Z" level=info msg="StopPodSandbox for \"c3d3ab3e66e1b19d5d0a6670d5349331d2218d6c3775034b1f23ff75e9eed5d8\"" May 17 00:24:17.378928 containerd[1504]: time="2025-05-17T00:24:17.378894764Z" level=info msg="StopPodSandbox for \"bfebe88ae68f70d0c1629289739522a76e4ce066bf44937c5f8b06fe77478e49\"" May 17 00:24:17.501160 containerd[1504]: 2025-05-17 00:24:17.463 [INFO][4524] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c3d3ab3e66e1b19d5d0a6670d5349331d2218d6c3775034b1f23ff75e9eed5d8" May 17 00:24:17.501160 containerd[1504]: 2025-05-17 00:24:17.466 [INFO][4524] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c3d3ab3e66e1b19d5d0a6670d5349331d2218d6c3775034b1f23ff75e9eed5d8" iface="eth0" netns="/var/run/netns/cni-2bffc1e6-869a-4c03-28c8-544197c539c3" May 17 00:24:17.501160 containerd[1504]: 2025-05-17 00:24:17.466 [INFO][4524] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c3d3ab3e66e1b19d5d0a6670d5349331d2218d6c3775034b1f23ff75e9eed5d8" iface="eth0" netns="/var/run/netns/cni-2bffc1e6-869a-4c03-28c8-544197c539c3" May 17 00:24:17.501160 containerd[1504]: 2025-05-17 00:24:17.467 [INFO][4524] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c3d3ab3e66e1b19d5d0a6670d5349331d2218d6c3775034b1f23ff75e9eed5d8" iface="eth0" netns="/var/run/netns/cni-2bffc1e6-869a-4c03-28c8-544197c539c3" May 17 00:24:17.501160 containerd[1504]: 2025-05-17 00:24:17.467 [INFO][4524] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c3d3ab3e66e1b19d5d0a6670d5349331d2218d6c3775034b1f23ff75e9eed5d8" May 17 00:24:17.501160 containerd[1504]: 2025-05-17 00:24:17.468 [INFO][4524] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c3d3ab3e66e1b19d5d0a6670d5349331d2218d6c3775034b1f23ff75e9eed5d8" May 17 00:24:17.501160 containerd[1504]: 2025-05-17 00:24:17.487 [INFO][4553] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c3d3ab3e66e1b19d5d0a6670d5349331d2218d6c3775034b1f23ff75e9eed5d8" HandleID="k8s-pod-network.c3d3ab3e66e1b19d5d0a6670d5349331d2218d6c3775034b1f23ff75e9eed5d8" Workload="ci--4081--3--3--n--82e895e080-k8s-coredns--668d6bf9bc--2n42x-eth0" May 17 00:24:17.501160 containerd[1504]: 2025-05-17 00:24:17.487 [INFO][4553] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:24:17.501160 containerd[1504]: 2025-05-17 00:24:17.488 [INFO][4553] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:24:17.501160 containerd[1504]: 2025-05-17 00:24:17.495 [WARNING][4553] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c3d3ab3e66e1b19d5d0a6670d5349331d2218d6c3775034b1f23ff75e9eed5d8" HandleID="k8s-pod-network.c3d3ab3e66e1b19d5d0a6670d5349331d2218d6c3775034b1f23ff75e9eed5d8" Workload="ci--4081--3--3--n--82e895e080-k8s-coredns--668d6bf9bc--2n42x-eth0" May 17 00:24:17.501160 containerd[1504]: 2025-05-17 00:24:17.495 [INFO][4553] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c3d3ab3e66e1b19d5d0a6670d5349331d2218d6c3775034b1f23ff75e9eed5d8" HandleID="k8s-pod-network.c3d3ab3e66e1b19d5d0a6670d5349331d2218d6c3775034b1f23ff75e9eed5d8" Workload="ci--4081--3--3--n--82e895e080-k8s-coredns--668d6bf9bc--2n42x-eth0" May 17 00:24:17.501160 containerd[1504]: 2025-05-17 00:24:17.496 [INFO][4553] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:24:17.501160 containerd[1504]: 2025-05-17 00:24:17.499 [INFO][4524] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c3d3ab3e66e1b19d5d0a6670d5349331d2218d6c3775034b1f23ff75e9eed5d8" May 17 00:24:17.501787 containerd[1504]: time="2025-05-17T00:24:17.501701718Z" level=info msg="TearDown network for sandbox \"c3d3ab3e66e1b19d5d0a6670d5349331d2218d6c3775034b1f23ff75e9eed5d8\" successfully" May 17 00:24:17.501841 containerd[1504]: time="2025-05-17T00:24:17.501739700Z" level=info msg="StopPodSandbox for \"c3d3ab3e66e1b19d5d0a6670d5349331d2218d6c3775034b1f23ff75e9eed5d8\" returns successfully" May 17 00:24:17.504381 containerd[1504]: time="2025-05-17T00:24:17.504360204Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2n42x,Uid:30984a25-4953-4a27-9699-f4c7434a26ed,Namespace:kube-system,Attempt:1,}" May 17 00:24:17.506202 systemd[1]: run-netns-cni\x2d2bffc1e6\x2d869a\x2d4c03\x2d28c8\x2d544197c539c3.mount: Deactivated successfully. May 17 00:24:17.521261 containerd[1504]: 2025-05-17 00:24:17.445 [INFO][4521] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fa936c9511dea8f9a9bc6a8e691cdca43c7192f5314991837c033de699e371aa" May 17 00:24:17.521261 containerd[1504]: 2025-05-17 00:24:17.446 [INFO][4521] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="fa936c9511dea8f9a9bc6a8e691cdca43c7192f5314991837c033de699e371aa" iface="eth0" netns="/var/run/netns/cni-ffe3f709-860f-63d0-7286-da32d39cced9" May 17 00:24:17.521261 containerd[1504]: 2025-05-17 00:24:17.446 [INFO][4521] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="fa936c9511dea8f9a9bc6a8e691cdca43c7192f5314991837c033de699e371aa" iface="eth0" netns="/var/run/netns/cni-ffe3f709-860f-63d0-7286-da32d39cced9" May 17 00:24:17.521261 containerd[1504]: 2025-05-17 00:24:17.447 [INFO][4521] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="fa936c9511dea8f9a9bc6a8e691cdca43c7192f5314991837c033de699e371aa" iface="eth0" netns="/var/run/netns/cni-ffe3f709-860f-63d0-7286-da32d39cced9" May 17 00:24:17.521261 containerd[1504]: 2025-05-17 00:24:17.447 [INFO][4521] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fa936c9511dea8f9a9bc6a8e691cdca43c7192f5314991837c033de699e371aa" May 17 00:24:17.521261 containerd[1504]: 2025-05-17 00:24:17.447 [INFO][4521] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fa936c9511dea8f9a9bc6a8e691cdca43c7192f5314991837c033de699e371aa" May 17 00:24:17.521261 containerd[1504]: 2025-05-17 00:24:17.494 [INFO][4542] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fa936c9511dea8f9a9bc6a8e691cdca43c7192f5314991837c033de699e371aa" HandleID="k8s-pod-network.fa936c9511dea8f9a9bc6a8e691cdca43c7192f5314991837c033de699e371aa" Workload="ci--4081--3--3--n--82e895e080-k8s-calico--apiserver--555577f7d7--qfnvl-eth0" May 17 00:24:17.521261 containerd[1504]: 2025-05-17 00:24:17.494 [INFO][4542] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:24:17.521261 containerd[1504]: 2025-05-17 00:24:17.496 [INFO][4542] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:24:17.521261 containerd[1504]: 2025-05-17 00:24:17.508 [WARNING][4542] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="fa936c9511dea8f9a9bc6a8e691cdca43c7192f5314991837c033de699e371aa" HandleID="k8s-pod-network.fa936c9511dea8f9a9bc6a8e691cdca43c7192f5314991837c033de699e371aa" Workload="ci--4081--3--3--n--82e895e080-k8s-calico--apiserver--555577f7d7--qfnvl-eth0" May 17 00:24:17.521261 containerd[1504]: 2025-05-17 00:24:17.509 [INFO][4542] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fa936c9511dea8f9a9bc6a8e691cdca43c7192f5314991837c033de699e371aa" HandleID="k8s-pod-network.fa936c9511dea8f9a9bc6a8e691cdca43c7192f5314991837c033de699e371aa" Workload="ci--4081--3--3--n--82e895e080-k8s-calico--apiserver--555577f7d7--qfnvl-eth0" May 17 00:24:17.521261 containerd[1504]: 2025-05-17 00:24:17.511 [INFO][4542] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:24:17.521261 containerd[1504]: 2025-05-17 00:24:17.514 [INFO][4521] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fa936c9511dea8f9a9bc6a8e691cdca43c7192f5314991837c033de699e371aa" May 17 00:24:17.521261 containerd[1504]: time="2025-05-17T00:24:17.519567560Z" level=info msg="TearDown network for sandbox \"fa936c9511dea8f9a9bc6a8e691cdca43c7192f5314991837c033de699e371aa\" successfully" May 17 00:24:17.521261 containerd[1504]: time="2025-05-17T00:24:17.519589030Z" level=info msg="StopPodSandbox for \"fa936c9511dea8f9a9bc6a8e691cdca43c7192f5314991837c033de699e371aa\" returns successfully" May 17 00:24:17.522906 systemd[1]: run-netns-cni\x2dffe3f709\x2d860f\x2d63d0\x2d7286\x2dda32d39cced9.mount: Deactivated successfully. May 17 00:24:17.523539 containerd[1504]: time="2025-05-17T00:24:17.522742638Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-555577f7d7-qfnvl,Uid:114d0358-ddcf-4c04-bb03-89411102b031,Namespace:calico-apiserver,Attempt:1,}" May 17 00:24:17.532480 containerd[1504]: 2025-05-17 00:24:17.459 [INFO][4531] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bfebe88ae68f70d0c1629289739522a76e4ce066bf44937c5f8b06fe77478e49" May 17 00:24:17.532480 containerd[1504]: 2025-05-17 00:24:17.460 [INFO][4531] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="bfebe88ae68f70d0c1629289739522a76e4ce066bf44937c5f8b06fe77478e49" iface="eth0" netns="/var/run/netns/cni-f54259cb-99ec-69f9-67e4-cc664764a174" May 17 00:24:17.532480 containerd[1504]: 2025-05-17 00:24:17.460 [INFO][4531] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="bfebe88ae68f70d0c1629289739522a76e4ce066bf44937c5f8b06fe77478e49" iface="eth0" netns="/var/run/netns/cni-f54259cb-99ec-69f9-67e4-cc664764a174" May 17 00:24:17.532480 containerd[1504]: 2025-05-17 00:24:17.460 [INFO][4531] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="bfebe88ae68f70d0c1629289739522a76e4ce066bf44937c5f8b06fe77478e49" iface="eth0" netns="/var/run/netns/cni-f54259cb-99ec-69f9-67e4-cc664764a174" May 17 00:24:17.532480 containerd[1504]: 2025-05-17 00:24:17.460 [INFO][4531] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bfebe88ae68f70d0c1629289739522a76e4ce066bf44937c5f8b06fe77478e49" May 17 00:24:17.532480 containerd[1504]: 2025-05-17 00:24:17.460 [INFO][4531] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bfebe88ae68f70d0c1629289739522a76e4ce066bf44937c5f8b06fe77478e49" May 17 00:24:17.532480 containerd[1504]: 2025-05-17 00:24:17.514 [INFO][4548] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bfebe88ae68f70d0c1629289739522a76e4ce066bf44937c5f8b06fe77478e49" HandleID="k8s-pod-network.bfebe88ae68f70d0c1629289739522a76e4ce066bf44937c5f8b06fe77478e49" Workload="ci--4081--3--3--n--82e895e080-k8s-calico--apiserver--555577f7d7--4kjkb-eth0" May 17 00:24:17.532480 containerd[1504]: 2025-05-17 00:24:17.514 [INFO][4548] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:24:17.532480 containerd[1504]: 2025-05-17 00:24:17.517 [INFO][4548] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:24:17.532480 containerd[1504]: 2025-05-17 00:24:17.526 [WARNING][4548] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="bfebe88ae68f70d0c1629289739522a76e4ce066bf44937c5f8b06fe77478e49" HandleID="k8s-pod-network.bfebe88ae68f70d0c1629289739522a76e4ce066bf44937c5f8b06fe77478e49" Workload="ci--4081--3--3--n--82e895e080-k8s-calico--apiserver--555577f7d7--4kjkb-eth0" May 17 00:24:17.532480 containerd[1504]: 2025-05-17 00:24:17.526 [INFO][4548] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bfebe88ae68f70d0c1629289739522a76e4ce066bf44937c5f8b06fe77478e49" HandleID="k8s-pod-network.bfebe88ae68f70d0c1629289739522a76e4ce066bf44937c5f8b06fe77478e49" Workload="ci--4081--3--3--n--82e895e080-k8s-calico--apiserver--555577f7d7--4kjkb-eth0" May 17 00:24:17.532480 containerd[1504]: 2025-05-17 00:24:17.528 [INFO][4548] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:24:17.532480 containerd[1504]: 2025-05-17 00:24:17.530 [INFO][4531] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="bfebe88ae68f70d0c1629289739522a76e4ce066bf44937c5f8b06fe77478e49" May 17 00:24:17.533675 containerd[1504]: time="2025-05-17T00:24:17.533527923Z" level=info msg="TearDown network for sandbox \"bfebe88ae68f70d0c1629289739522a76e4ce066bf44937c5f8b06fe77478e49\" successfully" May 17 00:24:17.533675 containerd[1504]: time="2025-05-17T00:24:17.533549272Z" level=info msg="StopPodSandbox for \"bfebe88ae68f70d0c1629289739522a76e4ce066bf44937c5f8b06fe77478e49\" returns successfully" May 17 00:24:17.536905 systemd[1]: run-netns-cni\x2df54259cb\x2d99ec\x2d69f9\x2d67e4\x2dcc664764a174.mount: Deactivated successfully. May 17 00:24:17.539270 containerd[1504]: time="2025-05-17T00:24:17.539143159Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-555577f7d7-4kjkb,Uid:0e2dc52f-271c-43c5-9af2-6be78554f3c4,Namespace:calico-apiserver,Attempt:1,}" May 17 00:24:17.648051 systemd-networkd[1393]: calia87ed38347f: Link UP May 17 00:24:17.650274 systemd-networkd[1393]: calia87ed38347f: Gained carrier May 17 00:24:17.666089 containerd[1504]: 2025-05-17 00:24:17.564 [INFO][4563] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--n--82e895e080-k8s-coredns--668d6bf9bc--2n42x-eth0 coredns-668d6bf9bc- kube-system 30984a25-4953-4a27-9699-f4c7434a26ed 921 0 2025-05-17 00:23:41 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-3-n-82e895e080 coredns-668d6bf9bc-2n42x eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia87ed38347f [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="95524c3259d7a92f4c340f3adbc97955342dd2dfe7862d9b8c21271938104228" Namespace="kube-system" Pod="coredns-668d6bf9bc-2n42x" WorkloadEndpoint="ci--4081--3--3--n--82e895e080-k8s-coredns--668d6bf9bc--2n42x-" May 17 00:24:17.666089 containerd[1504]: 2025-05-17 00:24:17.565 [INFO][4563] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="95524c3259d7a92f4c340f3adbc97955342dd2dfe7862d9b8c21271938104228" Namespace="kube-system" Pod="coredns-668d6bf9bc-2n42x" WorkloadEndpoint="ci--4081--3--3--n--82e895e080-k8s-coredns--668d6bf9bc--2n42x-eth0" May 17 00:24:17.666089 containerd[1504]: 2025-05-17 00:24:17.603 [INFO][4589] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="95524c3259d7a92f4c340f3adbc97955342dd2dfe7862d9b8c21271938104228" HandleID="k8s-pod-network.95524c3259d7a92f4c340f3adbc97955342dd2dfe7862d9b8c21271938104228" Workload="ci--4081--3--3--n--82e895e080-k8s-coredns--668d6bf9bc--2n42x-eth0" May 17 00:24:17.666089 containerd[1504]: 2025-05-17 00:24:17.604 [INFO][4589] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="95524c3259d7a92f4c340f3adbc97955342dd2dfe7862d9b8c21271938104228" HandleID="k8s-pod-network.95524c3259d7a92f4c340f3adbc97955342dd2dfe7862d9b8c21271938104228" Workload="ci--4081--3--3--n--82e895e080-k8s-coredns--668d6bf9bc--2n42x-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d9240), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-3-n-82e895e080", "pod":"coredns-668d6bf9bc-2n42x", "timestamp":"2025-05-17 00:24:17.603395527 +0000 UTC"}, Hostname:"ci-4081-3-3-n-82e895e080", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:24:17.666089 containerd[1504]: 2025-05-17 00:24:17.604 [INFO][4589] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:24:17.666089 containerd[1504]: 2025-05-17 00:24:17.604 [INFO][4589] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:24:17.666089 containerd[1504]: 2025-05-17 00:24:17.604 [INFO][4589] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-n-82e895e080' May 17 00:24:17.666089 containerd[1504]: 2025-05-17 00:24:17.613 [INFO][4589] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.95524c3259d7a92f4c340f3adbc97955342dd2dfe7862d9b8c21271938104228" host="ci-4081-3-3-n-82e895e080" May 17 00:24:17.666089 containerd[1504]: 2025-05-17 00:24:17.619 [INFO][4589] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-3-n-82e895e080" May 17 00:24:17.666089 containerd[1504]: 2025-05-17 00:24:17.625 [INFO][4589] ipam/ipam.go 511: Trying affinity for 192.168.0.0/26 host="ci-4081-3-3-n-82e895e080" May 17 00:24:17.666089 containerd[1504]: 2025-05-17 00:24:17.628 [INFO][4589] ipam/ipam.go 158: Attempting to load block cidr=192.168.0.0/26 host="ci-4081-3-3-n-82e895e080" May 17 00:24:17.666089 containerd[1504]: 2025-05-17 00:24:17.630 [INFO][4589] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.0.0/26 host="ci-4081-3-3-n-82e895e080" May 17 00:24:17.666089 containerd[1504]: 2025-05-17 00:24:17.630 [INFO][4589] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.0.0/26 handle="k8s-pod-network.95524c3259d7a92f4c340f3adbc97955342dd2dfe7862d9b8c21271938104228" host="ci-4081-3-3-n-82e895e080" May 17 00:24:17.666089 containerd[1504]: 2025-05-17 00:24:17.632 [INFO][4589] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.95524c3259d7a92f4c340f3adbc97955342dd2dfe7862d9b8c21271938104228 May 17 00:24:17.666089 containerd[1504]: 2025-05-17 00:24:17.636 [INFO][4589] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.0.0/26 handle="k8s-pod-network.95524c3259d7a92f4c340f3adbc97955342dd2dfe7862d9b8c21271938104228" host="ci-4081-3-3-n-82e895e080" May 17 00:24:17.666089 containerd[1504]: 2025-05-17 00:24:17.641 [INFO][4589] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.0.5/26] block=192.168.0.0/26 handle="k8s-pod-network.95524c3259d7a92f4c340f3adbc97955342dd2dfe7862d9b8c21271938104228" host="ci-4081-3-3-n-82e895e080" May 17 00:24:17.666089 containerd[1504]: 2025-05-17 00:24:17.641 [INFO][4589] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.0.5/26] handle="k8s-pod-network.95524c3259d7a92f4c340f3adbc97955342dd2dfe7862d9b8c21271938104228" host="ci-4081-3-3-n-82e895e080" May 17 00:24:17.666089 containerd[1504]: 2025-05-17 00:24:17.641 [INFO][4589] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
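The WARNING entries above ("Asked to release address but it doesn't exist. Ignoring") are expected, not failures: on DEL, Calico releases the allocation by handle ID, falls back to the workload ID, and treats "not found" as something to ignore, which is what lets a repeated StopPodSandbox for the same sandbox still return successfully. A toy illustration of that idempotent release, with a plain map standing in for the datastore:

package main

import "fmt"

// allocations is a toy stand-in for the IPAM datastore, keyed the two ways
// an allocation can be found again: by handle ID or by workload ID.
var allocations = map[string]string{} // id -> address

// release frees an allocation if it exists and reports whether it did.
func release(id string) bool {
	if _, ok := allocations[id]; !ok {
		return false
	}
	delete(allocations, id)
	return true
}

// releaseForSandbox mimics the DEL path: try the handle ID first, fall back
// to the workload ID, and never fail just because nothing was there.
func releaseForSandbox(handleID, workloadID string) {
	if release(handleID) {
		return
	}
	fmt.Println("Asked to release address but it doesn't exist. Ignoring")
	release(workloadID)
}

func main() {
	// A DEL for an already-torn-down sandbox: both lookups miss, and that
	// is fine, so "StopPodSandbox ... returns successfully" all the same.
	releaseForSandbox(
		"k8s-pod-network.c3d3ab3e66e1...", // handle ID, truncated
		"ci--4081--3--3--n--82e895e080-k8s-coredns--668d6bf9bc--2n42x-eth0",
	)
}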
May 17 00:24:17.666089 containerd[1504]: 2025-05-17 00:24:17.641 [INFO][4589] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.0.5/26] IPv6=[] ContainerID="95524c3259d7a92f4c340f3adbc97955342dd2dfe7862d9b8c21271938104228" HandleID="k8s-pod-network.95524c3259d7a92f4c340f3adbc97955342dd2dfe7862d9b8c21271938104228" Workload="ci--4081--3--3--n--82e895e080-k8s-coredns--668d6bf9bc--2n42x-eth0" May 17 00:24:17.668218 containerd[1504]: 2025-05-17 00:24:17.644 [INFO][4563] cni-plugin/k8s.go 418: Populated endpoint ContainerID="95524c3259d7a92f4c340f3adbc97955342dd2dfe7862d9b8c21271938104228" Namespace="kube-system" Pod="coredns-668d6bf9bc-2n42x" WorkloadEndpoint="ci--4081--3--3--n--82e895e080-k8s-coredns--668d6bf9bc--2n42x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--82e895e080-k8s-coredns--668d6bf9bc--2n42x-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"30984a25-4953-4a27-9699-f4c7434a26ed", ResourceVersion:"921", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 23, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-82e895e080", ContainerID:"", Pod:"coredns-668d6bf9bc-2n42x", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.0.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia87ed38347f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:24:17.668218 containerd[1504]: 2025-05-17 00:24:17.644 [INFO][4563] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.0.5/32] ContainerID="95524c3259d7a92f4c340f3adbc97955342dd2dfe7862d9b8c21271938104228" Namespace="kube-system" Pod="coredns-668d6bf9bc-2n42x" WorkloadEndpoint="ci--4081--3--3--n--82e895e080-k8s-coredns--668d6bf9bc--2n42x-eth0" May 17 00:24:17.668218 containerd[1504]: 2025-05-17 00:24:17.644 [INFO][4563] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia87ed38347f ContainerID="95524c3259d7a92f4c340f3adbc97955342dd2dfe7862d9b8c21271938104228" Namespace="kube-system" Pod="coredns-668d6bf9bc-2n42x" WorkloadEndpoint="ci--4081--3--3--n--82e895e080-k8s-coredns--668d6bf9bc--2n42x-eth0" May 17 00:24:17.668218 containerd[1504]: 2025-05-17 00:24:17.651 [INFO][4563] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="95524c3259d7a92f4c340f3adbc97955342dd2dfe7862d9b8c21271938104228" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-2n42x" WorkloadEndpoint="ci--4081--3--3--n--82e895e080-k8s-coredns--668d6bf9bc--2n42x-eth0" May 17 00:24:17.668218 containerd[1504]: 2025-05-17 00:24:17.652 [INFO][4563] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="95524c3259d7a92f4c340f3adbc97955342dd2dfe7862d9b8c21271938104228" Namespace="kube-system" Pod="coredns-668d6bf9bc-2n42x" WorkloadEndpoint="ci--4081--3--3--n--82e895e080-k8s-coredns--668d6bf9bc--2n42x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--82e895e080-k8s-coredns--668d6bf9bc--2n42x-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"30984a25-4953-4a27-9699-f4c7434a26ed", ResourceVersion:"921", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 23, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-82e895e080", ContainerID:"95524c3259d7a92f4c340f3adbc97955342dd2dfe7862d9b8c21271938104228", Pod:"coredns-668d6bf9bc-2n42x", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.0.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia87ed38347f", MAC:"2e:95:6d:49:b3:0e", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:24:17.668218 containerd[1504]: 2025-05-17 00:24:17.663 [INFO][4563] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="95524c3259d7a92f4c340f3adbc97955342dd2dfe7862d9b8c21271938104228" Namespace="kube-system" Pod="coredns-668d6bf9bc-2n42x" WorkloadEndpoint="ci--4081--3--3--n--82e895e080-k8s-coredns--668d6bf9bc--2n42x-eth0" May 17 00:24:17.682602 containerd[1504]: time="2025-05-17T00:24:17.682425570Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:24:17.682602 containerd[1504]: time="2025-05-17T00:24:17.682462919Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:24:17.682602 containerd[1504]: time="2025-05-17T00:24:17.682481755Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:24:17.682602 containerd[1504]: time="2025-05-17T00:24:17.682544030Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:24:17.695366 systemd[1]: Started cri-containerd-95524c3259d7a92f4c340f3adbc97955342dd2dfe7862d9b8c21271938104228.scope - libcontainer container 95524c3259d7a92f4c340f3adbc97955342dd2dfe7862d9b8c21271938104228. May 17 00:24:17.735740 containerd[1504]: time="2025-05-17T00:24:17.735704473Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2n42x,Uid:30984a25-4953-4a27-9699-f4c7434a26ed,Namespace:kube-system,Attempt:1,} returns sandbox id \"95524c3259d7a92f4c340f3adbc97955342dd2dfe7862d9b8c21271938104228\"" May 17 00:24:17.741342 containerd[1504]: time="2025-05-17T00:24:17.741274205Z" level=info msg="CreateContainer within sandbox \"95524c3259d7a92f4c340f3adbc97955342dd2dfe7862d9b8c21271938104228\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 17 00:24:17.756156 systemd-networkd[1393]: caliadb4e9e3a02: Link UP May 17 00:24:17.758122 systemd-networkd[1393]: caliadb4e9e3a02: Gained carrier May 17 00:24:17.765017 containerd[1504]: time="2025-05-17T00:24:17.764902007Z" level=info msg="CreateContainer within sandbox \"95524c3259d7a92f4c340f3adbc97955342dd2dfe7862d9b8c21271938104228\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9cfc384969b4687622330db6b96a50fd6226640430e3061984f3adcce0bec84d\"" May 17 00:24:17.766571 containerd[1504]: time="2025-05-17T00:24:17.766552253Z" level=info msg="StartContainer for \"9cfc384969b4687622330db6b96a50fd6226640430e3061984f3adcce0bec84d\"" May 17 00:24:17.776349 containerd[1504]: 2025-05-17 00:24:17.590 [INFO][4572] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--n--82e895e080-k8s-calico--apiserver--555577f7d7--qfnvl-eth0 calico-apiserver-555577f7d7- calico-apiserver 114d0358-ddcf-4c04-bb03-89411102b031 919 0 2025-05-17 00:23:50 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:555577f7d7 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-3-n-82e895e080 calico-apiserver-555577f7d7-qfnvl eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliadb4e9e3a02 [] [] }} ContainerID="6b4ea0b1e2d83ab6f034997ccbd33adb5893d022eecc93186d29eb2984f371f3" Namespace="calico-apiserver" Pod="calico-apiserver-555577f7d7-qfnvl" WorkloadEndpoint="ci--4081--3--3--n--82e895e080-k8s-calico--apiserver--555577f7d7--qfnvl-" May 17 00:24:17.776349 containerd[1504]: 2025-05-17 00:24:17.590 [INFO][4572] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6b4ea0b1e2d83ab6f034997ccbd33adb5893d022eecc93186d29eb2984f371f3" Namespace="calico-apiserver" Pod="calico-apiserver-555577f7d7-qfnvl" WorkloadEndpoint="ci--4081--3--3--n--82e895e080-k8s-calico--apiserver--555577f7d7--qfnvl-eth0" May 17 00:24:17.776349 containerd[1504]: 2025-05-17 00:24:17.628 [INFO][4604] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6b4ea0b1e2d83ab6f034997ccbd33adb5893d022eecc93186d29eb2984f371f3" HandleID="k8s-pod-network.6b4ea0b1e2d83ab6f034997ccbd33adb5893d022eecc93186d29eb2984f371f3" Workload="ci--4081--3--3--n--82e895e080-k8s-calico--apiserver--555577f7d7--qfnvl-eth0" May 17 00:24:17.776349 containerd[1504]: 2025-05-17 00:24:17.628 [INFO][4604] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="6b4ea0b1e2d83ab6f034997ccbd33adb5893d022eecc93186d29eb2984f371f3" HandleID="k8s-pod-network.6b4ea0b1e2d83ab6f034997ccbd33adb5893d022eecc93186d29eb2984f371f3" Workload="ci--4081--3--3--n--82e895e080-k8s-calico--apiserver--555577f7d7--qfnvl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d9020), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-3-n-82e895e080", "pod":"calico-apiserver-555577f7d7-qfnvl", "timestamp":"2025-05-17 00:24:17.628770957 +0000 UTC"}, Hostname:"ci-4081-3-3-n-82e895e080", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:24:17.776349 containerd[1504]: 2025-05-17 00:24:17.629 [INFO][4604] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:24:17.776349 containerd[1504]: 2025-05-17 00:24:17.641 [INFO][4604] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:24:17.776349 containerd[1504]: 2025-05-17 00:24:17.642 [INFO][4604] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-n-82e895e080' May 17 00:24:17.776349 containerd[1504]: 2025-05-17 00:24:17.715 [INFO][4604] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6b4ea0b1e2d83ab6f034997ccbd33adb5893d022eecc93186d29eb2984f371f3" host="ci-4081-3-3-n-82e895e080" May 17 00:24:17.776349 containerd[1504]: 2025-05-17 00:24:17.719 [INFO][4604] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-3-n-82e895e080" May 17 00:24:17.776349 containerd[1504]: 2025-05-17 00:24:17.726 [INFO][4604] ipam/ipam.go 511: Trying affinity for 192.168.0.0/26 host="ci-4081-3-3-n-82e895e080" May 17 00:24:17.776349 containerd[1504]: 2025-05-17 00:24:17.728 [INFO][4604] ipam/ipam.go 158: Attempting to load block cidr=192.168.0.0/26 host="ci-4081-3-3-n-82e895e080" May 17 00:24:17.776349 containerd[1504]: 2025-05-17 00:24:17.731 [INFO][4604] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.0.0/26 host="ci-4081-3-3-n-82e895e080" May 17 00:24:17.776349 containerd[1504]: 2025-05-17 00:24:17.731 [INFO][4604] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.0.0/26 handle="k8s-pod-network.6b4ea0b1e2d83ab6f034997ccbd33adb5893d022eecc93186d29eb2984f371f3" host="ci-4081-3-3-n-82e895e080" May 17 00:24:17.776349 containerd[1504]: 2025-05-17 00:24:17.733 [INFO][4604] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6b4ea0b1e2d83ab6f034997ccbd33adb5893d022eecc93186d29eb2984f371f3 May 17 00:24:17.776349 containerd[1504]: 2025-05-17 00:24:17.737 [INFO][4604] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.0.0/26 handle="k8s-pod-network.6b4ea0b1e2d83ab6f034997ccbd33adb5893d022eecc93186d29eb2984f371f3" host="ci-4081-3-3-n-82e895e080" May 17 00:24:17.776349 containerd[1504]: 2025-05-17 00:24:17.744 [INFO][4604] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.0.6/26] block=192.168.0.0/26 handle="k8s-pod-network.6b4ea0b1e2d83ab6f034997ccbd33adb5893d022eecc93186d29eb2984f371f3" host="ci-4081-3-3-n-82e895e080" May 17 00:24:17.776349 containerd[1504]: 2025-05-17 00:24:17.744 [INFO][4604] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.0.6/26] handle="k8s-pod-network.6b4ea0b1e2d83ab6f034997ccbd33adb5893d022eecc93186d29eb2984f371f3" host="ci-4081-3-3-n-82e895e080" May 17 00:24:17.776349 containerd[1504]: 2025-05-17 00:24:17.744 [INFO][4604] 
ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:24:17.776349 containerd[1504]: 2025-05-17 00:24:17.744 [INFO][4604] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.0.6/26] IPv6=[] ContainerID="6b4ea0b1e2d83ab6f034997ccbd33adb5893d022eecc93186d29eb2984f371f3" HandleID="k8s-pod-network.6b4ea0b1e2d83ab6f034997ccbd33adb5893d022eecc93186d29eb2984f371f3" Workload="ci--4081--3--3--n--82e895e080-k8s-calico--apiserver--555577f7d7--qfnvl-eth0" May 17 00:24:17.776900 containerd[1504]: 2025-05-17 00:24:17.751 [INFO][4572] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6b4ea0b1e2d83ab6f034997ccbd33adb5893d022eecc93186d29eb2984f371f3" Namespace="calico-apiserver" Pod="calico-apiserver-555577f7d7-qfnvl" WorkloadEndpoint="ci--4081--3--3--n--82e895e080-k8s-calico--apiserver--555577f7d7--qfnvl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--82e895e080-k8s-calico--apiserver--555577f7d7--qfnvl-eth0", GenerateName:"calico-apiserver-555577f7d7-", Namespace:"calico-apiserver", SelfLink:"", UID:"114d0358-ddcf-4c04-bb03-89411102b031", ResourceVersion:"919", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 23, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"555577f7d7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-82e895e080", ContainerID:"", Pod:"calico-apiserver-555577f7d7-qfnvl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.0.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliadb4e9e3a02", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:24:17.776900 containerd[1504]: 2025-05-17 00:24:17.751 [INFO][4572] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.0.6/32] ContainerID="6b4ea0b1e2d83ab6f034997ccbd33adb5893d022eecc93186d29eb2984f371f3" Namespace="calico-apiserver" Pod="calico-apiserver-555577f7d7-qfnvl" WorkloadEndpoint="ci--4081--3--3--n--82e895e080-k8s-calico--apiserver--555577f7d7--qfnvl-eth0" May 17 00:24:17.776900 containerd[1504]: 2025-05-17 00:24:17.751 [INFO][4572] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliadb4e9e3a02 ContainerID="6b4ea0b1e2d83ab6f034997ccbd33adb5893d022eecc93186d29eb2984f371f3" Namespace="calico-apiserver" Pod="calico-apiserver-555577f7d7-qfnvl" WorkloadEndpoint="ci--4081--3--3--n--82e895e080-k8s-calico--apiserver--555577f7d7--qfnvl-eth0" May 17 00:24:17.776900 containerd[1504]: 2025-05-17 00:24:17.759 [INFO][4572] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6b4ea0b1e2d83ab6f034997ccbd33adb5893d022eecc93186d29eb2984f371f3" Namespace="calico-apiserver" Pod="calico-apiserver-555577f7d7-qfnvl" WorkloadEndpoint="ci--4081--3--3--n--82e895e080-k8s-calico--apiserver--555577f7d7--qfnvl-eth0" May 17 
00:24:17.776900 containerd[1504]: 2025-05-17 00:24:17.760 [INFO][4572] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6b4ea0b1e2d83ab6f034997ccbd33adb5893d022eecc93186d29eb2984f371f3" Namespace="calico-apiserver" Pod="calico-apiserver-555577f7d7-qfnvl" WorkloadEndpoint="ci--4081--3--3--n--82e895e080-k8s-calico--apiserver--555577f7d7--qfnvl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--82e895e080-k8s-calico--apiserver--555577f7d7--qfnvl-eth0", GenerateName:"calico-apiserver-555577f7d7-", Namespace:"calico-apiserver", SelfLink:"", UID:"114d0358-ddcf-4c04-bb03-89411102b031", ResourceVersion:"919", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 23, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"555577f7d7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-82e895e080", ContainerID:"6b4ea0b1e2d83ab6f034997ccbd33adb5893d022eecc93186d29eb2984f371f3", Pod:"calico-apiserver-555577f7d7-qfnvl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.0.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliadb4e9e3a02", MAC:"32:53:9e:b4:2f:a1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:24:17.776900 containerd[1504]: 2025-05-17 00:24:17.774 [INFO][4572] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6b4ea0b1e2d83ab6f034997ccbd33adb5893d022eecc93186d29eb2984f371f3" Namespace="calico-apiserver" Pod="calico-apiserver-555577f7d7-qfnvl" WorkloadEndpoint="ci--4081--3--3--n--82e895e080-k8s-calico--apiserver--555577f7d7--qfnvl-eth0" May 17 00:24:17.802421 systemd[1]: Started cri-containerd-9cfc384969b4687622330db6b96a50fd6226640430e3061984f3adcce0bec84d.scope - libcontainer container 9cfc384969b4687622330db6b96a50fd6226640430e3061984f3adcce0bec84d. May 17 00:24:17.821280 containerd[1504]: time="2025-05-17T00:24:17.820559103Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:24:17.821280 containerd[1504]: time="2025-05-17T00:24:17.820610650Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:24:17.821280 containerd[1504]: time="2025-05-17T00:24:17.820624124Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:24:17.821280 containerd[1504]: time="2025-05-17T00:24:17.820696419Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:24:17.848399 systemd[1]: Started cri-containerd-6b4ea0b1e2d83ab6f034997ccbd33adb5893d022eecc93186d29eb2984f371f3.scope - libcontainer container 6b4ea0b1e2d83ab6f034997ccbd33adb5893d022eecc93186d29eb2984f371f3. May 17 00:24:17.852218 containerd[1504]: time="2025-05-17T00:24:17.852159847Z" level=info msg="StartContainer for \"9cfc384969b4687622330db6b96a50fd6226640430e3061984f3adcce0bec84d\" returns successfully" May 17 00:24:17.863168 systemd-networkd[1393]: calia7a84f39ea0: Link UP May 17 00:24:17.863816 systemd-networkd[1393]: calia7a84f39ea0: Gained carrier May 17 00:24:17.882789 containerd[1504]: 2025-05-17 00:24:17.609 [INFO][4587] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--n--82e895e080-k8s-calico--apiserver--555577f7d7--4kjkb-eth0 calico-apiserver-555577f7d7- calico-apiserver 0e2dc52f-271c-43c5-9af2-6be78554f3c4 920 0 2025-05-17 00:23:50 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:555577f7d7 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-3-n-82e895e080 calico-apiserver-555577f7d7-4kjkb eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calia7a84f39ea0 [] [] }} ContainerID="01b0dfe63760acab702566e82d27b19af8ac628299fe4f267ac25893e170ac77" Namespace="calico-apiserver" Pod="calico-apiserver-555577f7d7-4kjkb" WorkloadEndpoint="ci--4081--3--3--n--82e895e080-k8s-calico--apiserver--555577f7d7--4kjkb-" May 17 00:24:17.882789 containerd[1504]: 2025-05-17 00:24:17.609 [INFO][4587] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="01b0dfe63760acab702566e82d27b19af8ac628299fe4f267ac25893e170ac77" Namespace="calico-apiserver" Pod="calico-apiserver-555577f7d7-4kjkb" WorkloadEndpoint="ci--4081--3--3--n--82e895e080-k8s-calico--apiserver--555577f7d7--4kjkb-eth0" May 17 00:24:17.882789 containerd[1504]: 2025-05-17 00:24:17.641 [INFO][4612] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="01b0dfe63760acab702566e82d27b19af8ac628299fe4f267ac25893e170ac77" HandleID="k8s-pod-network.01b0dfe63760acab702566e82d27b19af8ac628299fe4f267ac25893e170ac77" Workload="ci--4081--3--3--n--82e895e080-k8s-calico--apiserver--555577f7d7--4kjkb-eth0" May 17 00:24:17.882789 containerd[1504]: 2025-05-17 00:24:17.641 [INFO][4612] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="01b0dfe63760acab702566e82d27b19af8ac628299fe4f267ac25893e170ac77" HandleID="k8s-pod-network.01b0dfe63760acab702566e82d27b19af8ac628299fe4f267ac25893e170ac77" Workload="ci--4081--3--3--n--82e895e080-k8s-calico--apiserver--555577f7d7--4kjkb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d8eb0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-3-n-82e895e080", "pod":"calico-apiserver-555577f7d7-4kjkb", "timestamp":"2025-05-17 00:24:17.641069252 +0000 UTC"}, Hostname:"ci-4081-3-3-n-82e895e080", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:24:17.882789 containerd[1504]: 2025-05-17 00:24:17.641 [INFO][4612] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
May 17 00:24:17.882789 containerd[1504]: 2025-05-17 00:24:17.744 [INFO][4612] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:24:17.882789 containerd[1504]: 2025-05-17 00:24:17.744 [INFO][4612] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-n-82e895e080' May 17 00:24:17.882789 containerd[1504]: 2025-05-17 00:24:17.815 [INFO][4612] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.01b0dfe63760acab702566e82d27b19af8ac628299fe4f267ac25893e170ac77" host="ci-4081-3-3-n-82e895e080" May 17 00:24:17.882789 containerd[1504]: 2025-05-17 00:24:17.821 [INFO][4612] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-3-n-82e895e080" May 17 00:24:17.882789 containerd[1504]: 2025-05-17 00:24:17.827 [INFO][4612] ipam/ipam.go 511: Trying affinity for 192.168.0.0/26 host="ci-4081-3-3-n-82e895e080" May 17 00:24:17.882789 containerd[1504]: 2025-05-17 00:24:17.828 [INFO][4612] ipam/ipam.go 158: Attempting to load block cidr=192.168.0.0/26 host="ci-4081-3-3-n-82e895e080" May 17 00:24:17.882789 containerd[1504]: 2025-05-17 00:24:17.831 [INFO][4612] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.0.0/26 host="ci-4081-3-3-n-82e895e080" May 17 00:24:17.882789 containerd[1504]: 2025-05-17 00:24:17.831 [INFO][4612] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.0.0/26 handle="k8s-pod-network.01b0dfe63760acab702566e82d27b19af8ac628299fe4f267ac25893e170ac77" host="ci-4081-3-3-n-82e895e080" May 17 00:24:17.882789 containerd[1504]: 2025-05-17 00:24:17.833 [INFO][4612] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.01b0dfe63760acab702566e82d27b19af8ac628299fe4f267ac25893e170ac77 May 17 00:24:17.882789 containerd[1504]: 2025-05-17 00:24:17.841 [INFO][4612] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.0.0/26 handle="k8s-pod-network.01b0dfe63760acab702566e82d27b19af8ac628299fe4f267ac25893e170ac77" host="ci-4081-3-3-n-82e895e080" May 17 00:24:17.882789 containerd[1504]: 2025-05-17 00:24:17.852 [INFO][4612] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.0.7/26] block=192.168.0.0/26 handle="k8s-pod-network.01b0dfe63760acab702566e82d27b19af8ac628299fe4f267ac25893e170ac77" host="ci-4081-3-3-n-82e895e080" May 17 00:24:17.882789 containerd[1504]: 2025-05-17 00:24:17.852 [INFO][4612] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.0.7/26] handle="k8s-pod-network.01b0dfe63760acab702566e82d27b19af8ac628299fe4f267ac25893e170ac77" host="ci-4081-3-3-n-82e895e080" May 17 00:24:17.882789 containerd[1504]: 2025-05-17 00:24:17.852 [INFO][4612] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
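Note how goroutines [4589], [4604] and [4612] all log "About to acquire host-wide IPAM lock" before any of them proceeds: three CNI ADDs race, the per-host lock serializes them, and the block hands out 192.168.0.5, .6 and .7 in strict sequence. A minimal sketch of that serialization, with a plain mutex standing in for the datastore-backed lock and the pod names taken from the log purely as labels:

package main

import (
	"fmt"
	"sync"
)

// hostWideIPAMLock is a toy stand-in for the per-host lock the ipam_plugin
// goroutines log as "About to acquire/Acquired/Released host-wide IPAM lock".
var (
	hostWideIPAMLock sync.Mutex
	nextOctet        = 5 // .3 and .4 already went to csi-node-driver and goldmane
)

// allocate claims the next address while holding the lock, so concurrent
// callers can never observe or claim the same octet.
func allocate(pod string, out chan<- string) {
	hostWideIPAMLock.Lock() // may block: several CNI ADDs run concurrently
	defer hostWideIPAMLock.Unlock()
	ip := fmt.Sprintf("192.168.0.%d/26", nextOctet)
	nextOctet++
	out <- pod + " => " + ip
}

func main() {
	pods := []string{
		"coredns-668d6bf9bc-2n42x",
		"calico-apiserver-555577f7d7-qfnvl",
		"calico-apiserver-555577f7d7-4kjkb",
	}
	out := make(chan string, len(pods))
	var wg sync.WaitGroup
	for _, pod := range pods {
		wg.Add(1)
		go func(p string) { defer wg.Done(); allocate(p, out) }(pod)
	}
	wg.Wait()
	close(out)
	for line := range out {
		// Three distinct addresses, .5 through .7; which pod gets which
		// depends on lock acquisition order, exactly as on the real node.
		fmt.Println(line)
	}
}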
May 17 00:24:17.882789 containerd[1504]: 2025-05-17 00:24:17.852 [INFO][4612] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.0.7/26] IPv6=[] ContainerID="01b0dfe63760acab702566e82d27b19af8ac628299fe4f267ac25893e170ac77" HandleID="k8s-pod-network.01b0dfe63760acab702566e82d27b19af8ac628299fe4f267ac25893e170ac77" Workload="ci--4081--3--3--n--82e895e080-k8s-calico--apiserver--555577f7d7--4kjkb-eth0" May 17 00:24:17.884274 containerd[1504]: 2025-05-17 00:24:17.858 [INFO][4587] cni-plugin/k8s.go 418: Populated endpoint ContainerID="01b0dfe63760acab702566e82d27b19af8ac628299fe4f267ac25893e170ac77" Namespace="calico-apiserver" Pod="calico-apiserver-555577f7d7-4kjkb" WorkloadEndpoint="ci--4081--3--3--n--82e895e080-k8s-calico--apiserver--555577f7d7--4kjkb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--82e895e080-k8s-calico--apiserver--555577f7d7--4kjkb-eth0", GenerateName:"calico-apiserver-555577f7d7-", Namespace:"calico-apiserver", SelfLink:"", UID:"0e2dc52f-271c-43c5-9af2-6be78554f3c4", ResourceVersion:"920", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 23, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"555577f7d7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-82e895e080", ContainerID:"", Pod:"calico-apiserver-555577f7d7-4kjkb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.0.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia7a84f39ea0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:24:17.884274 containerd[1504]: 2025-05-17 00:24:17.858 [INFO][4587] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.0.7/32] ContainerID="01b0dfe63760acab702566e82d27b19af8ac628299fe4f267ac25893e170ac77" Namespace="calico-apiserver" Pod="calico-apiserver-555577f7d7-4kjkb" WorkloadEndpoint="ci--4081--3--3--n--82e895e080-k8s-calico--apiserver--555577f7d7--4kjkb-eth0" May 17 00:24:17.884274 containerd[1504]: 2025-05-17 00:24:17.858 [INFO][4587] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia7a84f39ea0 ContainerID="01b0dfe63760acab702566e82d27b19af8ac628299fe4f267ac25893e170ac77" Namespace="calico-apiserver" Pod="calico-apiserver-555577f7d7-4kjkb" WorkloadEndpoint="ci--4081--3--3--n--82e895e080-k8s-calico--apiserver--555577f7d7--4kjkb-eth0" May 17 00:24:17.884274 containerd[1504]: 2025-05-17 00:24:17.865 [INFO][4587] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="01b0dfe63760acab702566e82d27b19af8ac628299fe4f267ac25893e170ac77" Namespace="calico-apiserver" Pod="calico-apiserver-555577f7d7-4kjkb" WorkloadEndpoint="ci--4081--3--3--n--82e895e080-k8s-calico--apiserver--555577f7d7--4kjkb-eth0" May 17 00:24:17.884274 containerd[1504]: 2025-05-17 00:24:17.868 
[INFO][4587] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="01b0dfe63760acab702566e82d27b19af8ac628299fe4f267ac25893e170ac77" Namespace="calico-apiserver" Pod="calico-apiserver-555577f7d7-4kjkb" WorkloadEndpoint="ci--4081--3--3--n--82e895e080-k8s-calico--apiserver--555577f7d7--4kjkb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--82e895e080-k8s-calico--apiserver--555577f7d7--4kjkb-eth0", GenerateName:"calico-apiserver-555577f7d7-", Namespace:"calico-apiserver", SelfLink:"", UID:"0e2dc52f-271c-43c5-9af2-6be78554f3c4", ResourceVersion:"920", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 23, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"555577f7d7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-82e895e080", ContainerID:"01b0dfe63760acab702566e82d27b19af8ac628299fe4f267ac25893e170ac77", Pod:"calico-apiserver-555577f7d7-4kjkb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.0.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia7a84f39ea0", MAC:"9e:13:cd:66:c9:80", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:24:17.884274 containerd[1504]: 2025-05-17 00:24:17.879 [INFO][4587] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="01b0dfe63760acab702566e82d27b19af8ac628299fe4f267ac25893e170ac77" Namespace="calico-apiserver" Pod="calico-apiserver-555577f7d7-4kjkb" WorkloadEndpoint="ci--4081--3--3--n--82e895e080-k8s-calico--apiserver--555577f7d7--4kjkb-eth0" May 17 00:24:17.925773 containerd[1504]: time="2025-05-17T00:24:17.925602214Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-555577f7d7-qfnvl,Uid:114d0358-ddcf-4c04-bb03-89411102b031,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"6b4ea0b1e2d83ab6f034997ccbd33adb5893d022eecc93186d29eb2984f371f3\"" May 17 00:24:17.926837 containerd[1504]: time="2025-05-17T00:24:17.925211156Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:24:17.926837 containerd[1504]: time="2025-05-17T00:24:17.925713152Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:24:17.926837 containerd[1504]: time="2025-05-17T00:24:17.925724623Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:24:17.926837 containerd[1504]: time="2025-05-17T00:24:17.926689842Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:24:17.948692 systemd[1]: Started cri-containerd-01b0dfe63760acab702566e82d27b19af8ac628299fe4f267ac25893e170ac77.scope - libcontainer container 01b0dfe63760acab702566e82d27b19af8ac628299fe4f267ac25893e170ac77. May 17 00:24:18.005659 containerd[1504]: time="2025-05-17T00:24:18.005610823Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-555577f7d7-4kjkb,Uid:0e2dc52f-271c-43c5-9af2-6be78554f3c4,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"01b0dfe63760acab702566e82d27b19af8ac628299fe4f267ac25893e170ac77\"" May 17 00:24:18.379040 containerd[1504]: time="2025-05-17T00:24:18.378607340Z" level=info msg="StopPodSandbox for \"8b9b7e04ad58ffc6fb0e0ef161ea7cf912bb65a8b7e2d84210e35f65b2c46b0e\"" May 17 00:24:18.478039 containerd[1504]: 2025-05-17 00:24:18.447 [INFO][4821] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8b9b7e04ad58ffc6fb0e0ef161ea7cf912bb65a8b7e2d84210e35f65b2c46b0e" May 17 00:24:18.478039 containerd[1504]: 2025-05-17 00:24:18.447 [INFO][4821] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8b9b7e04ad58ffc6fb0e0ef161ea7cf912bb65a8b7e2d84210e35f65b2c46b0e" iface="eth0" netns="/var/run/netns/cni-5b6f6703-1a5e-b429-088b-f34f1cf5d5db" May 17 00:24:18.478039 containerd[1504]: 2025-05-17 00:24:18.448 [INFO][4821] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8b9b7e04ad58ffc6fb0e0ef161ea7cf912bb65a8b7e2d84210e35f65b2c46b0e" iface="eth0" netns="/var/run/netns/cni-5b6f6703-1a5e-b429-088b-f34f1cf5d5db" May 17 00:24:18.478039 containerd[1504]: 2025-05-17 00:24:18.450 [INFO][4821] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="8b9b7e04ad58ffc6fb0e0ef161ea7cf912bb65a8b7e2d84210e35f65b2c46b0e" iface="eth0" netns="/var/run/netns/cni-5b6f6703-1a5e-b429-088b-f34f1cf5d5db" May 17 00:24:18.478039 containerd[1504]: 2025-05-17 00:24:18.450 [INFO][4821] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8b9b7e04ad58ffc6fb0e0ef161ea7cf912bb65a8b7e2d84210e35f65b2c46b0e" May 17 00:24:18.478039 containerd[1504]: 2025-05-17 00:24:18.450 [INFO][4821] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8b9b7e04ad58ffc6fb0e0ef161ea7cf912bb65a8b7e2d84210e35f65b2c46b0e" May 17 00:24:18.478039 containerd[1504]: 2025-05-17 00:24:18.468 [INFO][4829] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8b9b7e04ad58ffc6fb0e0ef161ea7cf912bb65a8b7e2d84210e35f65b2c46b0e" HandleID="k8s-pod-network.8b9b7e04ad58ffc6fb0e0ef161ea7cf912bb65a8b7e2d84210e35f65b2c46b0e" Workload="ci--4081--3--3--n--82e895e080-k8s-coredns--668d6bf9bc--8bdt9-eth0" May 17 00:24:18.478039 containerd[1504]: 2025-05-17 00:24:18.468 [INFO][4829] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:24:18.478039 containerd[1504]: 2025-05-17 00:24:18.468 [INFO][4829] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:24:18.478039 containerd[1504]: 2025-05-17 00:24:18.473 [WARNING][4829] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8b9b7e04ad58ffc6fb0e0ef161ea7cf912bb65a8b7e2d84210e35f65b2c46b0e" HandleID="k8s-pod-network.8b9b7e04ad58ffc6fb0e0ef161ea7cf912bb65a8b7e2d84210e35f65b2c46b0e" Workload="ci--4081--3--3--n--82e895e080-k8s-coredns--668d6bf9bc--8bdt9-eth0" May 17 00:24:18.478039 containerd[1504]: 2025-05-17 00:24:18.473 [INFO][4829] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8b9b7e04ad58ffc6fb0e0ef161ea7cf912bb65a8b7e2d84210e35f65b2c46b0e" HandleID="k8s-pod-network.8b9b7e04ad58ffc6fb0e0ef161ea7cf912bb65a8b7e2d84210e35f65b2c46b0e" Workload="ci--4081--3--3--n--82e895e080-k8s-coredns--668d6bf9bc--8bdt9-eth0" May 17 00:24:18.478039 containerd[1504]: 2025-05-17 00:24:18.474 [INFO][4829] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:24:18.478039 containerd[1504]: 2025-05-17 00:24:18.476 [INFO][4821] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8b9b7e04ad58ffc6fb0e0ef161ea7cf912bb65a8b7e2d84210e35f65b2c46b0e" May 17 00:24:18.480054 containerd[1504]: time="2025-05-17T00:24:18.478153780Z" level=info msg="TearDown network for sandbox \"8b9b7e04ad58ffc6fb0e0ef161ea7cf912bb65a8b7e2d84210e35f65b2c46b0e\" successfully" May 17 00:24:18.480054 containerd[1504]: time="2025-05-17T00:24:18.478178455Z" level=info msg="StopPodSandbox for \"8b9b7e04ad58ffc6fb0e0ef161ea7cf912bb65a8b7e2d84210e35f65b2c46b0e\" returns successfully" May 17 00:24:18.480054 containerd[1504]: time="2025-05-17T00:24:18.479021818Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-8bdt9,Uid:141e29e6-7c60-4ef0-8843-86313045c72f,Namespace:kube-system,Attempt:1,}" May 17 00:24:18.512320 systemd[1]: run-netns-cni\x2d5b6f6703\x2d1a5e\x2db429\x2d088b\x2df34f1cf5d5db.mount: Deactivated successfully. May 17 00:24:18.596616 systemd-networkd[1393]: cali4e87e947f89: Link UP May 17 00:24:18.597411 systemd-networkd[1393]: cali4e87e947f89: Gained carrier May 17 00:24:18.616809 containerd[1504]: 2025-05-17 00:24:18.530 [INFO][4837] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--n--82e895e080-k8s-coredns--668d6bf9bc--8bdt9-eth0 coredns-668d6bf9bc- kube-system 141e29e6-7c60-4ef0-8843-86313045c72f 941 0 2025-05-17 00:23:41 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-3-n-82e895e080 coredns-668d6bf9bc-8bdt9 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali4e87e947f89 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="c82e6dc0b33104df2b7d80ce38e7b06ef89169d982caacc51d4928fa09bee745" Namespace="kube-system" Pod="coredns-668d6bf9bc-8bdt9" WorkloadEndpoint="ci--4081--3--3--n--82e895e080-k8s-coredns--668d6bf9bc--8bdt9-" May 17 00:24:18.616809 containerd[1504]: 2025-05-17 00:24:18.530 [INFO][4837] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c82e6dc0b33104df2b7d80ce38e7b06ef89169d982caacc51d4928fa09bee745" Namespace="kube-system" Pod="coredns-668d6bf9bc-8bdt9" WorkloadEndpoint="ci--4081--3--3--n--82e895e080-k8s-coredns--668d6bf9bc--8bdt9-eth0" May 17 00:24:18.616809 containerd[1504]: 2025-05-17 00:24:18.555 [INFO][4848] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c82e6dc0b33104df2b7d80ce38e7b06ef89169d982caacc51d4928fa09bee745" HandleID="k8s-pod-network.c82e6dc0b33104df2b7d80ce38e7b06ef89169d982caacc51d4928fa09bee745" 
Workload="ci--4081--3--3--n--82e895e080-k8s-coredns--668d6bf9bc--8bdt9-eth0" May 17 00:24:18.616809 containerd[1504]: 2025-05-17 00:24:18.555 [INFO][4848] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c82e6dc0b33104df2b7d80ce38e7b06ef89169d982caacc51d4928fa09bee745" HandleID="k8s-pod-network.c82e6dc0b33104df2b7d80ce38e7b06ef89169d982caacc51d4928fa09bee745" Workload="ci--4081--3--3--n--82e895e080-k8s-coredns--668d6bf9bc--8bdt9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d3240), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-3-n-82e895e080", "pod":"coredns-668d6bf9bc-8bdt9", "timestamp":"2025-05-17 00:24:18.555215982 +0000 UTC"}, Hostname:"ci-4081-3-3-n-82e895e080", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:24:18.616809 containerd[1504]: 2025-05-17 00:24:18.555 [INFO][4848] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:24:18.616809 containerd[1504]: 2025-05-17 00:24:18.555 [INFO][4848] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:24:18.616809 containerd[1504]: 2025-05-17 00:24:18.555 [INFO][4848] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-n-82e895e080' May 17 00:24:18.616809 containerd[1504]: 2025-05-17 00:24:18.563 [INFO][4848] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c82e6dc0b33104df2b7d80ce38e7b06ef89169d982caacc51d4928fa09bee745" host="ci-4081-3-3-n-82e895e080" May 17 00:24:18.616809 containerd[1504]: 2025-05-17 00:24:18.568 [INFO][4848] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-3-n-82e895e080" May 17 00:24:18.616809 containerd[1504]: 2025-05-17 00:24:18.574 [INFO][4848] ipam/ipam.go 511: Trying affinity for 192.168.0.0/26 host="ci-4081-3-3-n-82e895e080" May 17 00:24:18.616809 containerd[1504]: 2025-05-17 00:24:18.576 [INFO][4848] ipam/ipam.go 158: Attempting to load block cidr=192.168.0.0/26 host="ci-4081-3-3-n-82e895e080" May 17 00:24:18.616809 containerd[1504]: 2025-05-17 00:24:18.578 [INFO][4848] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.0.0/26 host="ci-4081-3-3-n-82e895e080" May 17 00:24:18.616809 containerd[1504]: 2025-05-17 00:24:18.578 [INFO][4848] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.0.0/26 handle="k8s-pod-network.c82e6dc0b33104df2b7d80ce38e7b06ef89169d982caacc51d4928fa09bee745" host="ci-4081-3-3-n-82e895e080" May 17 00:24:18.616809 containerd[1504]: 2025-05-17 00:24:18.579 [INFO][4848] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c82e6dc0b33104df2b7d80ce38e7b06ef89169d982caacc51d4928fa09bee745 May 17 00:24:18.616809 containerd[1504]: 2025-05-17 00:24:18.584 [INFO][4848] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.0.0/26 handle="k8s-pod-network.c82e6dc0b33104df2b7d80ce38e7b06ef89169d982caacc51d4928fa09bee745" host="ci-4081-3-3-n-82e895e080" May 17 00:24:18.616809 containerd[1504]: 2025-05-17 00:24:18.591 [INFO][4848] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.0.8/26] block=192.168.0.0/26 handle="k8s-pod-network.c82e6dc0b33104df2b7d80ce38e7b06ef89169d982caacc51d4928fa09bee745" host="ci-4081-3-3-n-82e895e080" May 17 00:24:18.616809 containerd[1504]: 2025-05-17 00:24:18.591 [INFO][4848] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.0.8/26] 
handle="k8s-pod-network.c82e6dc0b33104df2b7d80ce38e7b06ef89169d982caacc51d4928fa09bee745" host="ci-4081-3-3-n-82e895e080" May 17 00:24:18.616809 containerd[1504]: 2025-05-17 00:24:18.591 [INFO][4848] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:24:18.616809 containerd[1504]: 2025-05-17 00:24:18.591 [INFO][4848] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.0.8/26] IPv6=[] ContainerID="c82e6dc0b33104df2b7d80ce38e7b06ef89169d982caacc51d4928fa09bee745" HandleID="k8s-pod-network.c82e6dc0b33104df2b7d80ce38e7b06ef89169d982caacc51d4928fa09bee745" Workload="ci--4081--3--3--n--82e895e080-k8s-coredns--668d6bf9bc--8bdt9-eth0" May 17 00:24:18.618108 containerd[1504]: 2025-05-17 00:24:18.594 [INFO][4837] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c82e6dc0b33104df2b7d80ce38e7b06ef89169d982caacc51d4928fa09bee745" Namespace="kube-system" Pod="coredns-668d6bf9bc-8bdt9" WorkloadEndpoint="ci--4081--3--3--n--82e895e080-k8s-coredns--668d6bf9bc--8bdt9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--82e895e080-k8s-coredns--668d6bf9bc--8bdt9-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"141e29e6-7c60-4ef0-8843-86313045c72f", ResourceVersion:"941", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 23, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-82e895e080", ContainerID:"", Pod:"coredns-668d6bf9bc-8bdt9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.0.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4e87e947f89", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:24:18.618108 containerd[1504]: 2025-05-17 00:24:18.594 [INFO][4837] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.0.8/32] ContainerID="c82e6dc0b33104df2b7d80ce38e7b06ef89169d982caacc51d4928fa09bee745" Namespace="kube-system" Pod="coredns-668d6bf9bc-8bdt9" WorkloadEndpoint="ci--4081--3--3--n--82e895e080-k8s-coredns--668d6bf9bc--8bdt9-eth0" May 17 00:24:18.618108 containerd[1504]: 2025-05-17 00:24:18.594 [INFO][4837] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4e87e947f89 ContainerID="c82e6dc0b33104df2b7d80ce38e7b06ef89169d982caacc51d4928fa09bee745" Namespace="kube-system" Pod="coredns-668d6bf9bc-8bdt9" WorkloadEndpoint="ci--4081--3--3--n--82e895e080-k8s-coredns--668d6bf9bc--8bdt9-eth0" May 17 
00:24:18.618108 containerd[1504]: 2025-05-17 00:24:18.597 [INFO][4837] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c82e6dc0b33104df2b7d80ce38e7b06ef89169d982caacc51d4928fa09bee745" Namespace="kube-system" Pod="coredns-668d6bf9bc-8bdt9" WorkloadEndpoint="ci--4081--3--3--n--82e895e080-k8s-coredns--668d6bf9bc--8bdt9-eth0" May 17 00:24:18.618108 containerd[1504]: 2025-05-17 00:24:18.598 [INFO][4837] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c82e6dc0b33104df2b7d80ce38e7b06ef89169d982caacc51d4928fa09bee745" Namespace="kube-system" Pod="coredns-668d6bf9bc-8bdt9" WorkloadEndpoint="ci--4081--3--3--n--82e895e080-k8s-coredns--668d6bf9bc--8bdt9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--82e895e080-k8s-coredns--668d6bf9bc--8bdt9-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"141e29e6-7c60-4ef0-8843-86313045c72f", ResourceVersion:"941", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 23, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-82e895e080", ContainerID:"c82e6dc0b33104df2b7d80ce38e7b06ef89169d982caacc51d4928fa09bee745", Pod:"coredns-668d6bf9bc-8bdt9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.0.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4e87e947f89", MAC:"12:33:ee:ac:01:ea", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:24:18.618108 containerd[1504]: 2025-05-17 00:24:18.610 [INFO][4837] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c82e6dc0b33104df2b7d80ce38e7b06ef89169d982caacc51d4928fa09bee745" Namespace="kube-system" Pod="coredns-668d6bf9bc-8bdt9" WorkloadEndpoint="ci--4081--3--3--n--82e895e080-k8s-coredns--668d6bf9bc--8bdt9-eth0" May 17 00:24:18.645403 containerd[1504]: time="2025-05-17T00:24:18.643685372Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:24:18.646518 containerd[1504]: time="2025-05-17T00:24:18.646358585Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:24:18.646518 containerd[1504]: time="2025-05-17T00:24:18.646377791Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:24:18.646518 containerd[1504]: time="2025-05-17T00:24:18.646444115Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:24:18.667719 kubelet[2693]: I0517 00:24:18.667116 2693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-2n42x" podStartSLOduration=37.664611903 podStartE2EDuration="37.664611903s" podCreationTimestamp="2025-05-17 00:23:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:24:18.649733947 +0000 UTC m=+44.383285830" watchObservedRunningTime="2025-05-17 00:24:18.664611903 +0000 UTC m=+44.398163786" May 17 00:24:18.692665 systemd[1]: Started cri-containerd-c82e6dc0b33104df2b7d80ce38e7b06ef89169d982caacc51d4928fa09bee745.scope - libcontainer container c82e6dc0b33104df2b7d80ce38e7b06ef89169d982caacc51d4928fa09bee745. May 17 00:24:18.756854 containerd[1504]: time="2025-05-17T00:24:18.756782783Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-8bdt9,Uid:141e29e6-7c60-4ef0-8843-86313045c72f,Namespace:kube-system,Attempt:1,} returns sandbox id \"c82e6dc0b33104df2b7d80ce38e7b06ef89169d982caacc51d4928fa09bee745\"" May 17 00:24:18.760875 containerd[1504]: time="2025-05-17T00:24:18.760822514Z" level=info msg="CreateContainer within sandbox \"c82e6dc0b33104df2b7d80ce38e7b06ef89169d982caacc51d4928fa09bee745\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 17 00:24:18.774731 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1301957259.mount: Deactivated successfully. May 17 00:24:18.777549 containerd[1504]: time="2025-05-17T00:24:18.777211547Z" level=info msg="CreateContainer within sandbox \"c82e6dc0b33104df2b7d80ce38e7b06ef89169d982caacc51d4928fa09bee745\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7681e4f5d182d04373aa8d10f52c62b37b8711d0ada0881851e4f21da1316f35\"" May 17 00:24:18.780304 containerd[1504]: time="2025-05-17T00:24:18.779028044Z" level=info msg="StartContainer for \"7681e4f5d182d04373aa8d10f52c62b37b8711d0ada0881851e4f21da1316f35\"" May 17 00:24:18.810406 systemd[1]: Started cri-containerd-7681e4f5d182d04373aa8d10f52c62b37b8711d0ada0881851e4f21da1316f35.scope - libcontainer container 7681e4f5d182d04373aa8d10f52c62b37b8711d0ada0881851e4f21da1316f35. 
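
The kubelet pod_startup_latency_tracker record above reports podStartE2EDuration as observedRunningTime minus podCreationTimestamp; with both pulling timestamps at the zero value (no image pull was needed), the SLO duration equals the E2E figure. The arithmetic checks out directly, using the timestamps copied from the log:

    package main

    import (
    	"fmt"
    	"time"
    )

    // Reproduces the coredns-668d6bf9bc-2n42x startup figure from the log:
    // E2E duration = observedRunningTime - podCreationTimestamp.
    // (Go's Parse accepts fractional seconds even when the layout omits them.)
    func main() {
    	const layout = "2006-01-02 15:04:05 -0700 MST"
    	created, _ := time.Parse(layout, "2025-05-17 00:23:41 +0000 UTC")
    	running, _ := time.Parse(layout, "2025-05-17 00:24:18.664611903 +0000 UTC")
    	fmt.Println(running.Sub(created)) // 37.664611903s
    }

A related reading note: the WorkloadEndpointPort dumps a few records earlier print port numbers in hex, so Port:0x35 is 53 (the dns and dns-tcp ports) and Port:0x23c1 is 9153 (the metrics port).
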
May 17 00:24:18.831779 containerd[1504]: time="2025-05-17T00:24:18.831745763Z" level=info msg="StartContainer for \"7681e4f5d182d04373aa8d10f52c62b37b8711d0ada0881851e4f21da1316f35\" returns successfully" May 17 00:24:18.946079 kubelet[2693]: I0517 00:24:18.945507 2693 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 00:24:19.335828 systemd-networkd[1393]: calia7a84f39ea0: Gained IPv6LL May 17 00:24:19.400170 systemd-networkd[1393]: caliadb4e9e3a02: Gained IPv6LL May 17 00:24:19.656051 systemd-networkd[1393]: calia87ed38347f: Gained IPv6LL May 17 00:24:19.674167 kubelet[2693]: I0517 00:24:19.673774 2693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-8bdt9" podStartSLOduration=38.673750078 podStartE2EDuration="38.673750078s" podCreationTimestamp="2025-05-17 00:23:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:24:19.655528815 +0000 UTC m=+45.389080718" watchObservedRunningTime="2025-05-17 00:24:19.673750078 +0000 UTC m=+45.407301981" May 17 00:24:19.827758 containerd[1504]: time="2025-05-17T00:24:19.827699923Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:24:19.828600 containerd[1504]: time="2025-05-17T00:24:19.828568803Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.0: active requests=0, bytes read=51178512" May 17 00:24:19.829651 containerd[1504]: time="2025-05-17T00:24:19.829598493Z" level=info msg="ImageCreate event name:\"sha256:094053209304a3d20e6561c18d37ac2dc4c7fbb68c1579d9864c303edebffa50\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:24:19.831199 containerd[1504]: time="2025-05-17T00:24:19.831179721Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:eb5bc5c9e7a71f1d8ea69bbcc8e54b84fb7ec1e32d919c8b148f80b770f20182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:24:19.831739 containerd[1504]: time="2025-05-17T00:24:19.831609403Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" with image id \"sha256:094053209304a3d20e6561c18d37ac2dc4c7fbb68c1579d9864c303edebffa50\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:eb5bc5c9e7a71f1d8ea69bbcc8e54b84fb7ec1e32d919c8b148f80b770f20182\", size \"52671183\" in 4.060181578s" May 17 00:24:19.831739 containerd[1504]: time="2025-05-17T00:24:19.831635681Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" returns image reference \"sha256:094053209304a3d20e6561c18d37ac2dc4c7fbb68c1579d9864c303edebffa50\"" May 17 00:24:19.836285 containerd[1504]: time="2025-05-17T00:24:19.836256447Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.0\"" May 17 00:24:19.848601 containerd[1504]: time="2025-05-17T00:24:19.848076304Z" level=info msg="CreateContainer within sandbox \"dcacecad2cec7ac134a2d170ef4a79f8ce615eaeeb04c050333b33d41d06358b\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 17 00:24:19.866303 containerd[1504]: time="2025-05-17T00:24:19.866267221Z" level=info msg="CreateContainer within sandbox \"dcacecad2cec7ac134a2d170ef4a79f8ce615eaeeb04c050333b33d41d06358b\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id 
\"0a71b78f07749fbb624b6f0825e47d4fd2314cbc9826916740747daf62630ec9\"" May 17 00:24:19.867003 containerd[1504]: time="2025-05-17T00:24:19.866983035Z" level=info msg="StartContainer for \"0a71b78f07749fbb624b6f0825e47d4fd2314cbc9826916740747daf62630ec9\"" May 17 00:24:19.907354 systemd[1]: Started cri-containerd-0a71b78f07749fbb624b6f0825e47d4fd2314cbc9826916740747daf62630ec9.scope - libcontainer container 0a71b78f07749fbb624b6f0825e47d4fd2314cbc9826916740747daf62630ec9. May 17 00:24:19.943260 containerd[1504]: time="2025-05-17T00:24:19.943047295Z" level=info msg="StartContainer for \"0a71b78f07749fbb624b6f0825e47d4fd2314cbc9826916740747daf62630ec9\" returns successfully" May 17 00:24:20.615407 systemd-networkd[1393]: cali4e87e947f89: Gained IPv6LL May 17 00:24:20.671218 kubelet[2693]: I0517 00:24:20.670511 2693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-8658f94dbd-xvh74" podStartSLOduration=23.605129725 podStartE2EDuration="27.670491681s" podCreationTimestamp="2025-05-17 00:23:53 +0000 UTC" firstStartedPulling="2025-05-17 00:24:15.770787742 +0000 UTC m=+41.504339625" lastFinishedPulling="2025-05-17 00:24:19.836149697 +0000 UTC m=+45.569701581" observedRunningTime="2025-05-17 00:24:20.667760007 +0000 UTC m=+46.401311910" watchObservedRunningTime="2025-05-17 00:24:20.670491681 +0000 UTC m=+46.404043584" May 17 00:24:21.868263 containerd[1504]: time="2025-05-17T00:24:21.866725831Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:24:21.869313 containerd[1504]: time="2025-05-17T00:24:21.869280605Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.0: active requests=0, bytes read=8758390" May 17 00:24:21.870564 containerd[1504]: time="2025-05-17T00:24:21.870543059Z" level=info msg="ImageCreate event name:\"sha256:d5b08093b7928c0ac1122e59edf69b2e58c6d10ecc8b9e5cffeb809a956dc48e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:24:21.875290 containerd[1504]: time="2025-05-17T00:24:21.873863271Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:27883a4104876fe239311dd93ce6efd0c4a87de7163d57a4c8d96bd65a287ffd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:24:21.875290 containerd[1504]: time="2025-05-17T00:24:21.874447070Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.0\" with image id \"sha256:d5b08093b7928c0ac1122e59edf69b2e58c6d10ecc8b9e5cffeb809a956dc48e\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:27883a4104876fe239311dd93ce6efd0c4a87de7163d57a4c8d96bd65a287ffd\", size \"10251093\" in 2.038169584s" May 17 00:24:21.875290 containerd[1504]: time="2025-05-17T00:24:21.874469763Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.0\" returns image reference \"sha256:d5b08093b7928c0ac1122e59edf69b2e58c6d10ecc8b9e5cffeb809a956dc48e\"" May 17 00:24:21.876453 containerd[1504]: time="2025-05-17T00:24:21.876438013Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 17 00:24:21.880830 containerd[1504]: time="2025-05-17T00:24:21.880812631Z" level=info msg="CreateContainer within sandbox \"966dbb7262a3c9aed2d9902b8ad215fb557e804c1bf2dd91e57d27d3b301a328\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 17 00:24:21.898766 containerd[1504]: time="2025-05-17T00:24:21.898725807Z" level=info msg="CreateContainer within sandbox 
\"966dbb7262a3c9aed2d9902b8ad215fb557e804c1bf2dd91e57d27d3b301a328\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"359f6b69cecf92d562f43356ff2e2691da51e5d1595c5c31cce9bf0da32ab396\"" May 17 00:24:21.899475 containerd[1504]: time="2025-05-17T00:24:21.899458663Z" level=info msg="StartContainer for \"359f6b69cecf92d562f43356ff2e2691da51e5d1595c5c31cce9bf0da32ab396\"" May 17 00:24:21.942579 systemd[1]: Started cri-containerd-359f6b69cecf92d562f43356ff2e2691da51e5d1595c5c31cce9bf0da32ab396.scope - libcontainer container 359f6b69cecf92d562f43356ff2e2691da51e5d1595c5c31cce9bf0da32ab396. May 17 00:24:21.984180 containerd[1504]: time="2025-05-17T00:24:21.984040100Z" level=info msg="StartContainer for \"359f6b69cecf92d562f43356ff2e2691da51e5d1595c5c31cce9bf0da32ab396\" returns successfully" May 17 00:24:22.182141 containerd[1504]: time="2025-05-17T00:24:22.181999321Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:24:22.183475 containerd[1504]: time="2025-05-17T00:24:22.183435821Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:24:22.183678 containerd[1504]: time="2025-05-17T00:24:22.183512164Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 17 00:24:22.183716 kubelet[2693]: E0517 00:24:22.183672 2693 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:24:22.184067 kubelet[2693]: E0517 00:24:22.183723 2693 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:24:22.185265 containerd[1504]: time="2025-05-17T00:24:22.185027660Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\"" May 17 00:24:22.188737 kubelet[2693]: E0517 00:24:22.188655 2693 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hc5qt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-78d55f7ddc-9tht4_calico-system(1bde9b24-cd69-4946-af9c-950fec8a6c4b): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:24:22.190160 kubelet[2693]: E0517 00:24:22.190096 2693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-9tht4" podUID="1bde9b24-cd69-4946-af9c-950fec8a6c4b" May 17 00:24:22.670137 kubelet[2693]: E0517 00:24:22.670094 2693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-9tht4" podUID="1bde9b24-cd69-4946-af9c-950fec8a6c4b" May 17 00:24:25.724739 containerd[1504]: time="2025-05-17T00:24:25.724595849Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:24:25.727765 containerd[1504]: time="2025-05-17T00:24:25.727718876Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.0: active requests=0, bytes read=47252431" May 17 00:24:25.728662 containerd[1504]: time="2025-05-17T00:24:25.728632801Z" level=info msg="ImageCreate event name:\"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:24:25.730485 containerd[1504]: time="2025-05-17T00:24:25.730455912Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:24:25.733341 containerd[1504]: time="2025-05-17T00:24:25.733309637Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" with image id \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\", size \"48745150\" in 3.548249057s" May 17 00:24:25.733341 containerd[1504]: time="2025-05-17T00:24:25.733339904Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" returns image reference \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\"" May 17 00:24:25.750431 containerd[1504]: time="2025-05-17T00:24:25.749966463Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\"" May 17 00:24:25.753725 containerd[1504]: time="2025-05-17T00:24:25.753674131Z" level=info msg="CreateContainer within sandbox \"6b4ea0b1e2d83ab6f034997ccbd33adb5893d022eecc93186d29eb2984f371f3\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 17 00:24:25.767143 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3174485208.mount: Deactivated successfully. 
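
Every failed pull in this log dies at the same step: before touching the manifest, containerd requests an anonymous bearer token from ghcr.io, and the token endpoint answers 403 Forbidden. That request is a plain HTTPS GET and can be replayed outside the cluster; the URL below is copied verbatim from the log, and the status you get back depends on how ghcr.io is treating anonymous clients at the time:

    package main

    import (
    	"fmt"
    	"io"
    	"net/http"
    )

    // Replays the anonymous token request containerd logs before the
    // ErrImagePull: a GET against ghcr.io's token endpoint for pull scope
    // on flatcar/calico/goldmane. A 200 returns a JSON bearer token; the
    // node above was getting 403 Forbidden instead.
    func main() {
    	url := "https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io"
    	resp, err := http.Get(url)
    	if err != nil {
    		fmt.Println("request failed:", err)
    		return
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(io.LimitReader(resp.Body, 512))
    	fmt.Println(resp.Status)
    	fmt.Printf("%s\n", body)
    }
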
May 17 00:24:25.773451 containerd[1504]: time="2025-05-17T00:24:25.773402558Z" level=info msg="CreateContainer within sandbox \"6b4ea0b1e2d83ab6f034997ccbd33adb5893d022eecc93186d29eb2984f371f3\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"0d2fa56d52f26cb6b671c8726e474cfad0d57302c8b51617baf2b622a9a5e9cd\"" May 17 00:24:25.775295 containerd[1504]: time="2025-05-17T00:24:25.774351728Z" level=info msg="StartContainer for \"0d2fa56d52f26cb6b671c8726e474cfad0d57302c8b51617baf2b622a9a5e9cd\"" May 17 00:24:25.858528 systemd[1]: run-containerd-runc-k8s.io-0d2fa56d52f26cb6b671c8726e474cfad0d57302c8b51617baf2b622a9a5e9cd-runc.G90Z12.mount: Deactivated successfully. May 17 00:24:25.868901 systemd[1]: Started cri-containerd-0d2fa56d52f26cb6b671c8726e474cfad0d57302c8b51617baf2b622a9a5e9cd.scope - libcontainer container 0d2fa56d52f26cb6b671c8726e474cfad0d57302c8b51617baf2b622a9a5e9cd. May 17 00:24:25.936641 containerd[1504]: time="2025-05-17T00:24:25.936530523Z" level=info msg="StartContainer for \"0d2fa56d52f26cb6b671c8726e474cfad0d57302c8b51617baf2b622a9a5e9cd\" returns successfully" May 17 00:24:26.242677 containerd[1504]: time="2025-05-17T00:24:26.242586020Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:24:26.244393 containerd[1504]: time="2025-05-17T00:24:26.244275432Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.0: active requests=0, bytes read=77" May 17 00:24:26.248267 containerd[1504]: time="2025-05-17T00:24:26.248209382Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" with image id \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\", size \"48745150\" in 498.202554ms" May 17 00:24:26.248267 containerd[1504]: time="2025-05-17T00:24:26.248262271Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" returns image reference \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\"" May 17 00:24:26.250495 containerd[1504]: time="2025-05-17T00:24:26.250467836Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\"" May 17 00:24:26.255409 containerd[1504]: time="2025-05-17T00:24:26.255373188Z" level=info msg="CreateContainer within sandbox \"01b0dfe63760acab702566e82d27b19af8ac628299fe4f267ac25893e170ac77\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 17 00:24:26.269252 containerd[1504]: time="2025-05-17T00:24:26.269178457Z" level=info msg="CreateContainer within sandbox \"01b0dfe63760acab702566e82d27b19af8ac628299fe4f267ac25893e170ac77\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"800983f46c2f9fcd59be1f2f22da6927dd59c9375bfd4e25f52eee54d1b9df6a\"" May 17 00:24:26.271109 containerd[1504]: time="2025-05-17T00:24:26.270642058Z" level=info msg="StartContainer for \"800983f46c2f9fcd59be1f2f22da6927dd59c9375bfd4e25f52eee54d1b9df6a\"" May 17 00:24:26.310452 systemd[1]: Started cri-containerd-800983f46c2f9fcd59be1f2f22da6927dd59c9375bfd4e25f52eee54d1b9df6a.scope - libcontainer container 800983f46c2f9fcd59be1f2f22da6927dd59c9375bfd4e25f52eee54d1b9df6a. 
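
The \x2d runs in the mount-unit names here (run-netns-cni\x2d5b6f6703..., var-lib-containerd-tmpmounts-containerd\x2dmount...) are systemd path escaping: '/' in a path becomes '-' in the unit name, so literal hyphens inside path components are encoded as \x2d. A simplified decoder covering just the patterns seen in this log (systemd's full rules also escape leading dots and other special characters):

    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    )

    // unescapeUnit reverses systemd's path escaping for the mount-unit
    // names in the log: "\xNN" sequences become the byte they encode,
    // and the remaining "-" separators become "/".
    func unescapeUnit(name string) string {
    	var b strings.Builder
    	for i := 0; i < len(name); i++ {
    		switch {
    		case strings.HasPrefix(name[i:], `\x`) && i+4 <= len(name):
    			if n, err := strconv.ParseUint(name[i+2:i+4], 16, 8); err == nil {
    				b.WriteByte(byte(n))
    				i += 3 // skip the rest of the 4-char escape
    				continue
    			}
    			b.WriteByte(name[i])
    		case name[i] == '-':
    			b.WriteByte('/')
    		default:
    			b.WriteByte(name[i])
    		}
    	}
    	return b.String()
    }

    func main() {
    	unit := `run-netns-cni\x2d5b6f6703\x2d1a5e\x2db429\x2d088b\x2df34f1cf5d5db`
    	// The log shows this netns as /var/run/netns/...; on Flatcar,
    	// /var/run is a symlink to /run, so the paths are the same.
    	fmt.Println("/" + unescapeUnit(unit))
    	// /run/netns/cni-5b6f6703-1a5e-b429-088b-f34f1cf5d5db
    }
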
May 17 00:24:26.459587 containerd[1504]: time="2025-05-17T00:24:26.459527393Z" level=info msg="StartContainer for \"800983f46c2f9fcd59be1f2f22da6927dd59c9375bfd4e25f52eee54d1b9df6a\" returns successfully" May 17 00:24:26.716941 kubelet[2693]: I0517 00:24:26.715600 2693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-555577f7d7-qfnvl" podStartSLOduration=28.894440715 podStartE2EDuration="36.715577898s" podCreationTimestamp="2025-05-17 00:23:50 +0000 UTC" firstStartedPulling="2025-05-17 00:24:17.928592148 +0000 UTC m=+43.662144031" lastFinishedPulling="2025-05-17 00:24:25.749729321 +0000 UTC m=+51.483281214" observedRunningTime="2025-05-17 00:24:26.714232658 +0000 UTC m=+52.447784572" watchObservedRunningTime="2025-05-17 00:24:26.715577898 +0000 UTC m=+52.449129781" May 17 00:24:27.767863 kubelet[2693]: I0517 00:24:27.766979 2693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-555577f7d7-4kjkb" podStartSLOduration=29.525626496 podStartE2EDuration="37.766954187s" podCreationTimestamp="2025-05-17 00:23:50 +0000 UTC" firstStartedPulling="2025-05-17 00:24:18.007784626 +0000 UTC m=+43.741336509" lastFinishedPulling="2025-05-17 00:24:26.249112306 +0000 UTC m=+51.982664200" observedRunningTime="2025-05-17 00:24:26.734364311 +0000 UTC m=+52.467916214" watchObservedRunningTime="2025-05-17 00:24:27.766954187 +0000 UTC m=+53.500506071" May 17 00:24:28.548382 containerd[1504]: time="2025-05-17T00:24:28.546456982Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:24:28.548382 containerd[1504]: time="2025-05-17T00:24:28.547197324Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0: active requests=0, bytes read=14705639" May 17 00:24:28.548382 containerd[1504]: time="2025-05-17T00:24:28.548023597Z" level=info msg="ImageCreate event name:\"sha256:45c8692ffc029387ee93ba83da8ad26da9749cf2ba6ed03981f8f9933ed5a5b0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:24:28.550889 containerd[1504]: time="2025-05-17T00:24:28.550839401Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:dca5c16181edde2e860463615523ce457cd9dcfca85b7cfdcd6f3ea7de6f2ac8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:24:28.551223 containerd[1504]: time="2025-05-17T00:24:28.551194824Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" with image id \"sha256:45c8692ffc029387ee93ba83da8ad26da9749cf2ba6ed03981f8f9933ed5a5b0\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:dca5c16181edde2e860463615523ce457cd9dcfca85b7cfdcd6f3ea7de6f2ac8\", size \"16198294\" in 2.300701582s" May 17 00:24:28.551393 containerd[1504]: time="2025-05-17T00:24:28.551223278Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" returns image reference \"sha256:45c8692ffc029387ee93ba83da8ad26da9749cf2ba6ed03981f8f9933ed5a5b0\"" May 17 00:24:28.570638 containerd[1504]: time="2025-05-17T00:24:28.569952811Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 17 00:24:28.571777 containerd[1504]: time="2025-05-17T00:24:28.571750917Z" level=info msg="CreateContainer within sandbox 
\"966dbb7262a3c9aed2d9902b8ad215fb557e804c1bf2dd91e57d27d3b301a328\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" May 17 00:24:28.590519 containerd[1504]: time="2025-05-17T00:24:28.590474098Z" level=info msg="CreateContainer within sandbox \"966dbb7262a3c9aed2d9902b8ad215fb557e804c1bf2dd91e57d27d3b301a328\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"410b8a79da2d9af5e1c76e7d70b4afea93122222d620fc6aa2210ee1bcd19c2d\"" May 17 00:24:28.593018 containerd[1504]: time="2025-05-17T00:24:28.592565481Z" level=info msg="StartContainer for \"410b8a79da2d9af5e1c76e7d70b4afea93122222d620fc6aa2210ee1bcd19c2d\"" May 17 00:24:28.636471 systemd[1]: Started cri-containerd-410b8a79da2d9af5e1c76e7d70b4afea93122222d620fc6aa2210ee1bcd19c2d.scope - libcontainer container 410b8a79da2d9af5e1c76e7d70b4afea93122222d620fc6aa2210ee1bcd19c2d. May 17 00:24:28.671709 containerd[1504]: time="2025-05-17T00:24:28.671670187Z" level=info msg="StartContainer for \"410b8a79da2d9af5e1c76e7d70b4afea93122222d620fc6aa2210ee1bcd19c2d\" returns successfully" May 17 00:24:28.744446 kubelet[2693]: I0517 00:24:28.732380 2693 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 00:24:28.883012 containerd[1504]: time="2025-05-17T00:24:28.882738414Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:24:28.883997 containerd[1504]: time="2025-05-17T00:24:28.883844177Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 17 00:24:28.888717 containerd[1504]: time="2025-05-17T00:24:28.888682137Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:24:28.889800 kubelet[2693]: E0517 00:24:28.889752 2693 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:24:28.891178 kubelet[2693]: E0517 00:24:28.891149 2693 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:24:28.893610 kubelet[2693]: E0517 00:24:28.892777 2693 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:f857f946af1e45ef9789d38b050b55ff,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6htsc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-f599d797d-pw8hv_calico-system(067226bb-cfc6-4f82-99de-aac7391d466d): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:24:28.894960 containerd[1504]: time="2025-05-17T00:24:28.894725264Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 17 00:24:29.137237 kubelet[2693]: I0517 00:24:29.136462 2693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-ht64s" podStartSLOduration=23.43643726 podStartE2EDuration="36.136443122s" podCreationTimestamp="2025-05-17 00:23:53 +0000 UTC" firstStartedPulling="2025-05-17 00:24:15.86954047 +0000 UTC m=+41.603092353" lastFinishedPulling="2025-05-17 00:24:28.569546332 +0000 UTC m=+54.303098215" observedRunningTime="2025-05-17 00:24:28.759299632 +0000 UTC m=+54.492851515" watchObservedRunningTime="2025-05-17 00:24:29.136443122 +0000 UTC m=+54.869995006" May 17 00:24:29.194266 containerd[1504]: time="2025-05-17T00:24:29.193961462Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:24:29.197004 containerd[1504]: time="2025-05-17T00:24:29.196850745Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:24:29.197004 containerd[1504]: time="2025-05-17T00:24:29.196961692Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 17 00:24:29.197341 kubelet[2693]: E0517 00:24:29.197266 2693 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:24:29.197341 kubelet[2693]: E0517 00:24:29.197317 2693 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:24:29.198054 kubelet[2693]: E0517 00:24:29.198001 2693 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6htsc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-f599d797d-pw8hv_calico-system(067226bb-cfc6-4f82-99de-aac7391d466d): ErrImagePull: failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:24:29.204626 kubelet[2693]: E0517 00:24:29.204568 2693 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-f599d797d-pw8hv" podUID="067226bb-cfc6-4f82-99de-aac7391d466d" May 17 00:24:29.820423 kubelet[2693]: I0517 00:24:29.818046 2693 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 May 17 00:24:29.835008 kubelet[2693]: I0517 00:24:29.834907 2693 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock May 17 00:24:34.506414 containerd[1504]: time="2025-05-17T00:24:34.506117192Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 17 00:24:34.574572 containerd[1504]: time="2025-05-17T00:24:34.574519807Z" level=info msg="StopPodSandbox for \"e7e838287fc450f89fda5980aae76692390f23cf4a4607a91c3c87d773b3b812\"" May 17 00:24:34.814195 containerd[1504]: time="2025-05-17T00:24:34.814049147Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:24:34.821729 containerd[1504]: time="2025-05-17T00:24:34.820827834Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:24:34.821729 containerd[1504]: time="2025-05-17T00:24:34.820912762Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 17 00:24:34.901549 kubelet[2693]: E0517 00:24:34.863460 2693 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference 
\"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:24:34.906868 kubelet[2693]: E0517 00:24:34.906790 2693 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:24:34.931233 kubelet[2693]: E0517 00:24:34.931159 2693 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hc5qt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in 
pod goldmane-78d55f7ddc-9tht4_calico-system(1bde9b24-cd69-4946-af9c-950fec8a6c4b): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:24:34.943759 kubelet[2693]: E0517 00:24:34.938654 2693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-9tht4" podUID="1bde9b24-cd69-4946-af9c-950fec8a6c4b" May 17 00:24:35.100348 containerd[1504]: 2025-05-17 00:24:34.847 [WARNING][5274] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="e7e838287fc450f89fda5980aae76692390f23cf4a4607a91c3c87d773b3b812" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--82e895e080-k8s-calico--kube--controllers--8658f94dbd--xvh74-eth0", GenerateName:"calico-kube-controllers-8658f94dbd-", Namespace:"calico-system", SelfLink:"", UID:"428ec0d8-1aeb-46da-b77e-f1dfa05702b0", ResourceVersion:"976", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 23, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"8658f94dbd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-82e895e080", ContainerID:"dcacecad2cec7ac134a2d170ef4a79f8ce615eaeeb04c050333b33d41d06358b", Pod:"calico-kube-controllers-8658f94dbd-xvh74", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.0.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1de8e1db570", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:24:35.100348 containerd[1504]: 2025-05-17 00:24:34.850 [INFO][5274] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e7e838287fc450f89fda5980aae76692390f23cf4a4607a91c3c87d773b3b812" May 17 00:24:35.100348 containerd[1504]: 2025-05-17 00:24:34.850 [INFO][5274] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="e7e838287fc450f89fda5980aae76692390f23cf4a4607a91c3c87d773b3b812" iface="eth0" netns="" May 17 00:24:35.100348 containerd[1504]: 2025-05-17 00:24:34.850 [INFO][5274] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e7e838287fc450f89fda5980aae76692390f23cf4a4607a91c3c87d773b3b812" May 17 00:24:35.100348 containerd[1504]: 2025-05-17 00:24:34.850 [INFO][5274] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e7e838287fc450f89fda5980aae76692390f23cf4a4607a91c3c87d773b3b812" May 17 00:24:35.100348 containerd[1504]: 2025-05-17 00:24:35.067 [INFO][5283] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e7e838287fc450f89fda5980aae76692390f23cf4a4607a91c3c87d773b3b812" HandleID="k8s-pod-network.e7e838287fc450f89fda5980aae76692390f23cf4a4607a91c3c87d773b3b812" Workload="ci--4081--3--3--n--82e895e080-k8s-calico--kube--controllers--8658f94dbd--xvh74-eth0" May 17 00:24:35.100348 containerd[1504]: 2025-05-17 00:24:35.068 [INFO][5283] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:24:35.100348 containerd[1504]: 2025-05-17 00:24:35.068 [INFO][5283] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:24:35.100348 containerd[1504]: 2025-05-17 00:24:35.090 [WARNING][5283] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e7e838287fc450f89fda5980aae76692390f23cf4a4607a91c3c87d773b3b812" HandleID="k8s-pod-network.e7e838287fc450f89fda5980aae76692390f23cf4a4607a91c3c87d773b3b812" Workload="ci--4081--3--3--n--82e895e080-k8s-calico--kube--controllers--8658f94dbd--xvh74-eth0" May 17 00:24:35.100348 containerd[1504]: 2025-05-17 00:24:35.090 [INFO][5283] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e7e838287fc450f89fda5980aae76692390f23cf4a4607a91c3c87d773b3b812" HandleID="k8s-pod-network.e7e838287fc450f89fda5980aae76692390f23cf4a4607a91c3c87d773b3b812" Workload="ci--4081--3--3--n--82e895e080-k8s-calico--kube--controllers--8658f94dbd--xvh74-eth0" May 17 00:24:35.100348 containerd[1504]: 2025-05-17 00:24:35.092 [INFO][5283] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:24:35.100348 containerd[1504]: 2025-05-17 00:24:35.095 [INFO][5274] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e7e838287fc450f89fda5980aae76692390f23cf4a4607a91c3c87d773b3b812" May 17 00:24:35.108103 containerd[1504]: time="2025-05-17T00:24:35.100392918Z" level=info msg="TearDown network for sandbox \"e7e838287fc450f89fda5980aae76692390f23cf4a4607a91c3c87d773b3b812\" successfully" May 17 00:24:35.108103 containerd[1504]: time="2025-05-17T00:24:35.100424748Z" level=info msg="StopPodSandbox for \"e7e838287fc450f89fda5980aae76692390f23cf4a4607a91c3c87d773b3b812\" returns successfully" May 17 00:24:35.174598 containerd[1504]: time="2025-05-17T00:24:35.174543648Z" level=info msg="RemovePodSandbox for \"e7e838287fc450f89fda5980aae76692390f23cf4a4607a91c3c87d773b3b812\"" May 17 00:24:35.178643 containerd[1504]: time="2025-05-17T00:24:35.178603749Z" level=info msg="Forcibly stopping sandbox \"e7e838287fc450f89fda5980aae76692390f23cf4a4607a91c3c87d773b3b812\"" May 17 00:24:35.271719 containerd[1504]: 2025-05-17 00:24:35.214 [WARNING][5297] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e7e838287fc450f89fda5980aae76692390f23cf4a4607a91c3c87d773b3b812" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--82e895e080-k8s-calico--kube--controllers--8658f94dbd--xvh74-eth0", GenerateName:"calico-kube-controllers-8658f94dbd-", Namespace:"calico-system", SelfLink:"", UID:"428ec0d8-1aeb-46da-b77e-f1dfa05702b0", ResourceVersion:"976", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 23, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"8658f94dbd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-82e895e080", ContainerID:"dcacecad2cec7ac134a2d170ef4a79f8ce615eaeeb04c050333b33d41d06358b", Pod:"calico-kube-controllers-8658f94dbd-xvh74", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.0.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1de8e1db570", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:24:35.271719 containerd[1504]: 2025-05-17 00:24:35.214 [INFO][5297] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e7e838287fc450f89fda5980aae76692390f23cf4a4607a91c3c87d773b3b812" May 17 00:24:35.271719 containerd[1504]: 2025-05-17 00:24:35.214 [INFO][5297] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e7e838287fc450f89fda5980aae76692390f23cf4a4607a91c3c87d773b3b812" iface="eth0" netns="" May 17 00:24:35.271719 containerd[1504]: 2025-05-17 00:24:35.214 [INFO][5297] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e7e838287fc450f89fda5980aae76692390f23cf4a4607a91c3c87d773b3b812" May 17 00:24:35.271719 containerd[1504]: 2025-05-17 00:24:35.214 [INFO][5297] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e7e838287fc450f89fda5980aae76692390f23cf4a4607a91c3c87d773b3b812" May 17 00:24:35.271719 containerd[1504]: 2025-05-17 00:24:35.239 [INFO][5304] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e7e838287fc450f89fda5980aae76692390f23cf4a4607a91c3c87d773b3b812" HandleID="k8s-pod-network.e7e838287fc450f89fda5980aae76692390f23cf4a4607a91c3c87d773b3b812" Workload="ci--4081--3--3--n--82e895e080-k8s-calico--kube--controllers--8658f94dbd--xvh74-eth0" May 17 00:24:35.271719 containerd[1504]: 2025-05-17 00:24:35.239 [INFO][5304] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:24:35.271719 containerd[1504]: 2025-05-17 00:24:35.241 [INFO][5304] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:24:35.271719 containerd[1504]: 2025-05-17 00:24:35.256 [WARNING][5304] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e7e838287fc450f89fda5980aae76692390f23cf4a4607a91c3c87d773b3b812" HandleID="k8s-pod-network.e7e838287fc450f89fda5980aae76692390f23cf4a4607a91c3c87d773b3b812" Workload="ci--4081--3--3--n--82e895e080-k8s-calico--kube--controllers--8658f94dbd--xvh74-eth0" May 17 00:24:35.271719 containerd[1504]: 2025-05-17 00:24:35.256 [INFO][5304] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e7e838287fc450f89fda5980aae76692390f23cf4a4607a91c3c87d773b3b812" HandleID="k8s-pod-network.e7e838287fc450f89fda5980aae76692390f23cf4a4607a91c3c87d773b3b812" Workload="ci--4081--3--3--n--82e895e080-k8s-calico--kube--controllers--8658f94dbd--xvh74-eth0" May 17 00:24:35.271719 containerd[1504]: 2025-05-17 00:24:35.258 [INFO][5304] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:24:35.271719 containerd[1504]: 2025-05-17 00:24:35.265 [INFO][5297] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e7e838287fc450f89fda5980aae76692390f23cf4a4607a91c3c87d773b3b812" May 17 00:24:35.273113 containerd[1504]: time="2025-05-17T00:24:35.271791964Z" level=info msg="TearDown network for sandbox \"e7e838287fc450f89fda5980aae76692390f23cf4a4607a91c3c87d773b3b812\" successfully" May 17 00:24:35.411780 containerd[1504]: time="2025-05-17T00:24:35.411659252Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e7e838287fc450f89fda5980aae76692390f23cf4a4607a91c3c87d773b3b812\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:24:35.418899 containerd[1504]: time="2025-05-17T00:24:35.418726579Z" level=info msg="RemovePodSandbox \"e7e838287fc450f89fda5980aae76692390f23cf4a4607a91c3c87d773b3b812\" returns successfully" May 17 00:24:35.425776 containerd[1504]: time="2025-05-17T00:24:35.425752598Z" level=info msg="StopPodSandbox for \"c3d3ab3e66e1b19d5d0a6670d5349331d2218d6c3775034b1f23ff75e9eed5d8\"" May 17 00:24:35.509170 containerd[1504]: 2025-05-17 00:24:35.474 [WARNING][5318] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c3d3ab3e66e1b19d5d0a6670d5349331d2218d6c3775034b1f23ff75e9eed5d8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--82e895e080-k8s-coredns--668d6bf9bc--2n42x-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"30984a25-4953-4a27-9699-f4c7434a26ed", ResourceVersion:"949", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 23, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-82e895e080", ContainerID:"95524c3259d7a92f4c340f3adbc97955342dd2dfe7862d9b8c21271938104228", Pod:"coredns-668d6bf9bc-2n42x", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.0.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia87ed38347f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:24:35.509170 containerd[1504]: 2025-05-17 00:24:35.475 [INFO][5318] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c3d3ab3e66e1b19d5d0a6670d5349331d2218d6c3775034b1f23ff75e9eed5d8" May 17 00:24:35.509170 containerd[1504]: 2025-05-17 00:24:35.475 [INFO][5318] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c3d3ab3e66e1b19d5d0a6670d5349331d2218d6c3775034b1f23ff75e9eed5d8" iface="eth0" netns="" May 17 00:24:35.509170 containerd[1504]: 2025-05-17 00:24:35.475 [INFO][5318] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c3d3ab3e66e1b19d5d0a6670d5349331d2218d6c3775034b1f23ff75e9eed5d8" May 17 00:24:35.509170 containerd[1504]: 2025-05-17 00:24:35.475 [INFO][5318] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c3d3ab3e66e1b19d5d0a6670d5349331d2218d6c3775034b1f23ff75e9eed5d8" May 17 00:24:35.509170 containerd[1504]: 2025-05-17 00:24:35.497 [INFO][5325] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c3d3ab3e66e1b19d5d0a6670d5349331d2218d6c3775034b1f23ff75e9eed5d8" HandleID="k8s-pod-network.c3d3ab3e66e1b19d5d0a6670d5349331d2218d6c3775034b1f23ff75e9eed5d8" Workload="ci--4081--3--3--n--82e895e080-k8s-coredns--668d6bf9bc--2n42x-eth0" May 17 00:24:35.509170 containerd[1504]: 2025-05-17 00:24:35.497 [INFO][5325] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:24:35.509170 containerd[1504]: 2025-05-17 00:24:35.497 [INFO][5325] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:24:35.509170 containerd[1504]: 2025-05-17 00:24:35.502 [WARNING][5325] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c3d3ab3e66e1b19d5d0a6670d5349331d2218d6c3775034b1f23ff75e9eed5d8" HandleID="k8s-pod-network.c3d3ab3e66e1b19d5d0a6670d5349331d2218d6c3775034b1f23ff75e9eed5d8" Workload="ci--4081--3--3--n--82e895e080-k8s-coredns--668d6bf9bc--2n42x-eth0" May 17 00:24:35.509170 containerd[1504]: 2025-05-17 00:24:35.502 [INFO][5325] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c3d3ab3e66e1b19d5d0a6670d5349331d2218d6c3775034b1f23ff75e9eed5d8" HandleID="k8s-pod-network.c3d3ab3e66e1b19d5d0a6670d5349331d2218d6c3775034b1f23ff75e9eed5d8" Workload="ci--4081--3--3--n--82e895e080-k8s-coredns--668d6bf9bc--2n42x-eth0" May 17 00:24:35.509170 containerd[1504]: 2025-05-17 00:24:35.503 [INFO][5325] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:24:35.509170 containerd[1504]: 2025-05-17 00:24:35.505 [INFO][5318] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c3d3ab3e66e1b19d5d0a6670d5349331d2218d6c3775034b1f23ff75e9eed5d8" May 17 00:24:35.509170 containerd[1504]: time="2025-05-17T00:24:35.507565483Z" level=info msg="TearDown network for sandbox \"c3d3ab3e66e1b19d5d0a6670d5349331d2218d6c3775034b1f23ff75e9eed5d8\" successfully" May 17 00:24:35.509170 containerd[1504]: time="2025-05-17T00:24:35.507590911Z" level=info msg="StopPodSandbox for \"c3d3ab3e66e1b19d5d0a6670d5349331d2218d6c3775034b1f23ff75e9eed5d8\" returns successfully" May 17 00:24:35.509170 containerd[1504]: time="2025-05-17T00:24:35.508440758Z" level=info msg="RemovePodSandbox for \"c3d3ab3e66e1b19d5d0a6670d5349331d2218d6c3775034b1f23ff75e9eed5d8\"" May 17 00:24:35.509170 containerd[1504]: time="2025-05-17T00:24:35.508490611Z" level=info msg="Forcibly stopping sandbox \"c3d3ab3e66e1b19d5d0a6670d5349331d2218d6c3775034b1f23ff75e9eed5d8\"" May 17 00:24:35.612195 containerd[1504]: 2025-05-17 00:24:35.543 [WARNING][5339] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c3d3ab3e66e1b19d5d0a6670d5349331d2218d6c3775034b1f23ff75e9eed5d8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--82e895e080-k8s-coredns--668d6bf9bc--2n42x-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"30984a25-4953-4a27-9699-f4c7434a26ed", ResourceVersion:"949", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 23, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-82e895e080", ContainerID:"95524c3259d7a92f4c340f3adbc97955342dd2dfe7862d9b8c21271938104228", Pod:"coredns-668d6bf9bc-2n42x", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.0.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia87ed38347f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:24:35.612195 containerd[1504]: 2025-05-17 00:24:35.543 [INFO][5339] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c3d3ab3e66e1b19d5d0a6670d5349331d2218d6c3775034b1f23ff75e9eed5d8" May 17 00:24:35.612195 containerd[1504]: 2025-05-17 00:24:35.543 [INFO][5339] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c3d3ab3e66e1b19d5d0a6670d5349331d2218d6c3775034b1f23ff75e9eed5d8" iface="eth0" netns="" May 17 00:24:35.612195 containerd[1504]: 2025-05-17 00:24:35.543 [INFO][5339] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c3d3ab3e66e1b19d5d0a6670d5349331d2218d6c3775034b1f23ff75e9eed5d8" May 17 00:24:35.612195 containerd[1504]: 2025-05-17 00:24:35.543 [INFO][5339] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c3d3ab3e66e1b19d5d0a6670d5349331d2218d6c3775034b1f23ff75e9eed5d8" May 17 00:24:35.612195 containerd[1504]: 2025-05-17 00:24:35.589 [INFO][5346] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c3d3ab3e66e1b19d5d0a6670d5349331d2218d6c3775034b1f23ff75e9eed5d8" HandleID="k8s-pod-network.c3d3ab3e66e1b19d5d0a6670d5349331d2218d6c3775034b1f23ff75e9eed5d8" Workload="ci--4081--3--3--n--82e895e080-k8s-coredns--668d6bf9bc--2n42x-eth0" May 17 00:24:35.612195 containerd[1504]: 2025-05-17 00:24:35.589 [INFO][5346] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:24:35.612195 containerd[1504]: 2025-05-17 00:24:35.589 [INFO][5346] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:24:35.612195 containerd[1504]: 2025-05-17 00:24:35.598 [WARNING][5346] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c3d3ab3e66e1b19d5d0a6670d5349331d2218d6c3775034b1f23ff75e9eed5d8" HandleID="k8s-pod-network.c3d3ab3e66e1b19d5d0a6670d5349331d2218d6c3775034b1f23ff75e9eed5d8" Workload="ci--4081--3--3--n--82e895e080-k8s-coredns--668d6bf9bc--2n42x-eth0" May 17 00:24:35.612195 containerd[1504]: 2025-05-17 00:24:35.598 [INFO][5346] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c3d3ab3e66e1b19d5d0a6670d5349331d2218d6c3775034b1f23ff75e9eed5d8" HandleID="k8s-pod-network.c3d3ab3e66e1b19d5d0a6670d5349331d2218d6c3775034b1f23ff75e9eed5d8" Workload="ci--4081--3--3--n--82e895e080-k8s-coredns--668d6bf9bc--2n42x-eth0" May 17 00:24:35.612195 containerd[1504]: 2025-05-17 00:24:35.599 [INFO][5346] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:24:35.612195 containerd[1504]: 2025-05-17 00:24:35.602 [INFO][5339] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c3d3ab3e66e1b19d5d0a6670d5349331d2218d6c3775034b1f23ff75e9eed5d8" May 17 00:24:35.612195 containerd[1504]: time="2025-05-17T00:24:35.611440395Z" level=info msg="TearDown network for sandbox \"c3d3ab3e66e1b19d5d0a6670d5349331d2218d6c3775034b1f23ff75e9eed5d8\" successfully" May 17 00:24:35.641198 containerd[1504]: time="2025-05-17T00:24:35.641120194Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c3d3ab3e66e1b19d5d0a6670d5349331d2218d6c3775034b1f23ff75e9eed5d8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:24:35.641514 containerd[1504]: time="2025-05-17T00:24:35.641496296Z" level=info msg="RemovePodSandbox \"c3d3ab3e66e1b19d5d0a6670d5349331d2218d6c3775034b1f23ff75e9eed5d8\" returns successfully" May 17 00:24:35.659265 containerd[1504]: time="2025-05-17T00:24:35.658528870Z" level=info msg="StopPodSandbox for \"8b9b7e04ad58ffc6fb0e0ef161ea7cf912bb65a8b7e2d84210e35f65b2c46b0e\"" May 17 00:24:35.744365 containerd[1504]: 2025-05-17 00:24:35.691 [WARNING][5361] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8b9b7e04ad58ffc6fb0e0ef161ea7cf912bb65a8b7e2d84210e35f65b2c46b0e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--82e895e080-k8s-coredns--668d6bf9bc--8bdt9-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"141e29e6-7c60-4ef0-8843-86313045c72f", ResourceVersion:"962", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 23, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-82e895e080", ContainerID:"c82e6dc0b33104df2b7d80ce38e7b06ef89169d982caacc51d4928fa09bee745", Pod:"coredns-668d6bf9bc-8bdt9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.0.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4e87e947f89", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:24:35.744365 containerd[1504]: 2025-05-17 00:24:35.691 [INFO][5361] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8b9b7e04ad58ffc6fb0e0ef161ea7cf912bb65a8b7e2d84210e35f65b2c46b0e" May 17 00:24:35.744365 containerd[1504]: 2025-05-17 00:24:35.691 [INFO][5361] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8b9b7e04ad58ffc6fb0e0ef161ea7cf912bb65a8b7e2d84210e35f65b2c46b0e" iface="eth0" netns="" May 17 00:24:35.744365 containerd[1504]: 2025-05-17 00:24:35.691 [INFO][5361] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8b9b7e04ad58ffc6fb0e0ef161ea7cf912bb65a8b7e2d84210e35f65b2c46b0e" May 17 00:24:35.744365 containerd[1504]: 2025-05-17 00:24:35.691 [INFO][5361] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8b9b7e04ad58ffc6fb0e0ef161ea7cf912bb65a8b7e2d84210e35f65b2c46b0e" May 17 00:24:35.744365 containerd[1504]: 2025-05-17 00:24:35.717 [INFO][5369] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8b9b7e04ad58ffc6fb0e0ef161ea7cf912bb65a8b7e2d84210e35f65b2c46b0e" HandleID="k8s-pod-network.8b9b7e04ad58ffc6fb0e0ef161ea7cf912bb65a8b7e2d84210e35f65b2c46b0e" Workload="ci--4081--3--3--n--82e895e080-k8s-coredns--668d6bf9bc--8bdt9-eth0" May 17 00:24:35.744365 containerd[1504]: 2025-05-17 00:24:35.717 [INFO][5369] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:24:35.744365 containerd[1504]: 2025-05-17 00:24:35.717 [INFO][5369] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:24:35.744365 containerd[1504]: 2025-05-17 00:24:35.733 [WARNING][5369] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="8b9b7e04ad58ffc6fb0e0ef161ea7cf912bb65a8b7e2d84210e35f65b2c46b0e" HandleID="k8s-pod-network.8b9b7e04ad58ffc6fb0e0ef161ea7cf912bb65a8b7e2d84210e35f65b2c46b0e" Workload="ci--4081--3--3--n--82e895e080-k8s-coredns--668d6bf9bc--8bdt9-eth0" May 17 00:24:35.744365 containerd[1504]: 2025-05-17 00:24:35.733 [INFO][5369] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8b9b7e04ad58ffc6fb0e0ef161ea7cf912bb65a8b7e2d84210e35f65b2c46b0e" HandleID="k8s-pod-network.8b9b7e04ad58ffc6fb0e0ef161ea7cf912bb65a8b7e2d84210e35f65b2c46b0e" Workload="ci--4081--3--3--n--82e895e080-k8s-coredns--668d6bf9bc--8bdt9-eth0" May 17 00:24:35.744365 containerd[1504]: 2025-05-17 00:24:35.736 [INFO][5369] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:24:35.744365 containerd[1504]: 2025-05-17 00:24:35.740 [INFO][5361] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8b9b7e04ad58ffc6fb0e0ef161ea7cf912bb65a8b7e2d84210e35f65b2c46b0e" May 17 00:24:35.744365 containerd[1504]: time="2025-05-17T00:24:35.743757344Z" level=info msg="TearDown network for sandbox \"8b9b7e04ad58ffc6fb0e0ef161ea7cf912bb65a8b7e2d84210e35f65b2c46b0e\" successfully" May 17 00:24:35.744365 containerd[1504]: time="2025-05-17T00:24:35.743875654Z" level=info msg="StopPodSandbox for \"8b9b7e04ad58ffc6fb0e0ef161ea7cf912bb65a8b7e2d84210e35f65b2c46b0e\" returns successfully" May 17 00:24:35.746735 containerd[1504]: time="2025-05-17T00:24:35.745926113Z" level=info msg="RemovePodSandbox for \"8b9b7e04ad58ffc6fb0e0ef161ea7cf912bb65a8b7e2d84210e35f65b2c46b0e\"" May 17 00:24:35.746735 containerd[1504]: time="2025-05-17T00:24:35.746092284Z" level=info msg="Forcibly stopping sandbox \"8b9b7e04ad58ffc6fb0e0ef161ea7cf912bb65a8b7e2d84210e35f65b2c46b0e\"" May 17 00:24:35.888701 containerd[1504]: 2025-05-17 00:24:35.828 [WARNING][5392] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8b9b7e04ad58ffc6fb0e0ef161ea7cf912bb65a8b7e2d84210e35f65b2c46b0e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--82e895e080-k8s-coredns--668d6bf9bc--8bdt9-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"141e29e6-7c60-4ef0-8843-86313045c72f", ResourceVersion:"962", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 23, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-82e895e080", ContainerID:"c82e6dc0b33104df2b7d80ce38e7b06ef89169d982caacc51d4928fa09bee745", Pod:"coredns-668d6bf9bc-8bdt9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.0.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4e87e947f89", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:24:35.888701 containerd[1504]: 2025-05-17 00:24:35.829 [INFO][5392] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8b9b7e04ad58ffc6fb0e0ef161ea7cf912bb65a8b7e2d84210e35f65b2c46b0e" May 17 00:24:35.888701 containerd[1504]: 2025-05-17 00:24:35.829 [INFO][5392] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8b9b7e04ad58ffc6fb0e0ef161ea7cf912bb65a8b7e2d84210e35f65b2c46b0e" iface="eth0" netns="" May 17 00:24:35.888701 containerd[1504]: 2025-05-17 00:24:35.829 [INFO][5392] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8b9b7e04ad58ffc6fb0e0ef161ea7cf912bb65a8b7e2d84210e35f65b2c46b0e" May 17 00:24:35.888701 containerd[1504]: 2025-05-17 00:24:35.829 [INFO][5392] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8b9b7e04ad58ffc6fb0e0ef161ea7cf912bb65a8b7e2d84210e35f65b2c46b0e" May 17 00:24:35.888701 containerd[1504]: 2025-05-17 00:24:35.870 [INFO][5399] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8b9b7e04ad58ffc6fb0e0ef161ea7cf912bb65a8b7e2d84210e35f65b2c46b0e" HandleID="k8s-pod-network.8b9b7e04ad58ffc6fb0e0ef161ea7cf912bb65a8b7e2d84210e35f65b2c46b0e" Workload="ci--4081--3--3--n--82e895e080-k8s-coredns--668d6bf9bc--8bdt9-eth0" May 17 00:24:35.888701 containerd[1504]: 2025-05-17 00:24:35.870 [INFO][5399] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:24:35.888701 containerd[1504]: 2025-05-17 00:24:35.871 [INFO][5399] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:24:35.888701 containerd[1504]: 2025-05-17 00:24:35.879 [WARNING][5399] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="8b9b7e04ad58ffc6fb0e0ef161ea7cf912bb65a8b7e2d84210e35f65b2c46b0e" HandleID="k8s-pod-network.8b9b7e04ad58ffc6fb0e0ef161ea7cf912bb65a8b7e2d84210e35f65b2c46b0e" Workload="ci--4081--3--3--n--82e895e080-k8s-coredns--668d6bf9bc--8bdt9-eth0" May 17 00:24:35.888701 containerd[1504]: 2025-05-17 00:24:35.879 [INFO][5399] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8b9b7e04ad58ffc6fb0e0ef161ea7cf912bb65a8b7e2d84210e35f65b2c46b0e" HandleID="k8s-pod-network.8b9b7e04ad58ffc6fb0e0ef161ea7cf912bb65a8b7e2d84210e35f65b2c46b0e" Workload="ci--4081--3--3--n--82e895e080-k8s-coredns--668d6bf9bc--8bdt9-eth0" May 17 00:24:35.888701 containerd[1504]: 2025-05-17 00:24:35.882 [INFO][5399] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:24:35.888701 containerd[1504]: 2025-05-17 00:24:35.885 [INFO][5392] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8b9b7e04ad58ffc6fb0e0ef161ea7cf912bb65a8b7e2d84210e35f65b2c46b0e" May 17 00:24:35.890121 containerd[1504]: time="2025-05-17T00:24:35.890023279Z" level=info msg="TearDown network for sandbox \"8b9b7e04ad58ffc6fb0e0ef161ea7cf912bb65a8b7e2d84210e35f65b2c46b0e\" successfully" May 17 00:24:35.895488 containerd[1504]: time="2025-05-17T00:24:35.895423443Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8b9b7e04ad58ffc6fb0e0ef161ea7cf912bb65a8b7e2d84210e35f65b2c46b0e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:24:35.895928 containerd[1504]: time="2025-05-17T00:24:35.895609160Z" level=info msg="RemovePodSandbox \"8b9b7e04ad58ffc6fb0e0ef161ea7cf912bb65a8b7e2d84210e35f65b2c46b0e\" returns successfully" May 17 00:24:35.896847 containerd[1504]: time="2025-05-17T00:24:35.896645805Z" level=info msg="StopPodSandbox for \"fa936c9511dea8f9a9bc6a8e691cdca43c7192f5314991837c033de699e371aa\"" May 17 00:24:36.008870 containerd[1504]: 2025-05-17 00:24:35.960 [WARNING][5413] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fa936c9511dea8f9a9bc6a8e691cdca43c7192f5314991837c033de699e371aa" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--82e895e080-k8s-calico--apiserver--555577f7d7--qfnvl-eth0", GenerateName:"calico-apiserver-555577f7d7-", Namespace:"calico-apiserver", SelfLink:"", UID:"114d0358-ddcf-4c04-bb03-89411102b031", ResourceVersion:"1022", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 23, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"555577f7d7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-82e895e080", ContainerID:"6b4ea0b1e2d83ab6f034997ccbd33adb5893d022eecc93186d29eb2984f371f3", Pod:"calico-apiserver-555577f7d7-qfnvl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.0.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliadb4e9e3a02", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:24:36.008870 containerd[1504]: 2025-05-17 00:24:35.961 [INFO][5413] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fa936c9511dea8f9a9bc6a8e691cdca43c7192f5314991837c033de699e371aa" May 17 00:24:36.008870 containerd[1504]: 2025-05-17 00:24:35.961 [INFO][5413] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fa936c9511dea8f9a9bc6a8e691cdca43c7192f5314991837c033de699e371aa" iface="eth0" netns="" May 17 00:24:36.008870 containerd[1504]: 2025-05-17 00:24:35.962 [INFO][5413] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fa936c9511dea8f9a9bc6a8e691cdca43c7192f5314991837c033de699e371aa" May 17 00:24:36.008870 containerd[1504]: 2025-05-17 00:24:35.962 [INFO][5413] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fa936c9511dea8f9a9bc6a8e691cdca43c7192f5314991837c033de699e371aa" May 17 00:24:36.008870 containerd[1504]: 2025-05-17 00:24:35.992 [INFO][5420] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fa936c9511dea8f9a9bc6a8e691cdca43c7192f5314991837c033de699e371aa" HandleID="k8s-pod-network.fa936c9511dea8f9a9bc6a8e691cdca43c7192f5314991837c033de699e371aa" Workload="ci--4081--3--3--n--82e895e080-k8s-calico--apiserver--555577f7d7--qfnvl-eth0" May 17 00:24:36.008870 containerd[1504]: 2025-05-17 00:24:35.992 [INFO][5420] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:24:36.008870 containerd[1504]: 2025-05-17 00:24:35.992 [INFO][5420] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:24:36.008870 containerd[1504]: 2025-05-17 00:24:36.000 [WARNING][5420] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fa936c9511dea8f9a9bc6a8e691cdca43c7192f5314991837c033de699e371aa" HandleID="k8s-pod-network.fa936c9511dea8f9a9bc6a8e691cdca43c7192f5314991837c033de699e371aa" Workload="ci--4081--3--3--n--82e895e080-k8s-calico--apiserver--555577f7d7--qfnvl-eth0" May 17 00:24:36.008870 containerd[1504]: 2025-05-17 00:24:36.000 [INFO][5420] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fa936c9511dea8f9a9bc6a8e691cdca43c7192f5314991837c033de699e371aa" HandleID="k8s-pod-network.fa936c9511dea8f9a9bc6a8e691cdca43c7192f5314991837c033de699e371aa" Workload="ci--4081--3--3--n--82e895e080-k8s-calico--apiserver--555577f7d7--qfnvl-eth0" May 17 00:24:36.008870 containerd[1504]: 2025-05-17 00:24:36.003 [INFO][5420] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:24:36.008870 containerd[1504]: 2025-05-17 00:24:36.006 [INFO][5413] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fa936c9511dea8f9a9bc6a8e691cdca43c7192f5314991837c033de699e371aa" May 17 00:24:36.010462 containerd[1504]: time="2025-05-17T00:24:36.008869517Z" level=info msg="TearDown network for sandbox \"fa936c9511dea8f9a9bc6a8e691cdca43c7192f5314991837c033de699e371aa\" successfully" May 17 00:24:36.010462 containerd[1504]: time="2025-05-17T00:24:36.008898281Z" level=info msg="StopPodSandbox for \"fa936c9511dea8f9a9bc6a8e691cdca43c7192f5314991837c033de699e371aa\" returns successfully" May 17 00:24:36.010462 containerd[1504]: time="2025-05-17T00:24:36.010036276Z" level=info msg="RemovePodSandbox for \"fa936c9511dea8f9a9bc6a8e691cdca43c7192f5314991837c033de699e371aa\"" May 17 00:24:36.010462 containerd[1504]: time="2025-05-17T00:24:36.010062344Z" level=info msg="Forcibly stopping sandbox \"fa936c9511dea8f9a9bc6a8e691cdca43c7192f5314991837c033de699e371aa\"" May 17 00:24:36.182230 containerd[1504]: 2025-05-17 00:24:36.063 [WARNING][5434] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fa936c9511dea8f9a9bc6a8e691cdca43c7192f5314991837c033de699e371aa" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--82e895e080-k8s-calico--apiserver--555577f7d7--qfnvl-eth0", GenerateName:"calico-apiserver-555577f7d7-", Namespace:"calico-apiserver", SelfLink:"", UID:"114d0358-ddcf-4c04-bb03-89411102b031", ResourceVersion:"1022", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 23, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"555577f7d7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-82e895e080", ContainerID:"6b4ea0b1e2d83ab6f034997ccbd33adb5893d022eecc93186d29eb2984f371f3", Pod:"calico-apiserver-555577f7d7-qfnvl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.0.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliadb4e9e3a02", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:24:36.182230 containerd[1504]: 2025-05-17 00:24:36.064 [INFO][5434] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fa936c9511dea8f9a9bc6a8e691cdca43c7192f5314991837c033de699e371aa" May 17 00:24:36.182230 containerd[1504]: 2025-05-17 00:24:36.064 [INFO][5434] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fa936c9511dea8f9a9bc6a8e691cdca43c7192f5314991837c033de699e371aa" iface="eth0" netns="" May 17 00:24:36.182230 containerd[1504]: 2025-05-17 00:24:36.064 [INFO][5434] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fa936c9511dea8f9a9bc6a8e691cdca43c7192f5314991837c033de699e371aa" May 17 00:24:36.182230 containerd[1504]: 2025-05-17 00:24:36.065 [INFO][5434] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fa936c9511dea8f9a9bc6a8e691cdca43c7192f5314991837c033de699e371aa" May 17 00:24:36.182230 containerd[1504]: 2025-05-17 00:24:36.108 [INFO][5442] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fa936c9511dea8f9a9bc6a8e691cdca43c7192f5314991837c033de699e371aa" HandleID="k8s-pod-network.fa936c9511dea8f9a9bc6a8e691cdca43c7192f5314991837c033de699e371aa" Workload="ci--4081--3--3--n--82e895e080-k8s-calico--apiserver--555577f7d7--qfnvl-eth0" May 17 00:24:36.182230 containerd[1504]: 2025-05-17 00:24:36.109 [INFO][5442] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:24:36.182230 containerd[1504]: 2025-05-17 00:24:36.109 [INFO][5442] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:24:36.182230 containerd[1504]: 2025-05-17 00:24:36.124 [WARNING][5442] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fa936c9511dea8f9a9bc6a8e691cdca43c7192f5314991837c033de699e371aa" HandleID="k8s-pod-network.fa936c9511dea8f9a9bc6a8e691cdca43c7192f5314991837c033de699e371aa" Workload="ci--4081--3--3--n--82e895e080-k8s-calico--apiserver--555577f7d7--qfnvl-eth0" May 17 00:24:36.182230 containerd[1504]: 2025-05-17 00:24:36.124 [INFO][5442] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fa936c9511dea8f9a9bc6a8e691cdca43c7192f5314991837c033de699e371aa" HandleID="k8s-pod-network.fa936c9511dea8f9a9bc6a8e691cdca43c7192f5314991837c033de699e371aa" Workload="ci--4081--3--3--n--82e895e080-k8s-calico--apiserver--555577f7d7--qfnvl-eth0" May 17 00:24:36.182230 containerd[1504]: 2025-05-17 00:24:36.128 [INFO][5442] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:24:36.182230 containerd[1504]: 2025-05-17 00:24:36.132 [INFO][5434] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fa936c9511dea8f9a9bc6a8e691cdca43c7192f5314991837c033de699e371aa" May 17 00:24:36.182230 containerd[1504]: time="2025-05-17T00:24:36.180045829Z" level=info msg="TearDown network for sandbox \"fa936c9511dea8f9a9bc6a8e691cdca43c7192f5314991837c033de699e371aa\" successfully" May 17 00:24:36.243264 containerd[1504]: time="2025-05-17T00:24:36.242579924Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fa936c9511dea8f9a9bc6a8e691cdca43c7192f5314991837c033de699e371aa\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:24:36.243527 containerd[1504]: time="2025-05-17T00:24:36.243499501Z" level=info msg="RemovePodSandbox \"fa936c9511dea8f9a9bc6a8e691cdca43c7192f5314991837c033de699e371aa\" returns successfully" May 17 00:24:36.246178 containerd[1504]: time="2025-05-17T00:24:36.246074199Z" level=info msg="StopPodSandbox for \"dc50902ba7be5eadfbc70969cbb7076d80d327f57115b638c06941e26fa3a0ac\"" May 17 00:24:36.344489 containerd[1504]: 2025-05-17 00:24:36.299 [WARNING][5457] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="dc50902ba7be5eadfbc70969cbb7076d80d327f57115b638c06941e26fa3a0ac" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--82e895e080-k8s-goldmane--78d55f7ddc--9tht4-eth0", GenerateName:"goldmane-78d55f7ddc-", Namespace:"calico-system", SelfLink:"", UID:"1bde9b24-cd69-4946-af9c-950fec8a6c4b", ResourceVersion:"1077", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 23, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"78d55f7ddc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-82e895e080", ContainerID:"8590339d02798dcfe23985dfc160b421c473b04bc845ada125df701071f1d8d0", Pod:"goldmane-78d55f7ddc-9tht4", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.0.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali40d0116a55a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:24:36.344489 containerd[1504]: 2025-05-17 00:24:36.299 [INFO][5457] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="dc50902ba7be5eadfbc70969cbb7076d80d327f57115b638c06941e26fa3a0ac" May 17 00:24:36.344489 containerd[1504]: 2025-05-17 00:24:36.299 [INFO][5457] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="dc50902ba7be5eadfbc70969cbb7076d80d327f57115b638c06941e26fa3a0ac" iface="eth0" netns="" May 17 00:24:36.344489 containerd[1504]: 2025-05-17 00:24:36.300 [INFO][5457] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="dc50902ba7be5eadfbc70969cbb7076d80d327f57115b638c06941e26fa3a0ac" May 17 00:24:36.344489 containerd[1504]: 2025-05-17 00:24:36.300 [INFO][5457] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dc50902ba7be5eadfbc70969cbb7076d80d327f57115b638c06941e26fa3a0ac" May 17 00:24:36.344489 containerd[1504]: 2025-05-17 00:24:36.329 [INFO][5464] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dc50902ba7be5eadfbc70969cbb7076d80d327f57115b638c06941e26fa3a0ac" HandleID="k8s-pod-network.dc50902ba7be5eadfbc70969cbb7076d80d327f57115b638c06941e26fa3a0ac" Workload="ci--4081--3--3--n--82e895e080-k8s-goldmane--78d55f7ddc--9tht4-eth0" May 17 00:24:36.344489 containerd[1504]: 2025-05-17 00:24:36.329 [INFO][5464] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:24:36.344489 containerd[1504]: 2025-05-17 00:24:36.329 [INFO][5464] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:24:36.344489 containerd[1504]: 2025-05-17 00:24:36.336 [WARNING][5464] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dc50902ba7be5eadfbc70969cbb7076d80d327f57115b638c06941e26fa3a0ac" HandleID="k8s-pod-network.dc50902ba7be5eadfbc70969cbb7076d80d327f57115b638c06941e26fa3a0ac" Workload="ci--4081--3--3--n--82e895e080-k8s-goldmane--78d55f7ddc--9tht4-eth0" May 17 00:24:36.344489 containerd[1504]: 2025-05-17 00:24:36.336 [INFO][5464] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dc50902ba7be5eadfbc70969cbb7076d80d327f57115b638c06941e26fa3a0ac" HandleID="k8s-pod-network.dc50902ba7be5eadfbc70969cbb7076d80d327f57115b638c06941e26fa3a0ac" Workload="ci--4081--3--3--n--82e895e080-k8s-goldmane--78d55f7ddc--9tht4-eth0" May 17 00:24:36.344489 containerd[1504]: 2025-05-17 00:24:36.338 [INFO][5464] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:24:36.344489 containerd[1504]: 2025-05-17 00:24:36.341 [INFO][5457] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="dc50902ba7be5eadfbc70969cbb7076d80d327f57115b638c06941e26fa3a0ac" May 17 00:24:36.347232 containerd[1504]: time="2025-05-17T00:24:36.344949979Z" level=info msg="TearDown network for sandbox \"dc50902ba7be5eadfbc70969cbb7076d80d327f57115b638c06941e26fa3a0ac\" successfully" May 17 00:24:36.347232 containerd[1504]: time="2025-05-17T00:24:36.345004781Z" level=info msg="StopPodSandbox for \"dc50902ba7be5eadfbc70969cbb7076d80d327f57115b638c06941e26fa3a0ac\" returns successfully" May 17 00:24:36.347232 containerd[1504]: time="2025-05-17T00:24:36.346463104Z" level=info msg="RemovePodSandbox for \"dc50902ba7be5eadfbc70969cbb7076d80d327f57115b638c06941e26fa3a0ac\"" May 17 00:24:36.347232 containerd[1504]: time="2025-05-17T00:24:36.346490555Z" level=info msg="Forcibly stopping sandbox \"dc50902ba7be5eadfbc70969cbb7076d80d327f57115b638c06941e26fa3a0ac\"" May 17 00:24:36.438132 containerd[1504]: 2025-05-17 00:24:36.401 [WARNING][5478] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="dc50902ba7be5eadfbc70969cbb7076d80d327f57115b638c06941e26fa3a0ac" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--82e895e080-k8s-goldmane--78d55f7ddc--9tht4-eth0", GenerateName:"goldmane-78d55f7ddc-", Namespace:"calico-system", SelfLink:"", UID:"1bde9b24-cd69-4946-af9c-950fec8a6c4b", ResourceVersion:"1077", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 23, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"78d55f7ddc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-82e895e080", ContainerID:"8590339d02798dcfe23985dfc160b421c473b04bc845ada125df701071f1d8d0", Pod:"goldmane-78d55f7ddc-9tht4", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.0.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali40d0116a55a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:24:36.438132 containerd[1504]: 2025-05-17 00:24:36.402 [INFO][5478] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="dc50902ba7be5eadfbc70969cbb7076d80d327f57115b638c06941e26fa3a0ac" May 17 00:24:36.438132 containerd[1504]: 2025-05-17 00:24:36.402 [INFO][5478] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="dc50902ba7be5eadfbc70969cbb7076d80d327f57115b638c06941e26fa3a0ac" iface="eth0" netns="" May 17 00:24:36.438132 containerd[1504]: 2025-05-17 00:24:36.402 [INFO][5478] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="dc50902ba7be5eadfbc70969cbb7076d80d327f57115b638c06941e26fa3a0ac" May 17 00:24:36.438132 containerd[1504]: 2025-05-17 00:24:36.402 [INFO][5478] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dc50902ba7be5eadfbc70969cbb7076d80d327f57115b638c06941e26fa3a0ac" May 17 00:24:36.438132 containerd[1504]: 2025-05-17 00:24:36.427 [INFO][5485] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dc50902ba7be5eadfbc70969cbb7076d80d327f57115b638c06941e26fa3a0ac" HandleID="k8s-pod-network.dc50902ba7be5eadfbc70969cbb7076d80d327f57115b638c06941e26fa3a0ac" Workload="ci--4081--3--3--n--82e895e080-k8s-goldmane--78d55f7ddc--9tht4-eth0" May 17 00:24:36.438132 containerd[1504]: 2025-05-17 00:24:36.427 [INFO][5485] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:24:36.438132 containerd[1504]: 2025-05-17 00:24:36.427 [INFO][5485] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:24:36.438132 containerd[1504]: 2025-05-17 00:24:36.432 [WARNING][5485] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dc50902ba7be5eadfbc70969cbb7076d80d327f57115b638c06941e26fa3a0ac" HandleID="k8s-pod-network.dc50902ba7be5eadfbc70969cbb7076d80d327f57115b638c06941e26fa3a0ac" Workload="ci--4081--3--3--n--82e895e080-k8s-goldmane--78d55f7ddc--9tht4-eth0" May 17 00:24:36.438132 containerd[1504]: 2025-05-17 00:24:36.432 [INFO][5485] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dc50902ba7be5eadfbc70969cbb7076d80d327f57115b638c06941e26fa3a0ac" HandleID="k8s-pod-network.dc50902ba7be5eadfbc70969cbb7076d80d327f57115b638c06941e26fa3a0ac" Workload="ci--4081--3--3--n--82e895e080-k8s-goldmane--78d55f7ddc--9tht4-eth0" May 17 00:24:36.438132 containerd[1504]: 2025-05-17 00:24:36.434 [INFO][5485] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:24:36.438132 containerd[1504]: 2025-05-17 00:24:36.436 [INFO][5478] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="dc50902ba7be5eadfbc70969cbb7076d80d327f57115b638c06941e26fa3a0ac" May 17 00:24:36.439853 containerd[1504]: time="2025-05-17T00:24:36.438208478Z" level=info msg="TearDown network for sandbox \"dc50902ba7be5eadfbc70969cbb7076d80d327f57115b638c06941e26fa3a0ac\" successfully" May 17 00:24:36.441474 containerd[1504]: time="2025-05-17T00:24:36.441449491Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"dc50902ba7be5eadfbc70969cbb7076d80d327f57115b638c06941e26fa3a0ac\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:24:36.441868 containerd[1504]: time="2025-05-17T00:24:36.441502630Z" level=info msg="RemovePodSandbox \"dc50902ba7be5eadfbc70969cbb7076d80d327f57115b638c06941e26fa3a0ac\" returns successfully" May 17 00:24:36.452281 containerd[1504]: time="2025-05-17T00:24:36.452203152Z" level=info msg="StopPodSandbox for \"3e68f8552d81667047e7740ed81b005eb562625e975c37459feb2cf593b58316\"" May 17 00:24:36.514982 containerd[1504]: 2025-05-17 00:24:36.480 [WARNING][5499] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3e68f8552d81667047e7740ed81b005eb562625e975c37459feb2cf593b58316" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--82e895e080-k8s-csi--node--driver--ht64s-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"79f08894-50f6-4f34-ab06-f713767f2567", ResourceVersion:"1043", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 23, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78f6f74485", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-82e895e080", ContainerID:"966dbb7262a3c9aed2d9902b8ad215fb557e804c1bf2dd91e57d27d3b301a328", Pod:"csi-node-driver-ht64s", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.0.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali597f6480dfa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:24:36.514982 containerd[1504]: 2025-05-17 00:24:36.481 [INFO][5499] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3e68f8552d81667047e7740ed81b005eb562625e975c37459feb2cf593b58316" May 17 00:24:36.514982 containerd[1504]: 2025-05-17 00:24:36.481 [INFO][5499] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3e68f8552d81667047e7740ed81b005eb562625e975c37459feb2cf593b58316" iface="eth0" netns="" May 17 00:24:36.514982 containerd[1504]: 2025-05-17 00:24:36.481 [INFO][5499] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3e68f8552d81667047e7740ed81b005eb562625e975c37459feb2cf593b58316" May 17 00:24:36.514982 containerd[1504]: 2025-05-17 00:24:36.481 [INFO][5499] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3e68f8552d81667047e7740ed81b005eb562625e975c37459feb2cf593b58316" May 17 00:24:36.514982 containerd[1504]: 2025-05-17 00:24:36.504 [INFO][5506] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3e68f8552d81667047e7740ed81b005eb562625e975c37459feb2cf593b58316" HandleID="k8s-pod-network.3e68f8552d81667047e7740ed81b005eb562625e975c37459feb2cf593b58316" Workload="ci--4081--3--3--n--82e895e080-k8s-csi--node--driver--ht64s-eth0" May 17 00:24:36.514982 containerd[1504]: 2025-05-17 00:24:36.504 [INFO][5506] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:24:36.514982 containerd[1504]: 2025-05-17 00:24:36.504 [INFO][5506] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:24:36.514982 containerd[1504]: 2025-05-17 00:24:36.509 [WARNING][5506] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3e68f8552d81667047e7740ed81b005eb562625e975c37459feb2cf593b58316" HandleID="k8s-pod-network.3e68f8552d81667047e7740ed81b005eb562625e975c37459feb2cf593b58316" Workload="ci--4081--3--3--n--82e895e080-k8s-csi--node--driver--ht64s-eth0" May 17 00:24:36.514982 containerd[1504]: 2025-05-17 00:24:36.509 [INFO][5506] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3e68f8552d81667047e7740ed81b005eb562625e975c37459feb2cf593b58316" HandleID="k8s-pod-network.3e68f8552d81667047e7740ed81b005eb562625e975c37459feb2cf593b58316" Workload="ci--4081--3--3--n--82e895e080-k8s-csi--node--driver--ht64s-eth0" May 17 00:24:36.514982 containerd[1504]: 2025-05-17 00:24:36.511 [INFO][5506] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:24:36.514982 containerd[1504]: 2025-05-17 00:24:36.513 [INFO][5499] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3e68f8552d81667047e7740ed81b005eb562625e975c37459feb2cf593b58316" May 17 00:24:36.517381 containerd[1504]: time="2025-05-17T00:24:36.515021287Z" level=info msg="TearDown network for sandbox \"3e68f8552d81667047e7740ed81b005eb562625e975c37459feb2cf593b58316\" successfully" May 17 00:24:36.517381 containerd[1504]: time="2025-05-17T00:24:36.515044210Z" level=info msg="StopPodSandbox for \"3e68f8552d81667047e7740ed81b005eb562625e975c37459feb2cf593b58316\" returns successfully" May 17 00:24:36.517381 containerd[1504]: time="2025-05-17T00:24:36.515546888Z" level=info msg="RemovePodSandbox for \"3e68f8552d81667047e7740ed81b005eb562625e975c37459feb2cf593b58316\"" May 17 00:24:36.517381 containerd[1504]: time="2025-05-17T00:24:36.515570082Z" level=info msg="Forcibly stopping sandbox \"3e68f8552d81667047e7740ed81b005eb562625e975c37459feb2cf593b58316\"" May 17 00:24:36.578135 containerd[1504]: 2025-05-17 00:24:36.548 [WARNING][5520] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3e68f8552d81667047e7740ed81b005eb562625e975c37459feb2cf593b58316" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--82e895e080-k8s-csi--node--driver--ht64s-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"79f08894-50f6-4f34-ab06-f713767f2567", ResourceVersion:"1043", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 23, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78f6f74485", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-82e895e080", ContainerID:"966dbb7262a3c9aed2d9902b8ad215fb557e804c1bf2dd91e57d27d3b301a328", Pod:"csi-node-driver-ht64s", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.0.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali597f6480dfa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:24:36.578135 containerd[1504]: 2025-05-17 00:24:36.549 [INFO][5520] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3e68f8552d81667047e7740ed81b005eb562625e975c37459feb2cf593b58316" May 17 00:24:36.578135 containerd[1504]: 2025-05-17 00:24:36.549 [INFO][5520] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3e68f8552d81667047e7740ed81b005eb562625e975c37459feb2cf593b58316" iface="eth0" netns="" May 17 00:24:36.578135 containerd[1504]: 2025-05-17 00:24:36.549 [INFO][5520] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3e68f8552d81667047e7740ed81b005eb562625e975c37459feb2cf593b58316" May 17 00:24:36.578135 containerd[1504]: 2025-05-17 00:24:36.549 [INFO][5520] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3e68f8552d81667047e7740ed81b005eb562625e975c37459feb2cf593b58316" May 17 00:24:36.578135 containerd[1504]: 2025-05-17 00:24:36.566 [INFO][5527] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3e68f8552d81667047e7740ed81b005eb562625e975c37459feb2cf593b58316" HandleID="k8s-pod-network.3e68f8552d81667047e7740ed81b005eb562625e975c37459feb2cf593b58316" Workload="ci--4081--3--3--n--82e895e080-k8s-csi--node--driver--ht64s-eth0" May 17 00:24:36.578135 containerd[1504]: 2025-05-17 00:24:36.567 [INFO][5527] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:24:36.578135 containerd[1504]: 2025-05-17 00:24:36.567 [INFO][5527] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:24:36.578135 containerd[1504]: 2025-05-17 00:24:36.572 [WARNING][5527] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3e68f8552d81667047e7740ed81b005eb562625e975c37459feb2cf593b58316" HandleID="k8s-pod-network.3e68f8552d81667047e7740ed81b005eb562625e975c37459feb2cf593b58316" Workload="ci--4081--3--3--n--82e895e080-k8s-csi--node--driver--ht64s-eth0" May 17 00:24:36.578135 containerd[1504]: 2025-05-17 00:24:36.572 [INFO][5527] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3e68f8552d81667047e7740ed81b005eb562625e975c37459feb2cf593b58316" HandleID="k8s-pod-network.3e68f8552d81667047e7740ed81b005eb562625e975c37459feb2cf593b58316" Workload="ci--4081--3--3--n--82e895e080-k8s-csi--node--driver--ht64s-eth0" May 17 00:24:36.578135 containerd[1504]: 2025-05-17 00:24:36.573 [INFO][5527] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:24:36.578135 containerd[1504]: 2025-05-17 00:24:36.575 [INFO][5520] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3e68f8552d81667047e7740ed81b005eb562625e975c37459feb2cf593b58316" May 17 00:24:36.580324 containerd[1504]: time="2025-05-17T00:24:36.578181873Z" level=info msg="TearDown network for sandbox \"3e68f8552d81667047e7740ed81b005eb562625e975c37459feb2cf593b58316\" successfully" May 17 00:24:36.581259 containerd[1504]: time="2025-05-17T00:24:36.581210769Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3e68f8552d81667047e7740ed81b005eb562625e975c37459feb2cf593b58316\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:24:36.581351 containerd[1504]: time="2025-05-17T00:24:36.581332557Z" level=info msg="RemovePodSandbox \"3e68f8552d81667047e7740ed81b005eb562625e975c37459feb2cf593b58316\" returns successfully" May 17 00:24:36.581861 containerd[1504]: time="2025-05-17T00:24:36.581839954Z" level=info msg="StopPodSandbox for \"bfebe88ae68f70d0c1629289739522a76e4ce066bf44937c5f8b06fe77478e49\"" May 17 00:24:36.645833 containerd[1504]: 2025-05-17 00:24:36.612 [WARNING][5541] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="bfebe88ae68f70d0c1629289739522a76e4ce066bf44937c5f8b06fe77478e49" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--82e895e080-k8s-calico--apiserver--555577f7d7--4kjkb-eth0", GenerateName:"calico-apiserver-555577f7d7-", Namespace:"calico-apiserver", SelfLink:"", UID:"0e2dc52f-271c-43c5-9af2-6be78554f3c4", ResourceVersion:"1048", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 23, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"555577f7d7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-82e895e080", ContainerID:"01b0dfe63760acab702566e82d27b19af8ac628299fe4f267ac25893e170ac77", Pod:"calico-apiserver-555577f7d7-4kjkb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.0.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia7a84f39ea0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:24:36.645833 containerd[1504]: 2025-05-17 00:24:36.613 [INFO][5541] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bfebe88ae68f70d0c1629289739522a76e4ce066bf44937c5f8b06fe77478e49" May 17 00:24:36.645833 containerd[1504]: 2025-05-17 00:24:36.613 [INFO][5541] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bfebe88ae68f70d0c1629289739522a76e4ce066bf44937c5f8b06fe77478e49" iface="eth0" netns="" May 17 00:24:36.645833 containerd[1504]: 2025-05-17 00:24:36.613 [INFO][5541] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bfebe88ae68f70d0c1629289739522a76e4ce066bf44937c5f8b06fe77478e49" May 17 00:24:36.645833 containerd[1504]: 2025-05-17 00:24:36.613 [INFO][5541] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bfebe88ae68f70d0c1629289739522a76e4ce066bf44937c5f8b06fe77478e49" May 17 00:24:36.645833 containerd[1504]: 2025-05-17 00:24:36.633 [INFO][5548] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bfebe88ae68f70d0c1629289739522a76e4ce066bf44937c5f8b06fe77478e49" HandleID="k8s-pod-network.bfebe88ae68f70d0c1629289739522a76e4ce066bf44937c5f8b06fe77478e49" Workload="ci--4081--3--3--n--82e895e080-k8s-calico--apiserver--555577f7d7--4kjkb-eth0" May 17 00:24:36.645833 containerd[1504]: 2025-05-17 00:24:36.633 [INFO][5548] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:24:36.645833 containerd[1504]: 2025-05-17 00:24:36.633 [INFO][5548] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:24:36.645833 containerd[1504]: 2025-05-17 00:24:36.640 [WARNING][5548] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bfebe88ae68f70d0c1629289739522a76e4ce066bf44937c5f8b06fe77478e49" HandleID="k8s-pod-network.bfebe88ae68f70d0c1629289739522a76e4ce066bf44937c5f8b06fe77478e49" Workload="ci--4081--3--3--n--82e895e080-k8s-calico--apiserver--555577f7d7--4kjkb-eth0" May 17 00:24:36.645833 containerd[1504]: 2025-05-17 00:24:36.640 [INFO][5548] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bfebe88ae68f70d0c1629289739522a76e4ce066bf44937c5f8b06fe77478e49" HandleID="k8s-pod-network.bfebe88ae68f70d0c1629289739522a76e4ce066bf44937c5f8b06fe77478e49" Workload="ci--4081--3--3--n--82e895e080-k8s-calico--apiserver--555577f7d7--4kjkb-eth0" May 17 00:24:36.645833 containerd[1504]: 2025-05-17 00:24:36.641 [INFO][5548] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:24:36.645833 containerd[1504]: 2025-05-17 00:24:36.643 [INFO][5541] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bfebe88ae68f70d0c1629289739522a76e4ce066bf44937c5f8b06fe77478e49" May 17 00:24:36.647291 containerd[1504]: time="2025-05-17T00:24:36.646942375Z" level=info msg="TearDown network for sandbox \"bfebe88ae68f70d0c1629289739522a76e4ce066bf44937c5f8b06fe77478e49\" successfully" May 17 00:24:36.647291 containerd[1504]: time="2025-05-17T00:24:36.646973974Z" level=info msg="StopPodSandbox for \"bfebe88ae68f70d0c1629289739522a76e4ce066bf44937c5f8b06fe77478e49\" returns successfully" May 17 00:24:36.648106 containerd[1504]: time="2025-05-17T00:24:36.648080981Z" level=info msg="RemovePodSandbox for \"bfebe88ae68f70d0c1629289739522a76e4ce066bf44937c5f8b06fe77478e49\"" May 17 00:24:36.648206 containerd[1504]: time="2025-05-17T00:24:36.648188301Z" level=info msg="Forcibly stopping sandbox \"bfebe88ae68f70d0c1629289739522a76e4ce066bf44937c5f8b06fe77478e49\"" May 17 00:24:36.719871 containerd[1504]: 2025-05-17 00:24:36.678 [WARNING][5563] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="bfebe88ae68f70d0c1629289739522a76e4ce066bf44937c5f8b06fe77478e49" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--82e895e080-k8s-calico--apiserver--555577f7d7--4kjkb-eth0", GenerateName:"calico-apiserver-555577f7d7-", Namespace:"calico-apiserver", SelfLink:"", UID:"0e2dc52f-271c-43c5-9af2-6be78554f3c4", ResourceVersion:"1048", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 23, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"555577f7d7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-82e895e080", ContainerID:"01b0dfe63760acab702566e82d27b19af8ac628299fe4f267ac25893e170ac77", Pod:"calico-apiserver-555577f7d7-4kjkb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.0.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia7a84f39ea0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:24:36.719871 containerd[1504]: 2025-05-17 00:24:36.680 [INFO][5563] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bfebe88ae68f70d0c1629289739522a76e4ce066bf44937c5f8b06fe77478e49" May 17 00:24:36.719871 containerd[1504]: 2025-05-17 00:24:36.680 [INFO][5563] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bfebe88ae68f70d0c1629289739522a76e4ce066bf44937c5f8b06fe77478e49" iface="eth0" netns="" May 17 00:24:36.719871 containerd[1504]: 2025-05-17 00:24:36.680 [INFO][5563] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bfebe88ae68f70d0c1629289739522a76e4ce066bf44937c5f8b06fe77478e49" May 17 00:24:36.719871 containerd[1504]: 2025-05-17 00:24:36.680 [INFO][5563] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bfebe88ae68f70d0c1629289739522a76e4ce066bf44937c5f8b06fe77478e49" May 17 00:24:36.719871 containerd[1504]: 2025-05-17 00:24:36.702 [INFO][5570] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bfebe88ae68f70d0c1629289739522a76e4ce066bf44937c5f8b06fe77478e49" HandleID="k8s-pod-network.bfebe88ae68f70d0c1629289739522a76e4ce066bf44937c5f8b06fe77478e49" Workload="ci--4081--3--3--n--82e895e080-k8s-calico--apiserver--555577f7d7--4kjkb-eth0" May 17 00:24:36.719871 containerd[1504]: 2025-05-17 00:24:36.702 [INFO][5570] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:24:36.719871 containerd[1504]: 2025-05-17 00:24:36.702 [INFO][5570] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:24:36.719871 containerd[1504]: 2025-05-17 00:24:36.712 [WARNING][5570] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bfebe88ae68f70d0c1629289739522a76e4ce066bf44937c5f8b06fe77478e49" HandleID="k8s-pod-network.bfebe88ae68f70d0c1629289739522a76e4ce066bf44937c5f8b06fe77478e49" Workload="ci--4081--3--3--n--82e895e080-k8s-calico--apiserver--555577f7d7--4kjkb-eth0" May 17 00:24:36.719871 containerd[1504]: 2025-05-17 00:24:36.712 [INFO][5570] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bfebe88ae68f70d0c1629289739522a76e4ce066bf44937c5f8b06fe77478e49" HandleID="k8s-pod-network.bfebe88ae68f70d0c1629289739522a76e4ce066bf44937c5f8b06fe77478e49" Workload="ci--4081--3--3--n--82e895e080-k8s-calico--apiserver--555577f7d7--4kjkb-eth0" May 17 00:24:36.719871 containerd[1504]: 2025-05-17 00:24:36.713 [INFO][5570] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:24:36.719871 containerd[1504]: 2025-05-17 00:24:36.715 [INFO][5563] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bfebe88ae68f70d0c1629289739522a76e4ce066bf44937c5f8b06fe77478e49" May 17 00:24:36.720874 containerd[1504]: time="2025-05-17T00:24:36.719911858Z" level=info msg="TearDown network for sandbox \"bfebe88ae68f70d0c1629289739522a76e4ce066bf44937c5f8b06fe77478e49\" successfully" May 17 00:24:36.729127 containerd[1504]: time="2025-05-17T00:24:36.727365796Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bfebe88ae68f70d0c1629289739522a76e4ce066bf44937c5f8b06fe77478e49\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:24:36.729127 containerd[1504]: time="2025-05-17T00:24:36.727488596Z" level=info msg="RemovePodSandbox \"bfebe88ae68f70d0c1629289739522a76e4ce066bf44937c5f8b06fe77478e49\" returns successfully" May 17 00:24:36.768994 containerd[1504]: time="2025-05-17T00:24:36.768683973Z" level=info msg="StopPodSandbox for \"97063cbd442df36c0cd1e6efe9a76cb57e78f8994f035651205385670295847b\"" May 17 00:24:36.883772 containerd[1504]: 2025-05-17 00:24:36.839 [WARNING][5584] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="97063cbd442df36c0cd1e6efe9a76cb57e78f8994f035651205385670295847b" WorkloadEndpoint="ci--4081--3--3--n--82e895e080-k8s-whisker--955f4745b--jt295-eth0" May 17 00:24:36.883772 containerd[1504]: 2025-05-17 00:24:36.839 [INFO][5584] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="97063cbd442df36c0cd1e6efe9a76cb57e78f8994f035651205385670295847b" May 17 00:24:36.883772 containerd[1504]: 2025-05-17 00:24:36.839 [INFO][5584] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="97063cbd442df36c0cd1e6efe9a76cb57e78f8994f035651205385670295847b" iface="eth0" netns="" May 17 00:24:36.883772 containerd[1504]: 2025-05-17 00:24:36.839 [INFO][5584] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="97063cbd442df36c0cd1e6efe9a76cb57e78f8994f035651205385670295847b" May 17 00:24:36.883772 containerd[1504]: 2025-05-17 00:24:36.839 [INFO][5584] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="97063cbd442df36c0cd1e6efe9a76cb57e78f8994f035651205385670295847b" May 17 00:24:36.883772 containerd[1504]: 2025-05-17 00:24:36.869 [INFO][5591] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="97063cbd442df36c0cd1e6efe9a76cb57e78f8994f035651205385670295847b" HandleID="k8s-pod-network.97063cbd442df36c0cd1e6efe9a76cb57e78f8994f035651205385670295847b" Workload="ci--4081--3--3--n--82e895e080-k8s-whisker--955f4745b--jt295-eth0" May 17 00:24:36.883772 containerd[1504]: 2025-05-17 00:24:36.869 [INFO][5591] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:24:36.883772 containerd[1504]: 2025-05-17 00:24:36.869 [INFO][5591] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:24:36.883772 containerd[1504]: 2025-05-17 00:24:36.875 [WARNING][5591] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="97063cbd442df36c0cd1e6efe9a76cb57e78f8994f035651205385670295847b" HandleID="k8s-pod-network.97063cbd442df36c0cd1e6efe9a76cb57e78f8994f035651205385670295847b" Workload="ci--4081--3--3--n--82e895e080-k8s-whisker--955f4745b--jt295-eth0" May 17 00:24:36.883772 containerd[1504]: 2025-05-17 00:24:36.875 [INFO][5591] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="97063cbd442df36c0cd1e6efe9a76cb57e78f8994f035651205385670295847b" HandleID="k8s-pod-network.97063cbd442df36c0cd1e6efe9a76cb57e78f8994f035651205385670295847b" Workload="ci--4081--3--3--n--82e895e080-k8s-whisker--955f4745b--jt295-eth0" May 17 00:24:36.883772 containerd[1504]: 2025-05-17 00:24:36.877 [INFO][5591] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:24:36.883772 containerd[1504]: 2025-05-17 00:24:36.880 [INFO][5584] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="97063cbd442df36c0cd1e6efe9a76cb57e78f8994f035651205385670295847b" May 17 00:24:36.884150 containerd[1504]: time="2025-05-17T00:24:36.883854401Z" level=info msg="TearDown network for sandbox \"97063cbd442df36c0cd1e6efe9a76cb57e78f8994f035651205385670295847b\" successfully" May 17 00:24:36.884150 containerd[1504]: time="2025-05-17T00:24:36.883880460Z" level=info msg="StopPodSandbox for \"97063cbd442df36c0cd1e6efe9a76cb57e78f8994f035651205385670295847b\" returns successfully" May 17 00:24:36.884349 containerd[1504]: time="2025-05-17T00:24:36.884323417Z" level=info msg="RemovePodSandbox for \"97063cbd442df36c0cd1e6efe9a76cb57e78f8994f035651205385670295847b\"" May 17 00:24:36.884382 containerd[1504]: time="2025-05-17T00:24:36.884351640Z" level=info msg="Forcibly stopping sandbox \"97063cbd442df36c0cd1e6efe9a76cb57e78f8994f035651205385670295847b\"" May 17 00:24:36.960688 containerd[1504]: 2025-05-17 00:24:36.918 [WARNING][5605] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="97063cbd442df36c0cd1e6efe9a76cb57e78f8994f035651205385670295847b" WorkloadEndpoint="ci--4081--3--3--n--82e895e080-k8s-whisker--955f4745b--jt295-eth0" May 17 00:24:36.960688 containerd[1504]: 2025-05-17 00:24:36.918 [INFO][5605] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="97063cbd442df36c0cd1e6efe9a76cb57e78f8994f035651205385670295847b" May 17 00:24:36.960688 containerd[1504]: 2025-05-17 00:24:36.918 [INFO][5605] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="97063cbd442df36c0cd1e6efe9a76cb57e78f8994f035651205385670295847b" iface="eth0" netns="" May 17 00:24:36.960688 containerd[1504]: 2025-05-17 00:24:36.918 [INFO][5605] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="97063cbd442df36c0cd1e6efe9a76cb57e78f8994f035651205385670295847b" May 17 00:24:36.960688 containerd[1504]: 2025-05-17 00:24:36.918 [INFO][5605] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="97063cbd442df36c0cd1e6efe9a76cb57e78f8994f035651205385670295847b" May 17 00:24:36.960688 containerd[1504]: 2025-05-17 00:24:36.940 [INFO][5612] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="97063cbd442df36c0cd1e6efe9a76cb57e78f8994f035651205385670295847b" HandleID="k8s-pod-network.97063cbd442df36c0cd1e6efe9a76cb57e78f8994f035651205385670295847b" Workload="ci--4081--3--3--n--82e895e080-k8s-whisker--955f4745b--jt295-eth0" May 17 00:24:36.960688 containerd[1504]: 2025-05-17 00:24:36.941 [INFO][5612] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:24:36.960688 containerd[1504]: 2025-05-17 00:24:36.941 [INFO][5612] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:24:36.960688 containerd[1504]: 2025-05-17 00:24:36.948 [WARNING][5612] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="97063cbd442df36c0cd1e6efe9a76cb57e78f8994f035651205385670295847b" HandleID="k8s-pod-network.97063cbd442df36c0cd1e6efe9a76cb57e78f8994f035651205385670295847b" Workload="ci--4081--3--3--n--82e895e080-k8s-whisker--955f4745b--jt295-eth0" May 17 00:24:36.960688 containerd[1504]: 2025-05-17 00:24:36.948 [INFO][5612] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="97063cbd442df36c0cd1e6efe9a76cb57e78f8994f035651205385670295847b" HandleID="k8s-pod-network.97063cbd442df36c0cd1e6efe9a76cb57e78f8994f035651205385670295847b" Workload="ci--4081--3--3--n--82e895e080-k8s-whisker--955f4745b--jt295-eth0" May 17 00:24:36.960688 containerd[1504]: 2025-05-17 00:24:36.951 [INFO][5612] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:24:36.960688 containerd[1504]: 2025-05-17 00:24:36.957 [INFO][5605] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="97063cbd442df36c0cd1e6efe9a76cb57e78f8994f035651205385670295847b" May 17 00:24:36.960688 containerd[1504]: time="2025-05-17T00:24:36.960655820Z" level=info msg="TearDown network for sandbox \"97063cbd442df36c0cd1e6efe9a76cb57e78f8994f035651205385670295847b\" successfully" May 17 00:24:36.966272 containerd[1504]: time="2025-05-17T00:24:36.964677439Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"97063cbd442df36c0cd1e6efe9a76cb57e78f8994f035651205385670295847b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:24:36.966272 containerd[1504]: time="2025-05-17T00:24:36.964741279Z" level=info msg="RemovePodSandbox \"97063cbd442df36c0cd1e6efe9a76cb57e78f8994f035651205385670295847b\" returns successfully" May 17 00:24:44.406625 kubelet[2693]: E0517 00:24:44.406473 2693 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-f599d797d-pw8hv" podUID="067226bb-cfc6-4f82-99de-aac7391d466d" May 17 00:24:46.379913 kubelet[2693]: E0517 00:24:46.379839 2693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-9tht4" podUID="1bde9b24-cd69-4946-af9c-950fec8a6c4b" May 17 00:24:49.310512 systemd[1]: run-containerd-runc-k8s.io-d366aad508f442c8e3e79b8161aface43f02c9bafe41c3c1b5d851afec0f2770-runc.Ab0rEZ.mount: Deactivated successfully. May 17 00:24:50.672797 systemd[1]: run-containerd-runc-k8s.io-0a71b78f07749fbb624b6f0825e47d4fd2314cbc9826916740747daf62630ec9-runc.Dvi5k7.mount: Deactivated successfully. May 17 00:24:57.403157 containerd[1504]: time="2025-05-17T00:24:57.402634573Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 17 00:24:57.743282 containerd[1504]: time="2025-05-17T00:24:57.743081193Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:24:57.746368 containerd[1504]: time="2025-05-17T00:24:57.746286665Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 17 00:24:57.751061 containerd[1504]: time="2025-05-17T00:24:57.750975440Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:24:57.755504 kubelet[2693]: E0517 00:24:57.755431 2693 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:24:57.755851 kubelet[2693]: E0517 00:24:57.755529 2693 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:24:57.756549 containerd[1504]: time="2025-05-17T00:24:57.755998019Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 17 00:24:57.756642 kubelet[2693]: E0517 00:24:57.756042 2693 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:f857f946af1e45ef9789d38b050b55ff,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6htsc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-f599d797d-pw8hv_calico-system(067226bb-cfc6-4f82-99de-aac7391d466d): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:24:58.072229 containerd[1504]: time="2025-05-17T00:24:58.072003594Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:24:58.073477 containerd[1504]: time="2025-05-17T00:24:58.073181557Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:24:58.073477 containerd[1504]: time="2025-05-17T00:24:58.073266064Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 17 00:24:58.073552 kubelet[2693]: E0517 00:24:58.073353 2693 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:24:58.073552 
kubelet[2693]: E0517 00:24:58.073395 2693 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:24:58.074225 kubelet[2693]: E0517 00:24:58.073907 2693 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hc5qt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-78d55f7ddc-9tht4_calico-system(1bde9b24-cd69-4946-af9c-950fec8a6c4b): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected 
status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:24:58.074585 containerd[1504]: time="2025-05-17T00:24:58.074453384Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 17 00:24:58.076855 kubelet[2693]: E0517 00:24:58.076836 2693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-9tht4" podUID="1bde9b24-cd69-4946-af9c-950fec8a6c4b" May 17 00:24:58.384468 containerd[1504]: time="2025-05-17T00:24:58.384427811Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:24:58.385468 containerd[1504]: time="2025-05-17T00:24:58.385391974Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:24:58.385468 containerd[1504]: time="2025-05-17T00:24:58.385429472Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 17 00:24:58.385619 kubelet[2693]: E0517 00:24:58.385539 2693 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:24:58.385619 kubelet[2693]: E0517 00:24:58.385582 2693 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:24:58.385780 kubelet[2693]: E0517 00:24:58.385704 2693 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6htsc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-f599d797d-pw8hv_calico-system(067226bb-cfc6-4f82-99de-aac7391d466d): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:24:58.387236 kubelet[2693]: E0517 00:24:58.387185 2693 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-f599d797d-pw8hv" podUID="067226bb-cfc6-4f82-99de-aac7391d466d" May 17 00:25:11.378329 kubelet[2693]: E0517 00:25:11.378189 2693 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for 
\"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-f599d797d-pw8hv" podUID="067226bb-cfc6-4f82-99de-aac7391d466d" May 17 00:25:12.376793 kubelet[2693]: E0517 00:25:12.376407 2693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-9tht4" podUID="1bde9b24-cd69-4946-af9c-950fec8a6c4b" May 17 00:25:16.800460 systemd[1]: run-containerd-runc-k8s.io-0a71b78f07749fbb624b6f0825e47d4fd2314cbc9826916740747daf62630ec9-runc.lCVLIA.mount: Deactivated successfully. May 17 00:25:19.138293 systemd[1]: run-containerd-runc-k8s.io-d366aad508f442c8e3e79b8161aface43f02c9bafe41c3c1b5d851afec0f2770-runc.nP2Tvo.mount: Deactivated successfully. 
May 17 00:25:25.376856 kubelet[2693]: E0517 00:25:25.376754 2693 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-f599d797d-pw8hv" podUID="067226bb-cfc6-4f82-99de-aac7391d466d" May 17 00:25:27.375709 kubelet[2693]: E0517 00:25:27.375613 2693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-9tht4" podUID="1bde9b24-cd69-4946-af9c-950fec8a6c4b" May 17 00:25:38.387361 containerd[1504]: time="2025-05-17T00:25:38.378710170Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 17 00:25:38.700680 containerd[1504]: time="2025-05-17T00:25:38.700530842Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:25:38.701749 containerd[1504]: time="2025-05-17T00:25:38.701709426Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:25:38.702499 containerd[1504]: time="2025-05-17T00:25:38.701784277Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 17 00:25:38.702563 kubelet[2693]: E0517 00:25:38.701936 2693 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: 
unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:25:38.702563 kubelet[2693]: E0517 00:25:38.701986 2693 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:25:38.702563 kubelet[2693]: E0517 00:25:38.702126 2693 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hc5qt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-78d55f7ddc-9tht4_calico-system(1bde9b24-cd69-4946-af9c-950fec8a6c4b): ErrImagePull: 
failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:25:38.703380 kubelet[2693]: E0517 00:25:38.703345 2693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-9tht4" podUID="1bde9b24-cd69-4946-af9c-950fec8a6c4b" May 17 00:25:40.376511 containerd[1504]: time="2025-05-17T00:25:40.376427801Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 17 00:25:40.754910 containerd[1504]: time="2025-05-17T00:25:40.754762286Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:25:40.756102 containerd[1504]: time="2025-05-17T00:25:40.756072818Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:25:40.756549 containerd[1504]: time="2025-05-17T00:25:40.756174067Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 17 00:25:40.756594 kubelet[2693]: E0517 00:25:40.756339 2693 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:25:40.756594 kubelet[2693]: E0517 00:25:40.756386 2693 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:25:40.756594 kubelet[2693]: E0517 00:25:40.756504 2693 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:f857f946af1e45ef9789d38b050b55ff,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6htsc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-f599d797d-pw8hv_calico-system(067226bb-cfc6-4f82-99de-aac7391d466d): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:25:40.759017 containerd[1504]: time="2025-05-17T00:25:40.758987391Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 17 00:25:41.110479 containerd[1504]: time="2025-05-17T00:25:41.110399738Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:25:41.111617 containerd[1504]: time="2025-05-17T00:25:41.111571500Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:25:41.111617 containerd[1504]: time="2025-05-17T00:25:41.111644698Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 17 00:25:41.111917 kubelet[2693]: E0517 00:25:41.111814 2693 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected 
status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:25:41.111917 kubelet[2693]: E0517 00:25:41.111870 2693 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:25:41.112079 kubelet[2693]: E0517 00:25:41.112000 2693 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6htsc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-f599d797d-pw8hv_calico-system(067226bb-cfc6-4f82-99de-aac7391d466d): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:25:41.117740 kubelet[2693]: E0517 00:25:41.113346 2693 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to 
fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-f599d797d-pw8hv" podUID="067226bb-cfc6-4f82-99de-aac7391d466d" May 17 00:25:50.376849 kubelet[2693]: E0517 00:25:50.376795 2693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-9tht4" podUID="1bde9b24-cd69-4946-af9c-950fec8a6c4b" May 17 00:25:50.689631 systemd[1]: run-containerd-runc-k8s.io-0a71b78f07749fbb624b6f0825e47d4fd2314cbc9826916740747daf62630ec9-runc.6ZYUk5.mount: Deactivated successfully. May 17 00:25:54.391747 kubelet[2693]: E0517 00:25:54.391606 2693 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-f599d797d-pw8hv" podUID="067226bb-cfc6-4f82-99de-aac7391d466d" May 17 00:26:04.377295 kubelet[2693]: E0517 00:26:04.376759 2693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" 
pod="calico-system/goldmane-78d55f7ddc-9tht4" podUID="1bde9b24-cd69-4946-af9c-950fec8a6c4b" May 17 00:26:07.381506 kubelet[2693]: E0517 00:26:07.381389 2693 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-f599d797d-pw8hv" podUID="067226bb-cfc6-4f82-99de-aac7391d466d" May 17 00:26:15.375663 kubelet[2693]: E0517 00:26:15.375593 2693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-9tht4" podUID="1bde9b24-cd69-4946-af9c-950fec8a6c4b" May 17 00:26:16.797900 systemd[1]: run-containerd-runc-k8s.io-0a71b78f07749fbb624b6f0825e47d4fd2314cbc9826916740747daf62630ec9-runc.UETX54.mount: Deactivated successfully. 
May 17 00:26:18.377113 kubelet[2693]: E0517 00:26:18.377064 2693 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-f599d797d-pw8hv" podUID="067226bb-cfc6-4f82-99de-aac7391d466d" May 17 00:26:19.137790 systemd[1]: run-containerd-runc-k8s.io-d366aad508f442c8e3e79b8161aface43f02c9bafe41c3c1b5d851afec0f2770-runc.NSZtD7.mount: Deactivated successfully. May 17 00:26:27.376007 kubelet[2693]: E0517 00:26:27.375942 2693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-9tht4" podUID="1bde9b24-cd69-4946-af9c-950fec8a6c4b" May 17 00:26:33.376853 kubelet[2693]: E0517 00:26:33.376742 2693 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-f599d797d-pw8hv" podUID="067226bb-cfc6-4f82-99de-aac7391d466d" May 17 00:26:38.385867 kubelet[2693]: E0517 00:26:38.385702 2693 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-9tht4" podUID="1bde9b24-cd69-4946-af9c-950fec8a6c4b" May 17 00:26:46.376564 kubelet[2693]: E0517 00:26:46.376420 2693 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-f599d797d-pw8hv" podUID="067226bb-cfc6-4f82-99de-aac7391d466d" May 17 00:26:51.375208 kubelet[2693]: E0517 00:26:51.375136 2693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-9tht4" podUID="1bde9b24-cd69-4946-af9c-950fec8a6c4b" May 17 00:26:59.376137 kubelet[2693]: E0517 00:26:59.375973 2693 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve 
reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-f599d797d-pw8hv" podUID="067226bb-cfc6-4f82-99de-aac7391d466d" May 17 00:27:03.376396 containerd[1504]: time="2025-05-17T00:27:03.376156356Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 17 00:27:03.681727 containerd[1504]: time="2025-05-17T00:27:03.681528976Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:27:03.682685 containerd[1504]: time="2025-05-17T00:27:03.682618598Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:27:03.682761 containerd[1504]: time="2025-05-17T00:27:03.682721351Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 17 00:27:03.683341 kubelet[2693]: E0517 00:27:03.682887 2693 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:27:03.683341 kubelet[2693]: E0517 00:27:03.682938 2693 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:27:03.683341 kubelet[2693]: E0517 00:27:03.683078 2693 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hc5qt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-78d55f7ddc-9tht4_calico-system(1bde9b24-cd69-4946-af9c-950fec8a6c4b): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:27:03.684442 kubelet[2693]: E0517 00:27:03.684400 2693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-9tht4" podUID="1bde9b24-cd69-4946-af9c-950fec8a6c4b" May 17 00:27:13.376096 containerd[1504]: time="2025-05-17T00:27:13.376042553Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 17 00:27:13.681500 containerd[1504]: time="2025-05-17T00:27:13.680630789Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:27:13.682651 containerd[1504]: time="2025-05-17T00:27:13.682584980Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:27:13.682743 containerd[1504]: time="2025-05-17T00:27:13.682701138Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 17 00:27:13.683177 kubelet[2693]: E0517 00:27:13.682849 2693 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:27:13.683177 kubelet[2693]: E0517 00:27:13.682930 2693 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:27:13.683177 kubelet[2693]: E0517 00:27:13.683084 2693 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:f857f946af1e45ef9789d38b050b55ff,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6htsc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-f599d797d-pw8hv_calico-system(067226bb-cfc6-4f82-99de-aac7391d466d): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:27:13.685369 containerd[1504]: time="2025-05-17T00:27:13.685320174Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 17 00:27:13.997033 containerd[1504]: time="2025-05-17T00:27:13.996402075Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:27:13.997951 containerd[1504]: time="2025-05-17T00:27:13.997805484Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:27:13.997951 containerd[1504]: time="2025-05-17T00:27:13.997889993Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 17 00:27:13.998338 kubelet[2693]: E0517 00:27:13.998067 2693 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected 
status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:27:13.998338 kubelet[2693]: E0517 00:27:13.998137 2693 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:27:13.998338 kubelet[2693]: E0517 00:27:13.998290 2693 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6htsc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-f599d797d-pw8hv_calico-system(067226bb-cfc6-4f82-99de-aac7391d466d): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:27:13.999709 kubelet[2693]: E0517 00:27:13.999664 2693 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to 
fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-f599d797d-pw8hv" podUID="067226bb-cfc6-4f82-99de-aac7391d466d" May 17 00:27:16.377096 kubelet[2693]: E0517 00:27:16.376475 2693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-9tht4" podUID="1bde9b24-cd69-4946-af9c-950fec8a6c4b" May 17 00:27:20.668973 systemd[1]: run-containerd-runc-k8s.io-0a71b78f07749fbb624b6f0825e47d4fd2314cbc9826916740747daf62630ec9-runc.T7bdaU.mount: Deactivated successfully. May 17 00:27:27.376586 kubelet[2693]: E0517 00:27:27.376378 2693 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-f599d797d-pw8hv" podUID="067226bb-cfc6-4f82-99de-aac7391d466d" May 17 00:27:28.376410 kubelet[2693]: E0517 00:27:28.376299 2693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" 
pod="calico-system/goldmane-78d55f7ddc-9tht4" podUID="1bde9b24-cd69-4946-af9c-950fec8a6c4b" May 17 00:27:38.385353 kubelet[2693]: E0517 00:27:38.384964 2693 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-f599d797d-pw8hv" podUID="067226bb-cfc6-4f82-99de-aac7391d466d" May 17 00:27:39.375608 kubelet[2693]: E0517 00:27:39.375561 2693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-9tht4" podUID="1bde9b24-cd69-4946-af9c-950fec8a6c4b" May 17 00:27:52.376283 kubelet[2693]: E0517 00:27:52.376067 2693 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-f599d797d-pw8hv" podUID="067226bb-cfc6-4f82-99de-aac7391d466d" May 17 00:27:53.380807 kubelet[2693]: E0517 00:27:53.380698 2693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with 
ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-9tht4" podUID="1bde9b24-cd69-4946-af9c-950fec8a6c4b" May 17 00:28:07.375331 kubelet[2693]: E0517 00:28:07.375206 2693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-9tht4" podUID="1bde9b24-cd69-4946-af9c-950fec8a6c4b" May 17 00:28:07.376669 kubelet[2693]: E0517 00:28:07.376365 2693 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-f599d797d-pw8hv" podUID="067226bb-cfc6-4f82-99de-aac7391d466d" May 17 00:28:19.137925 systemd[1]: run-containerd-runc-k8s.io-d366aad508f442c8e3e79b8161aface43f02c9bafe41c3c1b5d851afec0f2770-runc.R49Q6J.mount: Deactivated successfully. 
May 17 00:28:19.376479 kubelet[2693]: E0517 00:28:19.376044 2693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-9tht4" podUID="1bde9b24-cd69-4946-af9c-950fec8a6c4b" May 17 00:28:22.378998 kubelet[2693]: E0517 00:28:22.378944 2693 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-f599d797d-pw8hv" podUID="067226bb-cfc6-4f82-99de-aac7391d466d" May 17 00:28:25.045563 systemd[1]: Started sshd@7-37.27.204.183:22-139.178.89.65:48862.service - OpenSSH per-connection server daemon (139.178.89.65:48862). May 17 00:28:26.046424 sshd[6127]: Accepted publickey for core from 139.178.89.65 port 48862 ssh2: RSA SHA256:kFcxshSye1IppED0G84lz4/lbUrZJ1wq7wf6p1uuNAE May 17 00:28:26.050561 sshd[6127]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:28:26.063066 systemd-logind[1483]: New session 8 of user core. May 17 00:28:26.070091 systemd[1]: Started session-8.scope - Session 8 of User core. May 17 00:28:27.200226 sshd[6127]: pam_unix(sshd:session): session closed for user core May 17 00:28:27.205279 systemd-logind[1483]: Session 8 logged out. Waiting for processes to exit. May 17 00:28:27.205880 systemd[1]: sshd@7-37.27.204.183:22-139.178.89.65:48862.service: Deactivated successfully. May 17 00:28:27.208770 systemd[1]: session-8.scope: Deactivated successfully. May 17 00:28:27.210186 systemd-logind[1483]: Removed session 8. 
May 17 00:28:31.419487 kubelet[2693]: E0517 00:28:31.401851 2693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-9tht4" podUID="1bde9b24-cd69-4946-af9c-950fec8a6c4b"
May 17 00:28:32.380337 systemd[1]: Started sshd@8-37.27.204.183:22-139.178.89.65:52496.service - OpenSSH per-connection server daemon (139.178.89.65:52496).
May 17 00:28:33.391135 sshd[6141]: Accepted publickey for core from 139.178.89.65 port 52496 ssh2: RSA SHA256:kFcxshSye1IppED0G84lz4/lbUrZJ1wq7wf6p1uuNAE
May 17 00:28:33.394635 sshd[6141]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:28:33.399479 systemd-logind[1483]: New session 9 of user core.
May 17 00:28:33.405435 systemd[1]: Started session-9.scope - Session 9 of User core.
May 17 00:28:34.335527 sshd[6141]: pam_unix(sshd:session): session closed for user core
May 17 00:28:34.340460 systemd[1]: sshd@8-37.27.204.183:22-139.178.89.65:52496.service: Deactivated successfully.
May 17 00:28:34.343716 systemd[1]: session-9.scope: Deactivated successfully.
May 17 00:28:34.346663 systemd-logind[1483]: Session 9 logged out. Waiting for processes to exit.
May 17 00:28:34.347987 systemd-logind[1483]: Removed session 9.
May 17 00:28:35.381674 kubelet[2693]: E0517 00:28:35.381613 2693 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-f599d797d-pw8hv" podUID="067226bb-cfc6-4f82-99de-aac7391d466d"
May 17 00:28:39.505481 systemd[1]: Started sshd@9-37.27.204.183:22-139.178.89.65:54686.service - OpenSSH per-connection server daemon (139.178.89.65:54686).
May 17 00:28:40.522927 sshd[6157]: Accepted publickey for core from 139.178.89.65 port 54686 ssh2: RSA SHA256:kFcxshSye1IppED0G84lz4/lbUrZJ1wq7wf6p1uuNAE
May 17 00:28:40.526704 sshd[6157]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:28:40.534422 systemd-logind[1483]: New session 10 of user core.
May 17 00:28:40.541551 systemd[1]: Started session-10.scope - Session 10 of User core.
May 17 00:28:41.311120 sshd[6157]: pam_unix(sshd:session): session closed for user core
May 17 00:28:41.315943 systemd[1]: sshd@9-37.27.204.183:22-139.178.89.65:54686.service: Deactivated successfully.
May 17 00:28:41.318282 systemd[1]: session-10.scope: Deactivated successfully.
May 17 00:28:41.320281 systemd-logind[1483]: Session 10 logged out. Waiting for processes to exit.
May 17 00:28:41.321594 systemd-logind[1483]: Removed session 10.
May 17 00:28:41.476905 systemd[1]: Started sshd@10-37.27.204.183:22-139.178.89.65:54690.service - OpenSSH per-connection server daemon (139.178.89.65:54690).
May 17 00:28:42.444344 sshd[6171]: Accepted publickey for core from 139.178.89.65 port 54690 ssh2: RSA SHA256:kFcxshSye1IppED0G84lz4/lbUrZJ1wq7wf6p1uuNAE
May 17 00:28:42.446502 sshd[6171]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:28:42.451367 systemd-logind[1483]: New session 11 of user core.
May 17 00:28:42.454407 systemd[1]: Started session-11.scope - Session 11 of User core.
May 17 00:28:43.248907 sshd[6171]: pam_unix(sshd:session): session closed for user core
May 17 00:28:43.251743 systemd[1]: sshd@10-37.27.204.183:22-139.178.89.65:54690.service: Deactivated successfully.
May 17 00:28:43.253774 systemd[1]: session-11.scope: Deactivated successfully.
May 17 00:28:43.255478 systemd-logind[1483]: Session 11 logged out. Waiting for processes to exit.
May 17 00:28:43.256756 systemd-logind[1483]: Removed session 11.
May 17 00:28:43.415055 systemd[1]: Started sshd@11-37.27.204.183:22-139.178.89.65:54702.service - OpenSSH per-connection server daemon (139.178.89.65:54702).
May 17 00:28:44.392544 sshd[6188]: Accepted publickey for core from 139.178.89.65 port 54702 ssh2: RSA SHA256:kFcxshSye1IppED0G84lz4/lbUrZJ1wq7wf6p1uuNAE
May 17 00:28:44.394150 sshd[6188]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:28:44.398963 systemd-logind[1483]: New session 12 of user core.
May 17 00:28:44.403381 systemd[1]: Started session-12.scope - Session 12 of User core.
May 17 00:28:45.156596 sshd[6188]: pam_unix(sshd:session): session closed for user core
May 17 00:28:45.161223 systemd[1]: sshd@11-37.27.204.183:22-139.178.89.65:54702.service: Deactivated successfully.
May 17 00:28:45.163842 systemd[1]: session-12.scope: Deactivated successfully.
May 17 00:28:45.165031 systemd-logind[1483]: Session 12 logged out. Waiting for processes to exit.
May 17 00:28:45.166303 systemd-logind[1483]: Removed session 12.
May 17 00:28:45.378576 kubelet[2693]: E0517 00:28:45.378521 2693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-9tht4" podUID="1bde9b24-cd69-4946-af9c-950fec8a6c4b"
May 17 00:28:46.376982 kubelet[2693]: E0517 00:28:46.376881 2693 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-f599d797d-pw8hv" podUID="067226bb-cfc6-4f82-99de-aac7391d466d"
May 17 00:28:50.327668 systemd[1]: Started sshd@12-37.27.204.183:22-139.178.89.65:40686.service - OpenSSH per-connection server daemon (139.178.89.65:40686).
May 17 00:28:50.663754 systemd[1]: run-containerd-runc-k8s.io-0a71b78f07749fbb624b6f0825e47d4fd2314cbc9826916740747daf62630ec9-runc.L3oJfE.mount: Deactivated successfully.
May 17 00:28:51.313145 sshd[6223]: Accepted publickey for core from 139.178.89.65 port 40686 ssh2: RSA SHA256:kFcxshSye1IppED0G84lz4/lbUrZJ1wq7wf6p1uuNAE
May 17 00:28:51.315399 sshd[6223]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:28:51.320468 systemd-logind[1483]: New session 13 of user core.
May 17 00:28:51.326391 systemd[1]: Started session-13.scope - Session 13 of User core.
May 17 00:28:52.059330 sshd[6223]: pam_unix(sshd:session): session closed for user core
May 17 00:28:52.062999 systemd[1]: sshd@12-37.27.204.183:22-139.178.89.65:40686.service: Deactivated successfully.
May 17 00:28:52.065330 systemd[1]: session-13.scope: Deactivated successfully.
May 17 00:28:52.066383 systemd-logind[1483]: Session 13 logged out. Waiting for processes to exit.
May 17 00:28:52.068068 systemd-logind[1483]: Removed session 13.
May 17 00:28:52.225873 systemd[1]: Started sshd@13-37.27.204.183:22-139.178.89.65:40690.service - OpenSSH per-connection server daemon (139.178.89.65:40690).
May 17 00:28:53.195163 sshd[6254]: Accepted publickey for core from 139.178.89.65 port 40690 ssh2: RSA SHA256:kFcxshSye1IppED0G84lz4/lbUrZJ1wq7wf6p1uuNAE
May 17 00:28:53.197085 sshd[6254]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:28:53.202480 systemd-logind[1483]: New session 14 of user core.
May 17 00:28:53.214475 systemd[1]: Started session-14.scope - Session 14 of User core.
May 17 00:28:54.114536 sshd[6254]: pam_unix(sshd:session): session closed for user core
May 17 00:28:54.121324 systemd[1]: sshd@13-37.27.204.183:22-139.178.89.65:40690.service: Deactivated successfully.
May 17 00:28:54.123640 systemd[1]: session-14.scope: Deactivated successfully.
May 17 00:28:54.124740 systemd-logind[1483]: Session 14 logged out. Waiting for processes to exit.
May 17 00:28:54.125759 systemd-logind[1483]: Removed session 14.
May 17 00:28:54.281709 systemd[1]: Started sshd@14-37.27.204.183:22-139.178.89.65:40706.service - OpenSSH per-connection server daemon (139.178.89.65:40706).
May 17 00:28:55.280456 sshd[6279]: Accepted publickey for core from 139.178.89.65 port 40706 ssh2: RSA SHA256:kFcxshSye1IppED0G84lz4/lbUrZJ1wq7wf6p1uuNAE
May 17 00:28:55.283505 sshd[6279]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:28:55.289941 systemd-logind[1483]: New session 15 of user core.
May 17 00:28:55.294493 systemd[1]: Started session-15.scope - Session 15 of User core.
May 17 00:28:57.022862 sshd[6279]: pam_unix(sshd:session): session closed for user core
May 17 00:28:57.031911 systemd-logind[1483]: Session 15 logged out. Waiting for processes to exit.
May 17 00:28:57.032810 systemd[1]: sshd@14-37.27.204.183:22-139.178.89.65:40706.service: Deactivated successfully.
May 17 00:28:57.035455 systemd[1]: session-15.scope: Deactivated successfully.
May 17 00:28:57.036835 systemd-logind[1483]: Removed session 15.
May 17 00:28:57.190765 systemd[1]: Started sshd@15-37.27.204.183:22-139.178.89.65:38422.service - OpenSSH per-connection server daemon (139.178.89.65:38422).
May 17 00:28:58.175303 sshd[6307]: Accepted publickey for core from 139.178.89.65 port 38422 ssh2: RSA SHA256:kFcxshSye1IppED0G84lz4/lbUrZJ1wq7wf6p1uuNAE
May 17 00:28:58.175818 sshd[6307]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:28:58.182704 systemd-logind[1483]: New session 16 of user core.
May 17 00:28:58.187362 systemd[1]: Started session-16.scope - Session 16 of User core.
May 17 00:28:58.378839 kubelet[2693]: E0517 00:28:58.378791 2693 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-f599d797d-pw8hv" podUID="067226bb-cfc6-4f82-99de-aac7391d466d"
May 17 00:28:59.232519 sshd[6307]: pam_unix(sshd:session): session closed for user core
May 17 00:28:59.234937 systemd-logind[1483]: Session 16 logged out. Waiting for processes to exit.
May 17 00:28:59.236328 systemd[1]: sshd@15-37.27.204.183:22-139.178.89.65:38422.service: Deactivated successfully.
May 17 00:28:59.239047 systemd[1]: session-16.scope: Deactivated successfully.
May 17 00:28:59.240225 systemd-logind[1483]: Removed session 16.
May 17 00:28:59.401559 systemd[1]: Started sshd@16-37.27.204.183:22-139.178.89.65:38436.service - OpenSSH per-connection server daemon (139.178.89.65:38436).
May 17 00:29:00.372225 sshd[6318]: Accepted publickey for core from 139.178.89.65 port 38436 ssh2: RSA SHA256:kFcxshSye1IppED0G84lz4/lbUrZJ1wq7wf6p1uuNAE
May 17 00:29:00.375877 sshd[6318]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:29:00.376615 kubelet[2693]: E0517 00:29:00.376504 2693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-9tht4" podUID="1bde9b24-cd69-4946-af9c-950fec8a6c4b"
May 17 00:29:00.384630 systemd-logind[1483]: New session 17 of user core.
May 17 00:29:00.388386 systemd[1]: Started session-17.scope - Session 17 of User core.
May 17 00:29:01.158492 sshd[6318]: pam_unix(sshd:session): session closed for user core
May 17 00:29:01.168829 systemd-logind[1483]: Session 17 logged out. Waiting for processes to exit.
May 17 00:29:01.171561 systemd[1]: sshd@16-37.27.204.183:22-139.178.89.65:38436.service: Deactivated successfully.
May 17 00:29:01.172859 systemd[1]: session-17.scope: Deactivated successfully.
May 17 00:29:01.176826 systemd-logind[1483]: Removed session 17.
May 17 00:29:06.330687 systemd[1]: Started sshd@17-37.27.204.183:22-139.178.89.65:38452.service - OpenSSH per-connection server daemon (139.178.89.65:38452).
May 17 00:29:07.337911 sshd[6333]: Accepted publickey for core from 139.178.89.65 port 38452 ssh2: RSA SHA256:kFcxshSye1IppED0G84lz4/lbUrZJ1wq7wf6p1uuNAE
May 17 00:29:07.339869 sshd[6333]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:29:07.344722 systemd-logind[1483]: New session 18 of user core.
May 17 00:29:07.351440 systemd[1]: Started session-18.scope - Session 18 of User core.
May 17 00:29:08.261455 sshd[6333]: pam_unix(sshd:session): session closed for user core
May 17 00:29:08.277367 systemd[1]: sshd@17-37.27.204.183:22-139.178.89.65:38452.service: Deactivated successfully.
May 17 00:29:08.280492 systemd[1]: session-18.scope: Deactivated successfully.
May 17 00:29:08.281774 systemd-logind[1483]: Session 18 logged out. Waiting for processes to exit.
May 17 00:29:08.283891 systemd-logind[1483]: Removed session 18.
May 17 00:29:10.376760 kubelet[2693]: E0517 00:29:10.376495 2693 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-f599d797d-pw8hv" podUID="067226bb-cfc6-4f82-99de-aac7391d466d"
May 17 00:29:13.409678 kubelet[2693]: E0517 00:29:13.409621 2693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-9tht4" podUID="1bde9b24-cd69-4946-af9c-950fec8a6c4b"
May 17 00:29:13.429755 systemd[1]: Started sshd@18-37.27.204.183:22-139.178.89.65:36912.service - OpenSSH per-connection server daemon (139.178.89.65:36912).
May 17 00:29:14.413084 sshd[6348]: Accepted publickey for core from 139.178.89.65 port 36912 ssh2: RSA SHA256:kFcxshSye1IppED0G84lz4/lbUrZJ1wq7wf6p1uuNAE
May 17 00:29:14.415029 sshd[6348]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:29:14.419836 systemd-logind[1483]: New session 19 of user core.
May 17 00:29:14.425422 systemd[1]: Started session-19.scope - Session 19 of User core.
May 17 00:29:15.192709 sshd[6348]: pam_unix(sshd:session): session closed for user core
May 17 00:29:15.195504 systemd[1]: sshd@18-37.27.204.183:22-139.178.89.65:36912.service: Deactivated successfully.
May 17 00:29:15.197359 systemd[1]: session-19.scope: Deactivated successfully.
May 17 00:29:15.198528 systemd-logind[1483]: Session 19 logged out. Waiting for processes to exit.
May 17 00:29:15.199683 systemd-logind[1483]: Removed session 19.
May 17 00:29:19.142836 systemd[1]: run-containerd-runc-k8s.io-d366aad508f442c8e3e79b8161aface43f02c9bafe41c3c1b5d851afec0f2770-runc.kmirvp.mount: Deactivated successfully.
May 17 00:29:24.377337 kubelet[2693]: E0517 00:29:24.377259 2693 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-f599d797d-pw8hv" podUID="067226bb-cfc6-4f82-99de-aac7391d466d"
May 17 00:29:25.375734 kubelet[2693]: E0517 00:29:25.375671 2693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-9tht4" podUID="1bde9b24-cd69-4946-af9c-950fec8a6c4b"
May 17 00:29:30.954986 systemd[1]: cri-containerd-154696a55a29b6e2212cbee42e0802b76d596d120b0b8518d9291c47d9dad359.scope: Deactivated successfully.
May 17 00:29:30.955644 systemd[1]: cri-containerd-154696a55a29b6e2212cbee42e0802b76d596d120b0b8518d9291c47d9dad359.scope: Consumed 5.111s CPU time, 23.8M memory peak, 0B memory swap peak.
May 17 00:29:30.977551 systemd[1]: cri-containerd-19a635ec77fddfeb38c578db7f113cb8477149c7ea14408d03dcb76d6f6ec060.scope: Deactivated successfully.
May 17 00:29:30.979839 systemd[1]: cri-containerd-19a635ec77fddfeb38c578db7f113cb8477149c7ea14408d03dcb76d6f6ec060.scope: Consumed 14.072s CPU time.
May 17 00:29:31.168203 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-19a635ec77fddfeb38c578db7f113cb8477149c7ea14408d03dcb76d6f6ec060-rootfs.mount: Deactivated successfully.
May 17 00:29:31.169467 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-154696a55a29b6e2212cbee42e0802b76d596d120b0b8518d9291c47d9dad359-rootfs.mount: Deactivated successfully.
May 17 00:29:31.200885 containerd[1504]: time="2025-05-17T00:29:31.198746388Z" level=info msg="shim disconnected" id=154696a55a29b6e2212cbee42e0802b76d596d120b0b8518d9291c47d9dad359 namespace=k8s.io
May 17 00:29:31.200885 containerd[1504]: time="2025-05-17T00:29:31.200868290Z" level=warning msg="cleaning up after shim disconnected" id=154696a55a29b6e2212cbee42e0802b76d596d120b0b8518d9291c47d9dad359 namespace=k8s.io
May 17 00:29:31.200885 containerd[1504]: time="2025-05-17T00:29:31.200885021Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 17 00:29:31.202641 containerd[1504]: time="2025-05-17T00:29:31.199761789Z" level=info msg="shim disconnected" id=19a635ec77fddfeb38c578db7f113cb8477149c7ea14408d03dcb76d6f6ec060 namespace=k8s.io
May 17 00:29:31.202641 containerd[1504]: time="2025-05-17T00:29:31.201043409Z" level=warning msg="cleaning up after shim disconnected" id=19a635ec77fddfeb38c578db7f113cb8477149c7ea14408d03dcb76d6f6ec060 namespace=k8s.io
May 17 00:29:31.202641 containerd[1504]: time="2025-05-17T00:29:31.201049850Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 17 00:29:31.252719 kubelet[2693]: E0517 00:29:31.251531 2693 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:42870->10.0.0.2:2379: read: connection timed out"
May 17 00:29:31.286118 containerd[1504]: time="2025-05-17T00:29:31.286019977Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:29:31Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
May 17 00:29:31.286411 containerd[1504]: time="2025-05-17T00:29:31.286294040Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:29:31Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
May 17 00:29:31.833538 kubelet[2693]: I0517 00:29:31.833473 2693 scope.go:117] "RemoveContainer" containerID="19a635ec77fddfeb38c578db7f113cb8477149c7ea14408d03dcb76d6f6ec060"
May 17 00:29:31.834035 kubelet[2693]: I0517 00:29:31.833644 2693 scope.go:117] "RemoveContainer" containerID="154696a55a29b6e2212cbee42e0802b76d596d120b0b8518d9291c47d9dad359"
May 17 00:29:31.865684 containerd[1504]: time="2025-05-17T00:29:31.865490485Z" level=info msg="CreateContainer within sandbox \"38529f19f5076f3e37bcecfeb6f1c127e9bee5c3334651d21676765f64c123be\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
May 17 00:29:31.866453 containerd[1504]: time="2025-05-17T00:29:31.866379970Z" level=info msg="CreateContainer within sandbox \"cf441aaaa4351fed2c48748dc05d9d6f319ddb4fe8b14351f001aef6f1b261b7\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
May 17 00:29:32.040575 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2677122122.mount: Deactivated successfully.
May 17 00:29:32.066182 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount693398010.mount: Deactivated successfully.
May 17 00:29:32.074032 containerd[1504]: time="2025-05-17T00:29:32.073964071Z" level=info msg="CreateContainer within sandbox \"38529f19f5076f3e37bcecfeb6f1c127e9bee5c3334651d21676765f64c123be\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"52b5045fdaf0eb473ee6ed470bc9cb0968fc479de971420b744496c72f70e77b\""
May 17 00:29:32.080350 containerd[1504]: time="2025-05-17T00:29:32.079643604Z" level=info msg="CreateContainer within sandbox \"cf441aaaa4351fed2c48748dc05d9d6f319ddb4fe8b14351f001aef6f1b261b7\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"a32a9f79dc14b1b631ccfe5fa7cad87a7a693128068db2f92f0bb0ac5166ceac\""
May 17 00:29:32.081604 containerd[1504]: time="2025-05-17T00:29:32.080777217Z" level=info msg="StartContainer for \"a32a9f79dc14b1b631ccfe5fa7cad87a7a693128068db2f92f0bb0ac5166ceac\""
May 17 00:29:32.097980 containerd[1504]: time="2025-05-17T00:29:32.097506378Z" level=info msg="StartContainer for \"52b5045fdaf0eb473ee6ed470bc9cb0968fc479de971420b744496c72f70e77b\""
May 17 00:29:32.120591 systemd[1]: Started cri-containerd-a32a9f79dc14b1b631ccfe5fa7cad87a7a693128068db2f92f0bb0ac5166ceac.scope - libcontainer container a32a9f79dc14b1b631ccfe5fa7cad87a7a693128068db2f92f0bb0ac5166ceac.
May 17 00:29:32.141464 systemd[1]: Started cri-containerd-52b5045fdaf0eb473ee6ed470bc9cb0968fc479de971420b744496c72f70e77b.scope - libcontainer container 52b5045fdaf0eb473ee6ed470bc9cb0968fc479de971420b744496c72f70e77b.
May 17 00:29:32.167190 containerd[1504]: time="2025-05-17T00:29:32.167107198Z" level=info msg="StartContainer for \"a32a9f79dc14b1b631ccfe5fa7cad87a7a693128068db2f92f0bb0ac5166ceac\" returns successfully"
May 17 00:29:32.202535 containerd[1504]: time="2025-05-17T00:29:32.202463134Z" level=info msg="StartContainer for \"52b5045fdaf0eb473ee6ed470bc9cb0968fc479de971420b744496c72f70e77b\" returns successfully"
May 17 00:29:34.883961 kubelet[2693]: E0517 00:29:34.858799 2693 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:42662->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{whisker-f599d797d-pw8hv.184028be9f6177c3 calico-system 1384 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:calico-system,Name:whisker-f599d797d-pw8hv,UID:067226bb-cfc6-4f82-99de-aac7391d466d,APIVersion:v1,ResourceVersion:870,FieldPath:spec.containers{whisker},},Reason:BackOff,Message:Back-off pulling image \"ghcr.io/flatcar/calico/whisker:v3.30.0\",Source:EventSource{Component:kubelet,Host:ci-4081-3-3-n-82e895e080,},FirstTimestamp:2025-05-17 00:24:14 +0000 UTC,LastTimestamp:2025-05-17 00:29:24.376413727 +0000 UTC m=+350.109965609,Count:20,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-3-n-82e895e080,}"
May 17 00:29:36.265687 systemd[1]: cri-containerd-5ee9258fbbec14724b3337ae5d874c02c2264de7a62e8dc13e72ea0e090ab9b5.scope: Deactivated successfully.
May 17 00:29:36.266011 systemd[1]: cri-containerd-5ee9258fbbec14724b3337ae5d874c02c2264de7a62e8dc13e72ea0e090ab9b5.scope: Consumed 2.999s CPU time, 20.5M memory peak, 0B memory swap peak.
May 17 00:29:36.297984 containerd[1504]: time="2025-05-17T00:29:36.296665449Z" level=info msg="shim disconnected" id=5ee9258fbbec14724b3337ae5d874c02c2264de7a62e8dc13e72ea0e090ab9b5 namespace=k8s.io
May 17 00:29:36.297984 containerd[1504]: time="2025-05-17T00:29:36.296744728Z" level=warning msg="cleaning up after shim disconnected" id=5ee9258fbbec14724b3337ae5d874c02c2264de7a62e8dc13e72ea0e090ab9b5 namespace=k8s.io
May 17 00:29:36.297984 containerd[1504]: time="2025-05-17T00:29:36.296755969Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 17 00:29:36.298960 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5ee9258fbbec14724b3337ae5d874c02c2264de7a62e8dc13e72ea0e090ab9b5-rootfs.mount: Deactivated successfully.
May 17 00:29:36.316640 containerd[1504]: time="2025-05-17T00:29:36.316527215Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:29:36Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
May 17 00:29:36.829660 kubelet[2693]: I0517 00:29:36.829617 2693 scope.go:117] "RemoveContainer" containerID="5ee9258fbbec14724b3337ae5d874c02c2264de7a62e8dc13e72ea0e090ab9b5"
May 17 00:29:36.831878 containerd[1504]: time="2025-05-17T00:29:36.831831662Z" level=info msg="CreateContainer within sandbox \"df7b37f341ae9c8d84d0d931efbe62c6e2fbc8f887e5bbf83ccab4ae10cce2b6\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
May 17 00:29:36.848430 containerd[1504]: time="2025-05-17T00:29:36.848374985Z" level=info msg="CreateContainer within sandbox \"df7b37f341ae9c8d84d0d931efbe62c6e2fbc8f887e5bbf83ccab4ae10cce2b6\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"3f23282f15022f22673d3b0194ef84cf4580d81f73a27116360d8c9ac8b09e0a\""
May 17 00:29:36.849463 containerd[1504]: time="2025-05-17T00:29:36.849002781Z" level=info msg="StartContainer for \"3f23282f15022f22673d3b0194ef84cf4580d81f73a27116360d8c9ac8b09e0a\""
May 17 00:29:36.849973 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount123104052.mount: Deactivated successfully.
May 17 00:29:36.891538 systemd[1]: Started cri-containerd-3f23282f15022f22673d3b0194ef84cf4580d81f73a27116360d8c9ac8b09e0a.scope - libcontainer container 3f23282f15022f22673d3b0194ef84cf4580d81f73a27116360d8c9ac8b09e0a.
May 17 00:29:36.942237 containerd[1504]: time="2025-05-17T00:29:36.942170244Z" level=info msg="StartContainer for \"3f23282f15022f22673d3b0194ef84cf4580d81f73a27116360d8c9ac8b09e0a\" returns successfully"