May 27 18:21:39.938587 kernel: Linux version 6.12.30-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Tue May 27 15:32:02 -00 2025 May 27 18:21:39.938612 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=daa3e2d55cc4a7ff0ec15aa9bb0c07df9999cb4e3041f3adad1b1101efdea101 May 27 18:21:39.938622 kernel: BIOS-provided physical RAM map: May 27 18:21:39.938631 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable May 27 18:21:39.938639 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved May 27 18:21:39.938646 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved May 27 18:21:39.938655 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdcfff] usable May 27 18:21:39.938662 kernel: BIOS-e820: [mem 0x00000000bffdd000-0x00000000bfffffff] reserved May 27 18:21:39.938670 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved May 27 18:21:39.938677 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved May 27 18:21:39.938701 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000013fffffff] usable May 27 18:21:39.938709 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved May 27 18:21:39.938719 kernel: NX (Execute Disable) protection: active May 27 18:21:39.938742 kernel: APIC: Static calls initialized May 27 18:21:39.938751 kernel: SMBIOS 3.0.0 present. May 27 18:21:39.938759 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.16.3-debian-1.16.3-2 04/01/2014 May 27 18:21:39.938767 kernel: DMI: Memory slots populated: 1/1 May 27 18:21:39.938777 kernel: Hypervisor detected: KVM May 27 18:21:39.938785 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 May 27 18:21:39.938793 kernel: kvm-clock: using sched offset of 4705398078 cycles May 27 18:21:39.938801 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns May 27 18:21:39.938809 kernel: tsc: Detected 1996.249 MHz processor May 27 18:21:39.938818 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved May 27 18:21:39.938826 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable May 27 18:21:39.938835 kernel: last_pfn = 0x140000 max_arch_pfn = 0x400000000 May 27 18:21:39.938843 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs May 27 18:21:39.938853 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT May 27 18:21:39.938861 kernel: last_pfn = 0xbffdd max_arch_pfn = 0x400000000 May 27 18:21:39.938869 kernel: ACPI: Early table checksum verification disabled May 27 18:21:39.938877 kernel: ACPI: RSDP 0x00000000000F51E0 000014 (v00 BOCHS ) May 27 18:21:39.938886 kernel: ACPI: RSDT 0x00000000BFFE1B65 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 27 18:21:39.938894 kernel: ACPI: FACP 0x00000000BFFE1A49 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 27 18:21:39.938902 kernel: ACPI: DSDT 0x00000000BFFE0040 001A09 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 27 18:21:39.938911 kernel: ACPI: FACS 0x00000000BFFE0000 000040 May 27 18:21:39.938919 kernel: ACPI: APIC 0x00000000BFFE1ABD 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) May 27 
18:21:39.938928 kernel: ACPI: WAET 0x00000000BFFE1B3D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 27 18:21:39.938937 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1a49-0xbffe1abc] May 27 18:21:39.938945 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffe0040-0xbffe1a48] May 27 18:21:39.938953 kernel: ACPI: Reserving FACS table memory at [mem 0xbffe0000-0xbffe003f] May 27 18:21:39.938961 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe1abd-0xbffe1b3c] May 27 18:21:39.938972 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1b3d-0xbffe1b64] May 27 18:21:39.938981 kernel: No NUMA configuration found May 27 18:21:39.938991 kernel: Faking a node at [mem 0x0000000000000000-0x000000013fffffff] May 27 18:21:39.939000 kernel: NODE_DATA(0) allocated [mem 0x13fff8dc0-0x13fffffff] May 27 18:21:39.939008 kernel: Zone ranges: May 27 18:21:39.939017 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] May 27 18:21:39.939025 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] May 27 18:21:39.939034 kernel: Normal [mem 0x0000000100000000-0x000000013fffffff] May 27 18:21:39.939042 kernel: Device empty May 27 18:21:39.939050 kernel: Movable zone start for each node May 27 18:21:39.939060 kernel: Early memory node ranges May 27 18:21:39.939069 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] May 27 18:21:39.939077 kernel: node 0: [mem 0x0000000000100000-0x00000000bffdcfff] May 27 18:21:39.939086 kernel: node 0: [mem 0x0000000100000000-0x000000013fffffff] May 27 18:21:39.939094 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000013fffffff] May 27 18:21:39.939103 kernel: On node 0, zone DMA: 1 pages in unavailable ranges May 27 18:21:39.939111 kernel: On node 0, zone DMA: 97 pages in unavailable ranges May 27 18:21:39.939120 kernel: On node 0, zone Normal: 35 pages in unavailable ranges May 27 18:21:39.939128 kernel: ACPI: PM-Timer IO Port: 0x608 May 27 18:21:39.939138 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) May 27 18:21:39.939147 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 May 27 18:21:39.939156 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) May 27 18:21:39.939164 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) May 27 18:21:39.939173 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) May 27 18:21:39.939181 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) May 27 18:21:39.939190 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) May 27 18:21:39.939198 kernel: ACPI: Using ACPI (MADT) for SMP configuration information May 27 18:21:39.939207 kernel: CPU topo: Max. logical packages: 2 May 27 18:21:39.939216 kernel: CPU topo: Max. logical dies: 2 May 27 18:21:39.939225 kernel: CPU topo: Max. dies per package: 1 May 27 18:21:39.939233 kernel: CPU topo: Max. threads per core: 1 May 27 18:21:39.939242 kernel: CPU topo: Num. cores per package: 1 May 27 18:21:39.939250 kernel: CPU topo: Num. 
threads per package: 1 May 27 18:21:39.939258 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs May 27 18:21:39.939267 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() May 27 18:21:39.939275 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices May 27 18:21:39.939284 kernel: Booting paravirtualized kernel on KVM May 27 18:21:39.939294 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns May 27 18:21:39.939303 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 May 27 18:21:39.939311 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576 May 27 18:21:39.939320 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152 May 27 18:21:39.939328 kernel: pcpu-alloc: [0] 0 1 May 27 18:21:39.939336 kernel: kvm-guest: PV spinlocks disabled, no host support May 27 18:21:39.939346 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=daa3e2d55cc4a7ff0ec15aa9bb0c07df9999cb4e3041f3adad1b1101efdea101 May 27 18:21:39.939355 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 27 18:21:39.939365 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) May 27 18:21:39.939374 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 27 18:21:39.939382 kernel: Fallback order for Node 0: 0 May 27 18:21:39.939391 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048443 May 27 18:21:39.939399 kernel: Policy zone: Normal May 27 18:21:39.939408 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 27 18:21:39.939416 kernel: software IO TLB: area num 2. May 27 18:21:39.939425 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 May 27 18:21:39.939433 kernel: ftrace: allocating 40081 entries in 157 pages May 27 18:21:39.939443 kernel: ftrace: allocated 157 pages with 5 groups May 27 18:21:39.939452 kernel: Dynamic Preempt: voluntary May 27 18:21:39.939460 kernel: rcu: Preemptible hierarchical RCU implementation. May 27 18:21:39.939469 kernel: rcu: RCU event tracing is enabled. May 27 18:21:39.939478 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. May 27 18:21:39.939487 kernel: Trampoline variant of Tasks RCU enabled. May 27 18:21:39.939495 kernel: Rude variant of Tasks RCU enabled. May 27 18:21:39.939504 kernel: Tracing variant of Tasks RCU enabled. May 27 18:21:39.939513 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. May 27 18:21:39.939521 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 May 27 18:21:39.939532 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. May 27 18:21:39.939541 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. May 27 18:21:39.939549 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. May 27 18:21:39.939558 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 May 27 18:21:39.939566 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
May 27 18:21:39.939575 kernel: Console: colour VGA+ 80x25 May 27 18:21:39.939583 kernel: printk: legacy console [tty0] enabled May 27 18:21:39.939592 kernel: printk: legacy console [ttyS0] enabled May 27 18:21:39.939600 kernel: ACPI: Core revision 20240827 May 27 18:21:39.939611 kernel: APIC: Switch to symmetric I/O mode setup May 27 18:21:39.939619 kernel: x2apic enabled May 27 18:21:39.939627 kernel: APIC: Switched APIC routing to: physical x2apic May 27 18:21:39.939636 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 May 27 18:21:39.939645 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized May 27 18:21:39.939659 kernel: Calibrating delay loop (skipped) preset value.. 3992.49 BogoMIPS (lpj=1996249) May 27 18:21:39.939670 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 May 27 18:21:39.939679 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 May 27 18:21:39.939703 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization May 27 18:21:39.939712 kernel: Spectre V2 : Mitigation: Retpolines May 27 18:21:39.939721 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT May 27 18:21:39.939733 kernel: Speculative Store Bypass: Vulnerable May 27 18:21:39.939742 kernel: x86/fpu: x87 FPU will use FXSAVE May 27 18:21:39.939750 kernel: Freeing SMP alternatives memory: 32K May 27 18:21:39.939759 kernel: pid_max: default: 32768 minimum: 301 May 27 18:21:39.939768 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima May 27 18:21:39.939779 kernel: landlock: Up and running. May 27 18:21:39.939788 kernel: SELinux: Initializing. May 27 18:21:39.939796 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 27 18:21:39.939805 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 27 18:21:39.939815 kernel: smpboot: CPU0: AMD Intel Core i7 9xx (Nehalem Class Core i7) (family: 0x6, model: 0x1a, stepping: 0x3) May 27 18:21:39.939824 kernel: Performance Events: AMD PMU driver. May 27 18:21:39.939832 kernel: ... version: 0 May 27 18:21:39.939841 kernel: ... bit width: 48 May 27 18:21:39.939850 kernel: ... generic registers: 4 May 27 18:21:39.939860 kernel: ... value mask: 0000ffffffffffff May 27 18:21:39.939869 kernel: ... max period: 00007fffffffffff May 27 18:21:39.939878 kernel: ... fixed-purpose events: 0 May 27 18:21:39.939887 kernel: ... event mask: 000000000000000f May 27 18:21:39.939896 kernel: signal: max sigframe size: 1440 May 27 18:21:39.939905 kernel: rcu: Hierarchical SRCU implementation. May 27 18:21:39.939914 kernel: rcu: Max phase no-delay instances is 400. May 27 18:21:39.939923 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level May 27 18:21:39.939932 kernel: smp: Bringing up secondary CPUs ... May 27 18:21:39.939941 kernel: smpboot: x86: Booting SMP configuration: May 27 18:21:39.939951 kernel: .... 
node #0, CPUs: #1 May 27 18:21:39.939960 kernel: smp: Brought up 1 node, 2 CPUs May 27 18:21:39.939969 kernel: smpboot: Total of 2 processors activated (7984.99 BogoMIPS) May 27 18:21:39.939979 kernel: Memory: 3962040K/4193772K available (14336K kernel code, 2430K rwdata, 9952K rodata, 54416K init, 2552K bss, 227284K reserved, 0K cma-reserved) May 27 18:21:39.939988 kernel: devtmpfs: initialized May 27 18:21:39.939997 kernel: x86/mm: Memory block size: 128MB May 27 18:21:39.940006 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 27 18:21:39.940015 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) May 27 18:21:39.940024 kernel: pinctrl core: initialized pinctrl subsystem May 27 18:21:39.940034 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 27 18:21:39.940043 kernel: audit: initializing netlink subsys (disabled) May 27 18:21:39.940052 kernel: audit: type=2000 audit(1748370096.292:1): state=initialized audit_enabled=0 res=1 May 27 18:21:39.940061 kernel: thermal_sys: Registered thermal governor 'step_wise' May 27 18:21:39.940070 kernel: thermal_sys: Registered thermal governor 'user_space' May 27 18:21:39.940079 kernel: cpuidle: using governor menu May 27 18:21:39.940088 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 27 18:21:39.940096 kernel: dca service started, version 1.12.1 May 27 18:21:39.940105 kernel: PCI: Using configuration type 1 for base access May 27 18:21:39.940116 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. May 27 18:21:39.940125 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages May 27 18:21:39.940134 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page May 27 18:21:39.940143 kernel: ACPI: Added _OSI(Module Device) May 27 18:21:39.940152 kernel: ACPI: Added _OSI(Processor Device) May 27 18:21:39.940161 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 27 18:21:39.940170 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 27 18:21:39.940179 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 27 18:21:39.940188 kernel: ACPI: Interpreter enabled May 27 18:21:39.940199 kernel: ACPI: PM: (supports S0 S3 S5) May 27 18:21:39.940207 kernel: ACPI: Using IOAPIC for interrupt routing May 27 18:21:39.940216 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug May 27 18:21:39.940225 kernel: PCI: Using E820 reservations for host bridge windows May 27 18:21:39.940234 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F May 27 18:21:39.940243 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) May 27 18:21:39.940377 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] May 27 18:21:39.940471 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] May 27 18:21:39.940564 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge May 27 18:21:39.940578 kernel: acpiphp: Slot [3] registered May 27 18:21:39.940587 kernel: acpiphp: Slot [4] registered May 27 18:21:39.940597 kernel: acpiphp: Slot [5] registered May 27 18:21:39.940606 kernel: acpiphp: Slot [6] registered May 27 18:21:39.940615 kernel: acpiphp: Slot [7] registered May 27 18:21:39.940624 kernel: acpiphp: Slot [8] registered May 27 18:21:39.940632 kernel: acpiphp: Slot [9] registered May 27 18:21:39.940645 kernel: acpiphp: Slot [10] 
registered May 27 18:21:39.940653 kernel: acpiphp: Slot [11] registered May 27 18:21:39.940662 kernel: acpiphp: Slot [12] registered May 27 18:21:39.940671 kernel: acpiphp: Slot [13] registered May 27 18:21:39.940775 kernel: acpiphp: Slot [14] registered May 27 18:21:39.940790 kernel: acpiphp: Slot [15] registered May 27 18:21:39.940799 kernel: acpiphp: Slot [16] registered May 27 18:21:39.940808 kernel: acpiphp: Slot [17] registered May 27 18:21:39.940816 kernel: acpiphp: Slot [18] registered May 27 18:21:39.940828 kernel: acpiphp: Slot [19] registered May 27 18:21:39.940837 kernel: acpiphp: Slot [20] registered May 27 18:21:39.940846 kernel: acpiphp: Slot [21] registered May 27 18:21:39.940855 kernel: acpiphp: Slot [22] registered May 27 18:21:39.940864 kernel: acpiphp: Slot [23] registered May 27 18:21:39.940873 kernel: acpiphp: Slot [24] registered May 27 18:21:39.940882 kernel: acpiphp: Slot [25] registered May 27 18:21:39.940890 kernel: acpiphp: Slot [26] registered May 27 18:21:39.940899 kernel: acpiphp: Slot [27] registered May 27 18:21:39.940908 kernel: acpiphp: Slot [28] registered May 27 18:21:39.940918 kernel: acpiphp: Slot [29] registered May 27 18:21:39.940927 kernel: acpiphp: Slot [30] registered May 27 18:21:39.940936 kernel: acpiphp: Slot [31] registered May 27 18:21:39.940945 kernel: PCI host bridge to bus 0000:00 May 27 18:21:39.941044 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] May 27 18:21:39.941127 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] May 27 18:21:39.941205 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] May 27 18:21:39.941282 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] May 27 18:21:39.941379 kernel: pci_bus 0000:00: root bus resource [mem 0xc000000000-0xc07fffffff window] May 27 18:21:39.941461 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] May 27 18:21:39.941568 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint May 27 18:21:39.941700 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint May 27 18:21:39.941810 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint May 27 18:21:39.941907 kernel: pci 0000:00:01.1: BAR 4 [io 0xc120-0xc12f] May 27 18:21:39.942007 kernel: pci 0000:00:01.1: BAR 0 [io 0x01f0-0x01f7]: legacy IDE quirk May 27 18:21:39.942104 kernel: pci 0000:00:01.1: BAR 1 [io 0x03f6]: legacy IDE quirk May 27 18:21:39.942195 kernel: pci 0000:00:01.1: BAR 2 [io 0x0170-0x0177]: legacy IDE quirk May 27 18:21:39.942288 kernel: pci 0000:00:01.1: BAR 3 [io 0x0376]: legacy IDE quirk May 27 18:21:39.942388 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint May 27 18:21:39.942479 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI May 27 18:21:39.942569 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB May 27 18:21:39.942663 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint May 27 18:21:39.942773 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref] May 27 18:21:39.942862 kernel: pci 0000:00:02.0: BAR 2 [mem 0xc000000000-0xc000003fff 64bit pref] May 27 18:21:39.942950 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff] May 27 18:21:39.943037 kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref] May 27 18:21:39.943122 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 
0x000c0000-0x000dffff] May 27 18:21:39.943224 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint May 27 18:21:39.943334 kernel: pci 0000:00:03.0: BAR 0 [io 0xc080-0xc0bf] May 27 18:21:39.943422 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff] May 27 18:21:39.943508 kernel: pci 0000:00:03.0: BAR 4 [mem 0xc000004000-0xc000007fff 64bit pref] May 27 18:21:39.943594 kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref] May 27 18:21:39.943710 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint May 27 18:21:39.943802 kernel: pci 0000:00:04.0: BAR 0 [io 0xc000-0xc07f] May 27 18:21:39.943894 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff] May 27 18:21:39.943980 kernel: pci 0000:00:04.0: BAR 4 [mem 0xc000008000-0xc00000bfff 64bit pref] May 27 18:21:39.944073 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint May 27 18:21:39.944159 kernel: pci 0000:00:05.0: BAR 0 [io 0xc0c0-0xc0ff] May 27 18:21:39.944246 kernel: pci 0000:00:05.0: BAR 4 [mem 0xc00000c000-0xc00000ffff 64bit pref] May 27 18:21:39.944339 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint May 27 18:21:39.944425 kernel: pci 0000:00:06.0: BAR 0 [io 0xc100-0xc11f] May 27 18:21:39.944515 kernel: pci 0000:00:06.0: BAR 1 [mem 0xfeb93000-0xfeb93fff] May 27 18:21:39.944601 kernel: pci 0000:00:06.0: BAR 4 [mem 0xc000010000-0xc000013fff 64bit pref] May 27 18:21:39.944615 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 May 27 18:21:39.944624 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 May 27 18:21:39.944633 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 May 27 18:21:39.944643 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 May 27 18:21:39.944652 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 May 27 18:21:39.944661 kernel: iommu: Default domain type: Translated May 27 18:21:39.944672 kernel: iommu: DMA domain TLB invalidation policy: lazy mode May 27 18:21:39.944697 kernel: PCI: Using ACPI for IRQ routing May 27 18:21:39.944707 kernel: PCI: pci_cache_line_size set to 64 bytes May 27 18:21:39.944716 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] May 27 18:21:39.944725 kernel: e820: reserve RAM buffer [mem 0xbffdd000-0xbfffffff] May 27 18:21:39.944814 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device May 27 18:21:39.944900 kernel: pci 0000:00:02.0: vgaarb: bridge control possible May 27 18:21:39.944985 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none May 27 18:21:39.944998 kernel: vgaarb: loaded May 27 18:21:39.945011 kernel: clocksource: Switched to clocksource kvm-clock May 27 18:21:39.945020 kernel: VFS: Disk quotas dquot_6.6.0 May 27 18:21:39.945029 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 27 18:21:39.945038 kernel: pnp: PnP ACPI init May 27 18:21:39.945127 kernel: pnp 00:03: [dma 2] May 27 18:21:39.945142 kernel: pnp: PnP ACPI: found 5 devices May 27 18:21:39.945151 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns May 27 18:21:39.945160 kernel: NET: Registered PF_INET protocol family May 27 18:21:39.945172 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) May 27 18:21:39.945182 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) May 27 18:21:39.945191 kernel: Table-perturb hash table entries: 65536 
(order: 6, 262144 bytes, linear) May 27 18:21:39.945200 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) May 27 18:21:39.945209 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) May 27 18:21:39.945218 kernel: TCP: Hash tables configured (established 32768 bind 32768) May 27 18:21:39.945227 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) May 27 18:21:39.945235 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) May 27 18:21:39.945245 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 27 18:21:39.945255 kernel: NET: Registered PF_XDP protocol family May 27 18:21:39.945331 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] May 27 18:21:39.945423 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] May 27 18:21:39.945503 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] May 27 18:21:39.945582 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window] May 27 18:21:39.945663 kernel: pci_bus 0000:00: resource 8 [mem 0xc000000000-0xc07fffffff window] May 27 18:21:39.945778 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release May 27 18:21:39.945873 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers May 27 18:21:39.945892 kernel: PCI: CLS 0 bytes, default 64 May 27 18:21:39.945902 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) May 27 18:21:39.945912 kernel: software IO TLB: mapped [mem 0x00000000bbfdd000-0x00000000bffdd000] (64MB) May 27 18:21:39.945921 kernel: Initialise system trusted keyrings May 27 18:21:39.945931 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 May 27 18:21:39.945940 kernel: Key type asymmetric registered May 27 18:21:39.945950 kernel: Asymmetric key parser 'x509' registered May 27 18:21:39.945959 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) May 27 18:21:39.945969 kernel: io scheduler mq-deadline registered May 27 18:21:39.945981 kernel: io scheduler kyber registered May 27 18:21:39.945990 kernel: io scheduler bfq registered May 27 18:21:39.946000 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 May 27 18:21:39.946010 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10 May 27 18:21:39.946020 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 May 27 18:21:39.946030 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 May 27 18:21:39.946039 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 May 27 18:21:39.946049 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 27 18:21:39.946059 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A May 27 18:21:39.946071 kernel: random: crng init done May 27 18:21:39.946080 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 May 27 18:21:39.946090 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 May 27 18:21:39.946099 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 May 27 18:21:39.946191 kernel: rtc_cmos 00:04: RTC can wake from S4 May 27 18:21:39.946206 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 May 27 18:21:39.946287 kernel: rtc_cmos 00:04: registered as rtc0 May 27 18:21:39.946371 kernel: rtc_cmos 00:04: setting system clock to 2025-05-27T18:21:39 UTC (1748370099) May 27 18:21:39.946459 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram May 27 18:21:39.946472 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled May 27 18:21:39.946481 kernel: NET: Registered 
PF_INET6 protocol family May 27 18:21:39.946490 kernel: Segment Routing with IPv6 May 27 18:21:39.946499 kernel: In-situ OAM (IOAM) with IPv6 May 27 18:21:39.946508 kernel: NET: Registered PF_PACKET protocol family May 27 18:21:39.946517 kernel: Key type dns_resolver registered May 27 18:21:39.946526 kernel: IPI shorthand broadcast: enabled May 27 18:21:39.946535 kernel: sched_clock: Marking stable (3646007917, 185492300)->(3869863608, -38363391) May 27 18:21:39.946546 kernel: registered taskstats version 1 May 27 18:21:39.946555 kernel: Loading compiled-in X.509 certificates May 27 18:21:39.946564 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.30-flatcar: 9507e5c390e18536b38d58c90da64baf0ac9837c' May 27 18:21:39.946573 kernel: Demotion targets for Node 0: null May 27 18:21:39.946582 kernel: Key type .fscrypt registered May 27 18:21:39.946591 kernel: Key type fscrypt-provisioning registered May 27 18:21:39.946600 kernel: ima: No TPM chip found, activating TPM-bypass! May 27 18:21:39.946608 kernel: ima: Allocated hash algorithm: sha1 May 27 18:21:39.946619 kernel: ima: No architecture policies found May 27 18:21:39.946628 kernel: clk: Disabling unused clocks May 27 18:21:39.946636 kernel: Warning: unable to open an initial console. May 27 18:21:39.946646 kernel: Freeing unused kernel image (initmem) memory: 54416K May 27 18:21:39.946655 kernel: Write protecting the kernel read-only data: 24576k May 27 18:21:39.946664 kernel: Freeing unused kernel image (rodata/data gap) memory: 288K May 27 18:21:39.946672 kernel: Run /init as init process May 27 18:21:39.946705 kernel: with arguments: May 27 18:21:39.946715 kernel: /init May 27 18:21:39.946727 kernel: with environment: May 27 18:21:39.946735 kernel: HOME=/ May 27 18:21:39.946744 kernel: TERM=linux May 27 18:21:39.946753 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 27 18:21:39.946763 systemd[1]: Successfully made /usr/ read-only. May 27 18:21:39.946775 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 27 18:21:39.946788 systemd[1]: Detected virtualization kvm. May 27 18:21:39.946804 systemd[1]: Detected architecture x86-64. May 27 18:21:39.946815 systemd[1]: Running in initrd. May 27 18:21:39.946825 systemd[1]: No hostname configured, using default hostname. May 27 18:21:39.946835 systemd[1]: Hostname set to . May 27 18:21:39.946845 systemd[1]: Initializing machine ID from VM UUID. May 27 18:21:39.946855 systemd[1]: Queued start job for default target initrd.target. May 27 18:21:39.946864 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 27 18:21:39.946877 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 27 18:21:39.946887 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 27 18:21:39.946897 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 27 18:21:39.946907 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 27 18:21:39.946918 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... 
May 27 18:21:39.946929 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 27 18:21:39.946941 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 27 18:21:39.946951 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 27 18:21:39.946961 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 27 18:21:39.946970 systemd[1]: Reached target paths.target - Path Units. May 27 18:21:39.946980 systemd[1]: Reached target slices.target - Slice Units. May 27 18:21:39.946990 systemd[1]: Reached target swap.target - Swaps. May 27 18:21:39.947000 systemd[1]: Reached target timers.target - Timer Units. May 27 18:21:39.947010 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. May 27 18:21:39.947020 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 27 18:21:39.947031 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 27 18:21:39.947041 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. May 27 18:21:39.947051 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 27 18:21:39.947061 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 27 18:21:39.947071 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 27 18:21:39.947081 systemd[1]: Reached target sockets.target - Socket Units. May 27 18:21:39.947091 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 27 18:21:39.947100 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 27 18:21:39.947112 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 27 18:21:39.947122 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). May 27 18:21:39.947134 systemd[1]: Starting systemd-fsck-usr.service... May 27 18:21:39.947144 systemd[1]: Starting systemd-journald.service - Journal Service... May 27 18:21:39.947154 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 27 18:21:39.947185 systemd-journald[214]: Collecting audit messages is disabled. May 27 18:21:39.947208 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 27 18:21:39.947219 systemd-journald[214]: Journal started May 27 18:21:39.947244 systemd-journald[214]: Runtime Journal (/run/log/journal/d688174c4769490fbcb704462e53bf67) is 8M, max 78.5M, 70.5M free. May 27 18:21:39.963462 systemd[1]: Started systemd-journald.service - Journal Service. May 27 18:21:39.963885 systemd-modules-load[216]: Inserted module 'overlay' May 27 18:21:39.966589 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 27 18:21:39.969093 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 27 18:21:39.976783 systemd[1]: Finished systemd-fsck-usr.service. May 27 18:21:39.984575 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 27 18:21:39.988772 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 27 18:21:40.002711 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. 
Update your scripts to load br_netfilter if you need this. May 27 18:21:40.004712 kernel: Bridge firewalling registered May 27 18:21:40.004753 systemd-modules-load[216]: Inserted module 'br_netfilter' May 27 18:21:40.005552 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 27 18:21:40.045122 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 27 18:21:40.048987 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 27 18:21:40.055821 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 27 18:21:40.057110 systemd-tmpfiles[227]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. May 27 18:21:40.063805 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 27 18:21:40.072148 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 27 18:21:40.076852 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 27 18:21:40.086634 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 27 18:21:40.088802 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 27 18:21:40.100799 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 27 18:21:40.101814 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 27 18:21:40.104363 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 27 18:21:40.128019 dracut-cmdline[249]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=daa3e2d55cc4a7ff0ec15aa9bb0c07df9999cb4e3041f3adad1b1101efdea101 May 27 18:21:40.152663 systemd-resolved[253]: Positive Trust Anchors: May 27 18:21:40.152677 systemd-resolved[253]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 27 18:21:40.152738 systemd-resolved[253]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 27 18:21:40.156861 systemd-resolved[253]: Defaulting to hostname 'linux'. May 27 18:21:40.157797 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 27 18:21:40.159463 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 27 18:21:40.221722 kernel: SCSI subsystem initialized May 27 18:21:40.231741 kernel: Loading iSCSI transport class v2.0-870. 
May 27 18:21:40.245760 kernel: iscsi: registered transport (tcp) May 27 18:21:40.268306 kernel: iscsi: registered transport (qla4xxx) May 27 18:21:40.268369 kernel: QLogic iSCSI HBA Driver May 27 18:21:40.292591 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 27 18:21:40.307675 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 27 18:21:40.310112 systemd[1]: Reached target network-pre.target - Preparation for Network. May 27 18:21:40.377553 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 27 18:21:40.379609 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 27 18:21:40.454003 kernel: raid6: sse2x4 gen() 5245 MB/s May 27 18:21:40.471788 kernel: raid6: sse2x2 gen() 14066 MB/s May 27 18:21:40.490190 kernel: raid6: sse2x1 gen() 9921 MB/s May 27 18:21:40.490270 kernel: raid6: using algorithm sse2x2 gen() 14066 MB/s May 27 18:21:40.509284 kernel: raid6: .... xor() 9300 MB/s, rmw enabled May 27 18:21:40.509376 kernel: raid6: using ssse3x2 recovery algorithm May 27 18:21:40.532070 kernel: xor: measuring software checksum speed May 27 18:21:40.532130 kernel: prefetch64-sse : 18503 MB/sec May 27 18:21:40.533360 kernel: generic_sse : 16828 MB/sec May 27 18:21:40.533421 kernel: xor: using function: prefetch64-sse (18503 MB/sec) May 27 18:21:40.727751 kernel: Btrfs loaded, zoned=no, fsverity=no May 27 18:21:40.736409 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 27 18:21:40.739627 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 27 18:21:40.795539 systemd-udevd[462]: Using default interface naming scheme 'v255'. May 27 18:21:40.809102 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 27 18:21:40.815947 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 27 18:21:40.842583 dracut-pre-trigger[470]: rd.md=0: removing MD RAID activation May 27 18:21:40.878107 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 27 18:21:40.883044 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 27 18:21:40.962640 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 27 18:21:40.970998 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 27 18:21:41.066729 kernel: virtio_blk virtio2: 2/0/0 default/read/poll queues May 27 18:21:41.073011 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 May 27 18:21:41.076841 kernel: virtio_blk virtio2: [vda] 20971520 512-byte logical blocks (10.7 GB/10.0 GiB) May 27 18:21:41.101563 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 27 18:21:41.101769 kernel: GPT:17805311 != 20971519 May 27 18:21:41.101793 kernel: GPT:Alternate GPT header not at the end of the disk. May 27 18:21:41.101806 kernel: GPT:17805311 != 20971519 May 27 18:21:41.101818 kernel: GPT: Use GNU Parted to correct GPT errors. May 27 18:21:41.101831 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 27 18:21:41.104714 kernel: libata version 3.00 loaded. May 27 18:21:41.108705 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 27 18:21:41.109712 kernel: ata_piix 0000:00:01.1: version 2.13 May 27 18:21:41.110330 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
May 27 18:21:41.112457 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 27 18:21:41.115204 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 27 18:21:41.117026 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. May 27 18:21:41.122716 kernel: scsi host0: ata_piix May 27 18:21:41.124910 kernel: scsi host1: ata_piix May 27 18:21:41.125037 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14 lpm-pol 0 May 27 18:21:41.126941 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15 lpm-pol 0 May 27 18:21:41.185079 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. May 27 18:21:41.203135 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 27 18:21:41.214618 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. May 27 18:21:41.223372 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. May 27 18:21:41.224010 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. May 27 18:21:41.235209 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 27 18:21:41.236511 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 27 18:21:41.259947 disk-uuid[560]: Primary Header is updated. May 27 18:21:41.259947 disk-uuid[560]: Secondary Entries is updated. May 27 18:21:41.259947 disk-uuid[560]: Secondary Header is updated. May 27 18:21:41.266800 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 27 18:21:41.376422 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 27 18:21:41.416116 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 27 18:21:41.416750 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 27 18:21:41.419040 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 27 18:21:41.423790 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 27 18:21:41.453627 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 27 18:21:42.283814 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 27 18:21:42.286403 disk-uuid[561]: The operation has completed successfully. May 27 18:21:42.367384 systemd[1]: disk-uuid.service: Deactivated successfully. May 27 18:21:42.368222 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 27 18:21:42.415135 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 27 18:21:42.445055 sh[586]: Success May 27 18:21:42.493077 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 27 18:21:42.493186 kernel: device-mapper: uevent: version 1.0.3 May 27 18:21:42.496530 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev May 27 18:21:42.513798 kernel: device-mapper: verity: sha256 using shash "sha256-ssse3" May 27 18:21:42.580657 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 27 18:21:42.583750 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 27 18:21:42.600027 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
May 27 18:21:42.611284 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' May 27 18:21:42.611352 kernel: BTRFS: device fsid 7caef027-0915-4c01-a3d5-28eff70f7ebd devid 1 transid 39 /dev/mapper/usr (253:0) scanned by mount (599) May 27 18:21:42.616999 kernel: BTRFS info (device dm-0): first mount of filesystem 7caef027-0915-4c01-a3d5-28eff70f7ebd May 27 18:21:42.617058 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm May 27 18:21:42.618799 kernel: BTRFS info (device dm-0): using free-space-tree May 27 18:21:42.635897 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 27 18:21:42.637852 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. May 27 18:21:42.639901 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 27 18:21:42.641623 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 27 18:21:42.645920 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... May 27 18:21:42.673744 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 (254:6) scanned by mount (622) May 27 18:21:42.682910 kernel: BTRFS info (device vda6): first mount of filesystem be856aed-e34b-4b7b-be8a-0716b27db212 May 27 18:21:42.682954 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 27 18:21:42.687094 kernel: BTRFS info (device vda6): using free-space-tree May 27 18:21:42.704719 kernel: BTRFS info (device vda6): last unmount of filesystem be856aed-e34b-4b7b-be8a-0716b27db212 May 27 18:21:42.706133 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 27 18:21:42.707582 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 27 18:21:42.774598 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 27 18:21:42.778011 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 27 18:21:42.817657 systemd-networkd[770]: lo: Link UP May 27 18:21:42.817668 systemd-networkd[770]: lo: Gained carrier May 27 18:21:42.818861 systemd-networkd[770]: Enumeration completed May 27 18:21:42.818946 systemd[1]: Started systemd-networkd.service - Network Configuration. May 27 18:21:42.819579 systemd[1]: Reached target network.target - Network. May 27 18:21:42.823063 systemd-networkd[770]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 27 18:21:42.823067 systemd-networkd[770]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 27 18:21:42.825707 systemd-networkd[770]: eth0: Link UP May 27 18:21:42.825712 systemd-networkd[770]: eth0: Gained carrier May 27 18:21:42.825724 systemd-networkd[770]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 27 18:21:42.846496 systemd-networkd[770]: eth0: DHCPv4 address 172.24.4.229/24, gateway 172.24.4.1 acquired from 172.24.4.1 May 27 18:21:42.939433 ignition[671]: Ignition 2.21.0 May 27 18:21:42.940958 ignition[671]: Stage: fetch-offline May 27 18:21:42.941057 ignition[671]: no configs at "/usr/lib/ignition/base.d" May 27 18:21:42.941076 ignition[671]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" May 27 18:21:42.944426 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
May 27 18:21:42.941238 ignition[671]: parsed url from cmdline: "" May 27 18:21:42.946578 systemd-resolved[253]: Detected conflict on linux IN A 172.24.4.229 May 27 18:21:42.941245 ignition[671]: no config URL provided May 27 18:21:42.946597 systemd-resolved[253]: Hostname conflict, changing published hostname from 'linux' to 'linux2'. May 27 18:21:42.941253 ignition[671]: reading system config file "/usr/lib/ignition/user.ign" May 27 18:21:42.946876 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... May 27 18:21:42.941266 ignition[671]: no config at "/usr/lib/ignition/user.ign" May 27 18:21:42.941273 ignition[671]: failed to fetch config: resource requires networking May 27 18:21:42.941559 ignition[671]: Ignition finished successfully May 27 18:21:42.987987 ignition[780]: Ignition 2.21.0 May 27 18:21:42.988017 ignition[780]: Stage: fetch May 27 18:21:42.988314 ignition[780]: no configs at "/usr/lib/ignition/base.d" May 27 18:21:42.988338 ignition[780]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" May 27 18:21:42.988501 ignition[780]: parsed url from cmdline: "" May 27 18:21:42.988510 ignition[780]: no config URL provided May 27 18:21:42.988522 ignition[780]: reading system config file "/usr/lib/ignition/user.ign" May 27 18:21:42.988539 ignition[780]: no config at "/usr/lib/ignition/user.ign" May 27 18:21:42.988763 ignition[780]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 May 27 18:21:42.988848 ignition[780]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... May 27 18:21:42.988891 ignition[780]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... May 27 18:21:43.276878 ignition[780]: GET result: OK May 27 18:21:43.277257 ignition[780]: parsing config with SHA512: b1df2d11037b0b9922b844e7af9f2d2c9235d21b3c192f7aee1b8929edbc46a14b10ab995591819dd6eaf400384976eb5a8b13623db65ab2dffb503de738b5bd May 27 18:21:43.288220 unknown[780]: fetched base config from "system" May 27 18:21:43.288250 unknown[780]: fetched base config from "system" May 27 18:21:43.289058 ignition[780]: fetch: fetch complete May 27 18:21:43.288264 unknown[780]: fetched user config from "openstack" May 27 18:21:43.289071 ignition[780]: fetch: fetch passed May 27 18:21:43.294812 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). May 27 18:21:43.289158 ignition[780]: Ignition finished successfully May 27 18:21:43.299169 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... May 27 18:21:43.346358 ignition[787]: Ignition 2.21.0 May 27 18:21:43.346388 ignition[787]: Stage: kargs May 27 18:21:43.346666 ignition[787]: no configs at "/usr/lib/ignition/base.d" May 27 18:21:43.346735 ignition[787]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" May 27 18:21:43.352826 ignition[787]: kargs: kargs passed May 27 18:21:43.352949 ignition[787]: Ignition finished successfully May 27 18:21:43.355565 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 27 18:21:43.359417 systemd[1]: Starting ignition-disks.service - Ignition (disks)... May 27 18:21:43.416375 ignition[794]: Ignition 2.21.0 May 27 18:21:43.416404 ignition[794]: Stage: disks May 27 18:21:43.416736 ignition[794]: no configs at "/usr/lib/ignition/base.d" May 27 18:21:43.416762 ignition[794]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" May 27 18:21:43.420591 systemd[1]: Finished ignition-disks.service - Ignition (disks). 
May 27 18:21:43.418459 ignition[794]: disks: disks passed May 27 18:21:43.422560 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 27 18:21:43.418545 ignition[794]: Ignition finished successfully May 27 18:21:43.424751 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 27 18:21:43.427273 systemd[1]: Reached target local-fs.target - Local File Systems. May 27 18:21:43.429379 systemd[1]: Reached target sysinit.target - System Initialization. May 27 18:21:43.432054 systemd[1]: Reached target basic.target - Basic System. May 27 18:21:43.436930 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 27 18:21:43.482364 systemd-fsck[802]: ROOT: clean, 15/1628000 files, 120826/1617920 blocks May 27 18:21:43.495017 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 27 18:21:43.499289 systemd[1]: Mounting sysroot.mount - /sysroot... May 27 18:21:43.704731 kernel: EXT4-fs (vda9): mounted filesystem bf93e767-f532-4480-b210-a196f7ac181e r/w with ordered data mode. Quota mode: none. May 27 18:21:43.705937 systemd[1]: Mounted sysroot.mount - /sysroot. May 27 18:21:43.709302 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 27 18:21:43.723153 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 27 18:21:43.727311 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 27 18:21:43.731314 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. May 27 18:21:43.738950 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent... May 27 18:21:43.743833 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 27 18:21:43.743900 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 27 18:21:43.752789 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 27 18:21:43.756516 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 (254:6) scanned by mount (810) May 27 18:21:43.771793 kernel: BTRFS info (device vda6): first mount of filesystem be856aed-e34b-4b7b-be8a-0716b27db212 May 27 18:21:43.771893 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 27 18:21:43.771926 kernel: BTRFS info (device vda6): using free-space-tree May 27 18:21:43.773155 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 27 18:21:43.804393 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 27 18:21:43.906704 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev May 27 18:21:43.908119 initrd-setup-root[839]: cut: /sysroot/etc/passwd: No such file or directory May 27 18:21:43.912960 initrd-setup-root[846]: cut: /sysroot/etc/group: No such file or directory May 27 18:21:43.917937 initrd-setup-root[853]: cut: /sysroot/etc/shadow: No such file or directory May 27 18:21:43.923698 initrd-setup-root[860]: cut: /sysroot/etc/gshadow: No such file or directory May 27 18:21:44.020900 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 27 18:21:44.022847 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 27 18:21:44.023996 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... 
May 27 18:21:44.034992 systemd-networkd[770]: eth0: Gained IPv6LL May 27 18:21:44.037673 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 27 18:21:44.040453 kernel: BTRFS info (device vda6): last unmount of filesystem be856aed-e34b-4b7b-be8a-0716b27db212 May 27 18:21:44.062955 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 27 18:21:44.067609 ignition[928]: INFO : Ignition 2.21.0 May 27 18:21:44.069219 ignition[928]: INFO : Stage: mount May 27 18:21:44.069219 ignition[928]: INFO : no configs at "/usr/lib/ignition/base.d" May 27 18:21:44.069219 ignition[928]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" May 27 18:21:44.072116 ignition[928]: INFO : mount: mount passed May 27 18:21:44.072116 ignition[928]: INFO : Ignition finished successfully May 27 18:21:44.073183 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 27 18:21:44.946798 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev May 27 18:21:46.959920 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev May 27 18:21:50.975765 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev May 27 18:21:50.983077 coreos-metadata[812]: May 27 18:21:50.982 WARN failed to locate config-drive, using the metadata service API instead May 27 18:21:51.023905 coreos-metadata[812]: May 27 18:21:51.023 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 May 27 18:21:51.042644 coreos-metadata[812]: May 27 18:21:51.041 INFO Fetch successful May 27 18:21:51.044241 coreos-metadata[812]: May 27 18:21:51.044 INFO wrote hostname ci-4344-0-0-3-6dd1c807ec.novalocal to /sysroot/etc/hostname May 27 18:21:51.050103 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. May 27 18:21:51.050400 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent. May 27 18:21:51.059016 systemd[1]: Starting ignition-files.service - Ignition (files)... May 27 18:21:51.095494 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 27 18:21:51.130789 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 (254:6) scanned by mount (945) May 27 18:21:51.138737 kernel: BTRFS info (device vda6): first mount of filesystem be856aed-e34b-4b7b-be8a-0716b27db212 May 27 18:21:51.138823 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 27 18:21:51.142975 kernel: BTRFS info (device vda6): using free-space-tree May 27 18:21:51.157669 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
May 27 18:21:51.210194 ignition[963]: INFO : Ignition 2.21.0 May 27 18:21:51.210194 ignition[963]: INFO : Stage: files May 27 18:21:51.214032 ignition[963]: INFO : no configs at "/usr/lib/ignition/base.d" May 27 18:21:51.214032 ignition[963]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" May 27 18:21:51.220081 ignition[963]: DEBUG : files: compiled without relabeling support, skipping May 27 18:21:51.220081 ignition[963]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 27 18:21:51.220081 ignition[963]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 27 18:21:51.226521 ignition[963]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 27 18:21:51.226521 ignition[963]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 27 18:21:51.230860 ignition[963]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 27 18:21:51.228440 unknown[963]: wrote ssh authorized keys file for user: core May 27 18:21:51.234606 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 27 18:21:51.234606 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 May 27 18:21:51.342529 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 27 18:21:51.689749 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 27 18:21:51.689749 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" May 27 18:21:51.692558 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" May 27 18:21:51.692558 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" May 27 18:21:51.692558 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" May 27 18:21:51.692558 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 27 18:21:51.692558 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 27 18:21:51.692558 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 27 18:21:51.692558 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 27 18:21:51.699003 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" May 27 18:21:51.699003 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 27 18:21:51.699003 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" May 27 18:21:51.702155 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" May 27 18:21:51.702155 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" May 27 18:21:51.702155 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 May 27 18:21:52.453893 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK May 27 18:21:54.103329 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" May 27 18:21:54.104930 ignition[963]: INFO : files: op(b): [started] processing unit "prepare-helm.service" May 27 18:21:54.106650 ignition[963]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 27 18:21:54.113974 ignition[963]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 27 18:21:54.113974 ignition[963]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" May 27 18:21:54.113974 ignition[963]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" May 27 18:21:54.122226 ignition[963]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" May 27 18:21:54.122226 ignition[963]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" May 27 18:21:54.122226 ignition[963]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" May 27 18:21:54.122226 ignition[963]: INFO : files: files passed May 27 18:21:54.122226 ignition[963]: INFO : Ignition finished successfully May 27 18:21:54.117045 systemd[1]: Finished ignition-files.service - Ignition (files). May 27 18:21:54.122812 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 27 18:21:54.126391 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 27 18:21:54.139397 systemd[1]: ignition-quench.service: Deactivated successfully. May 27 18:21:54.153048 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 27 18:21:54.153048 initrd-setup-root-after-ignition[992]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 27 18:21:54.139493 systemd[1]: Finished ignition-quench.service - Ignition (record completion). May 27 18:21:54.156118 initrd-setup-root-after-ignition[996]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 27 18:21:54.155832 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 27 18:21:54.159571 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 27 18:21:54.164483 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 27 18:21:54.217598 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 27 18:21:54.217898 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 27 18:21:54.221510 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. 
May 27 18:21:54.223006 systemd[1]: Reached target initrd.target - Initrd Default Target. May 27 18:21:54.226372 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 27 18:21:54.228303 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 27 18:21:54.272145 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 27 18:21:54.277254 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 27 18:21:54.321327 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 27 18:21:54.323085 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 27 18:21:54.326329 systemd[1]: Stopped target timers.target - Timer Units. May 27 18:21:54.329412 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 27 18:21:54.329867 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 27 18:21:54.332734 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 27 18:21:54.334537 systemd[1]: Stopped target basic.target - Basic System. May 27 18:21:54.337587 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 27 18:21:54.340251 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 27 18:21:54.342955 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 27 18:21:54.346004 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. May 27 18:21:54.348933 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 27 18:21:54.352004 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 27 18:21:54.354998 systemd[1]: Stopped target sysinit.target - System Initialization. May 27 18:21:54.358066 systemd[1]: Stopped target local-fs.target - Local File Systems. May 27 18:21:54.361171 systemd[1]: Stopped target swap.target - Swaps. May 27 18:21:54.364089 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 27 18:21:54.364511 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 27 18:21:54.367318 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 27 18:21:54.369373 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 27 18:21:54.371972 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 27 18:21:54.372721 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 27 18:21:54.374931 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 27 18:21:54.375209 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 27 18:21:54.379283 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 27 18:21:54.379755 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 27 18:21:54.382879 systemd[1]: ignition-files.service: Deactivated successfully. May 27 18:21:54.383251 systemd[1]: Stopped ignition-files.service - Ignition (files). May 27 18:21:54.389037 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 27 18:21:54.394137 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 27 18:21:54.397415 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. 
May 27 18:21:54.397854 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 27 18:21:54.402074 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 27 18:21:54.402458 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 27 18:21:54.416031 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 27 18:21:54.419874 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 27 18:21:54.433936 ignition[1017]: INFO : Ignition 2.21.0 May 27 18:21:54.433936 ignition[1017]: INFO : Stage: umount May 27 18:21:54.436200 ignition[1017]: INFO : no configs at "/usr/lib/ignition/base.d" May 27 18:21:54.436200 ignition[1017]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" May 27 18:21:54.440113 ignition[1017]: INFO : umount: umount passed May 27 18:21:54.441606 ignition[1017]: INFO : Ignition finished successfully May 27 18:21:54.443170 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 27 18:21:54.444365 systemd[1]: ignition-mount.service: Deactivated successfully. May 27 18:21:54.444763 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 27 18:21:54.446281 systemd[1]: ignition-disks.service: Deactivated successfully. May 27 18:21:54.446354 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 27 18:21:54.447120 systemd[1]: ignition-kargs.service: Deactivated successfully. May 27 18:21:54.447160 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 27 18:21:54.448082 systemd[1]: ignition-fetch.service: Deactivated successfully. May 27 18:21:54.448121 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). May 27 18:21:54.449113 systemd[1]: Stopped target network.target - Network. May 27 18:21:54.450155 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 27 18:21:54.450202 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 27 18:21:54.451292 systemd[1]: Stopped target paths.target - Path Units. May 27 18:21:54.452254 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 27 18:21:54.452503 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 27 18:21:54.453312 systemd[1]: Stopped target slices.target - Slice Units. May 27 18:21:54.454310 systemd[1]: Stopped target sockets.target - Socket Units. May 27 18:21:54.455352 systemd[1]: iscsid.socket: Deactivated successfully. May 27 18:21:54.455387 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 27 18:21:54.457159 systemd[1]: iscsiuio.socket: Deactivated successfully. May 27 18:21:54.457192 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 27 18:21:54.458385 systemd[1]: ignition-setup.service: Deactivated successfully. May 27 18:21:54.458432 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 27 18:21:54.459385 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 27 18:21:54.459424 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 27 18:21:54.460927 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 27 18:21:54.462144 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 27 18:21:54.467742 systemd[1]: systemd-resolved.service: Deactivated successfully. May 27 18:21:54.467839 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. 
May 27 18:21:54.475567 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. May 27 18:21:54.476916 systemd[1]: systemd-networkd.service: Deactivated successfully. May 27 18:21:54.477012 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 27 18:21:54.479413 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. May 27 18:21:54.479884 systemd[1]: Stopped target network-pre.target - Preparation for Network. May 27 18:21:54.481485 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 27 18:21:54.481522 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 27 18:21:54.482886 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 27 18:21:54.484982 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 27 18:21:54.485031 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 27 18:21:54.486225 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 27 18:21:54.486268 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 27 18:21:54.488637 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 27 18:21:54.488713 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 27 18:21:54.490640 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 27 18:21:54.490708 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 27 18:21:54.493388 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 27 18:21:54.495075 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 27 18:21:54.495131 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. May 27 18:21:54.512184 systemd[1]: systemd-udevd.service: Deactivated successfully. May 27 18:21:54.512349 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 27 18:21:54.513461 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 27 18:21:54.513518 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 27 18:21:54.515847 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 27 18:21:54.515877 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 27 18:21:54.517392 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 27 18:21:54.517436 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 27 18:21:54.519126 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 27 18:21:54.519173 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 27 18:21:54.520158 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 27 18:21:54.520205 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 27 18:21:54.522833 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 27 18:21:54.523865 systemd[1]: systemd-network-generator.service: Deactivated successfully. May 27 18:21:54.523926 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. May 27 18:21:54.526788 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. 
May 27 18:21:54.526840 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 27 18:21:54.528543 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. May 27 18:21:54.528587 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 27 18:21:54.530174 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 27 18:21:54.530216 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 27 18:21:54.531042 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 27 18:21:54.531083 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 27 18:21:54.533984 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. May 27 18:21:54.534035 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully. May 27 18:21:54.534072 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. May 27 18:21:54.534111 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. May 27 18:21:54.534399 systemd[1]: network-cleanup.service: Deactivated successfully. May 27 18:21:54.534504 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 27 18:21:54.539451 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 27 18:21:54.539546 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 27 18:21:54.697429 systemd[1]: sysroot-boot.service: Deactivated successfully. May 27 18:21:54.697664 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 27 18:21:54.701073 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 27 18:21:54.703063 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 27 18:21:54.703191 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 27 18:21:54.709910 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 27 18:21:54.744164 systemd[1]: Switching root. May 27 18:21:54.816887 systemd-journald[214]: Journal stopped May 27 18:21:56.526450 systemd-journald[214]: Received SIGTERM from PID 1 (systemd). May 27 18:21:56.526501 kernel: SELinux: policy capability network_peer_controls=1 May 27 18:21:56.526522 kernel: SELinux: policy capability open_perms=1 May 27 18:21:56.526538 kernel: SELinux: policy capability extended_socket_class=1 May 27 18:21:56.526551 kernel: SELinux: policy capability always_check_network=0 May 27 18:21:56.526563 kernel: SELinux: policy capability cgroup_seclabel=1 May 27 18:21:56.526575 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 27 18:21:56.526587 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 27 18:21:56.526598 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 27 18:21:56.526610 kernel: SELinux: policy capability userspace_initial_context=0 May 27 18:21:56.526624 kernel: audit: type=1403 audit(1748370115.243:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 27 18:21:56.526636 systemd[1]: Successfully loaded SELinux policy in 93.325ms. May 27 18:21:56.526657 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 24.686ms. 
May 27 18:21:56.526672 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 27 18:21:56.527065 systemd[1]: Detected virtualization kvm. May 27 18:21:56.527083 systemd[1]: Detected architecture x86-64. May 27 18:21:56.527096 systemd[1]: Detected first boot. May 27 18:21:56.527110 systemd[1]: Hostname set to . May 27 18:21:56.527127 systemd[1]: Initializing machine ID from VM UUID. May 27 18:21:56.527140 zram_generator::config[1061]: No configuration found. May 27 18:21:56.527154 kernel: Guest personality initialized and is inactive May 27 18:21:56.527166 kernel: VMCI host device registered (name=vmci, major=10, minor=125) May 27 18:21:56.527177 kernel: Initialized host personality May 27 18:21:56.527189 kernel: NET: Registered PF_VSOCK protocol family May 27 18:21:56.527201 systemd[1]: Populated /etc with preset unit settings. May 27 18:21:56.527215 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. May 27 18:21:56.527227 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 27 18:21:56.527243 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 27 18:21:56.527255 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 27 18:21:56.527268 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 27 18:21:56.527281 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 27 18:21:56.527294 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 27 18:21:56.527307 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 27 18:21:56.527324 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 27 18:21:56.527338 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 27 18:21:56.527356 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 27 18:21:56.527369 systemd[1]: Created slice user.slice - User and Session Slice. May 27 18:21:56.527382 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 27 18:21:56.527396 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 27 18:21:56.527409 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 27 18:21:56.527423 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 27 18:21:56.527438 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 27 18:21:56.527452 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 27 18:21:56.527465 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... May 27 18:21:56.527478 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 27 18:21:56.527491 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 27 18:21:56.527504 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. 
May 27 18:21:56.527517 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 27 18:21:56.527530 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 27 18:21:56.527544 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 27 18:21:56.527560 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 27 18:21:56.527574 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 27 18:21:56.527587 systemd[1]: Reached target slices.target - Slice Units. May 27 18:21:56.527599 systemd[1]: Reached target swap.target - Swaps. May 27 18:21:56.527612 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 27 18:21:56.527625 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 27 18:21:56.527638 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. May 27 18:21:56.527651 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 27 18:21:56.527664 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 27 18:21:56.527677 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 27 18:21:56.529575 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 27 18:21:56.529594 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 27 18:21:56.529608 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 27 18:21:56.529622 systemd[1]: Mounting media.mount - External Media Directory... May 27 18:21:56.529635 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 27 18:21:56.529648 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 27 18:21:56.529661 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 27 18:21:56.529674 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 27 18:21:56.529713 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 27 18:21:56.529727 systemd[1]: Reached target machines.target - Containers. May 27 18:21:56.529756 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 27 18:21:56.529770 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 27 18:21:56.529783 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 27 18:21:56.529795 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 27 18:21:56.529808 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 27 18:21:56.529821 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 27 18:21:56.529834 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 27 18:21:56.529849 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 27 18:21:56.529861 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 27 18:21:56.529874 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). 
May 27 18:21:56.529887 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 27 18:21:56.529899 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 27 18:21:56.529911 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 27 18:21:56.529924 systemd[1]: Stopped systemd-fsck-usr.service. May 27 18:21:56.529937 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 27 18:21:56.529952 systemd[1]: Starting systemd-journald.service - Journal Service... May 27 18:21:56.529967 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 27 18:21:56.529980 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 27 18:21:56.529993 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 27 18:21:56.530005 kernel: loop: module loaded May 27 18:21:56.530018 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... May 27 18:21:56.530032 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 27 18:21:56.530045 systemd[1]: verity-setup.service: Deactivated successfully. May 27 18:21:56.530057 systemd[1]: Stopped verity-setup.service. May 27 18:21:56.530071 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 27 18:21:56.530084 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 27 18:21:56.530102 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 27 18:21:56.530115 systemd[1]: Mounted media.mount - External Media Directory. May 27 18:21:56.530127 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 27 18:21:56.530141 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 27 18:21:56.530154 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 27 18:21:56.530166 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 27 18:21:56.530179 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 27 18:21:56.530191 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 27 18:21:56.530206 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 27 18:21:56.530219 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 27 18:21:56.530231 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 27 18:21:56.530244 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 27 18:21:56.530257 systemd[1]: modprobe@loop.service: Deactivated successfully. May 27 18:21:56.530274 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 27 18:21:56.530287 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 27 18:21:56.530300 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 27 18:21:56.530336 systemd-journald[1148]: Collecting audit messages is disabled. May 27 18:21:56.530367 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 27 18:21:56.530381 systemd[1]: Reached target network-pre.target - Preparation for Network. 
May 27 18:21:56.530395 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 27 18:21:56.530409 systemd-journald[1148]: Journal started May 27 18:21:56.530436 systemd-journald[1148]: Runtime Journal (/run/log/journal/d688174c4769490fbcb704462e53bf67) is 8M, max 78.5M, 70.5M free. May 27 18:21:56.129451 systemd[1]: Queued start job for default target multi-user.target. May 27 18:21:56.135985 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. May 27 18:21:56.136447 systemd[1]: systemd-journald.service: Deactivated successfully. May 27 18:21:56.543477 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 27 18:21:56.543546 systemd[1]: Reached target local-fs.target - Local File Systems. May 27 18:21:56.543564 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. May 27 18:21:56.544811 kernel: fuse: init (API version 7.41) May 27 18:21:56.559737 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 27 18:21:56.562719 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 27 18:21:56.581713 kernel: ACPI: bus type drm_connector registered May 27 18:21:56.585358 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 27 18:21:56.592733 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 27 18:21:56.592810 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 27 18:21:56.599709 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 27 18:21:56.615707 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 27 18:21:56.626724 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 27 18:21:56.634349 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 27 18:21:56.634414 systemd[1]: Started systemd-journald.service - Journal Service. May 27 18:21:56.638701 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 27 18:21:56.639432 systemd[1]: modprobe@drm.service: Deactivated successfully. May 27 18:21:56.640108 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 27 18:21:56.641968 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 27 18:21:56.642118 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 27 18:21:56.642878 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. May 27 18:21:56.643673 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 27 18:21:56.647365 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 27 18:21:56.654801 kernel: loop0: detected capacity change from 0 to 8 May 27 18:21:56.660160 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 27 18:21:56.671485 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 27 18:21:56.673801 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 27 18:21:56.677867 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... 
May 27 18:21:56.683001 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... May 27 18:21:56.692710 kernel: loop1: detected capacity change from 0 to 221472 May 27 18:21:56.711055 systemd-journald[1148]: Time spent on flushing to /var/log/journal/d688174c4769490fbcb704462e53bf67 is 34.245ms for 982 entries. May 27 18:21:56.711055 systemd-journald[1148]: System Journal (/var/log/journal/d688174c4769490fbcb704462e53bf67) is 8M, max 584.8M, 576.8M free. May 27 18:21:56.807278 systemd-journald[1148]: Received client request to flush runtime journal. May 27 18:21:56.715107 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 27 18:21:56.747866 systemd-tmpfiles[1180]: ACLs are not supported, ignoring. May 27 18:21:56.747880 systemd-tmpfiles[1180]: ACLs are not supported, ignoring. May 27 18:21:56.754737 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 27 18:21:56.758427 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 27 18:21:56.806295 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. May 27 18:21:56.808628 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 27 18:21:56.881768 kernel: loop2: detected capacity change from 0 to 113872 May 27 18:21:56.885564 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 27 18:21:56.891968 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 27 18:21:56.940891 systemd-tmpfiles[1219]: ACLs are not supported, ignoring. May 27 18:21:56.941181 systemd-tmpfiles[1219]: ACLs are not supported, ignoring. May 27 18:21:56.947159 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 27 18:21:56.964710 kernel: loop3: detected capacity change from 0 to 146240 May 27 18:21:57.045714 kernel: loop4: detected capacity change from 0 to 8 May 27 18:21:57.048740 kernel: loop5: detected capacity change from 0 to 221472 May 27 18:21:57.138347 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 27 18:21:57.146461 kernel: loop6: detected capacity change from 0 to 113872 May 27 18:21:57.142036 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 27 18:21:57.157052 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 27 18:21:57.190744 kernel: loop7: detected capacity change from 0 to 146240 May 27 18:21:57.252549 (sd-merge)[1225]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'. May 27 18:21:57.254550 (sd-merge)[1225]: Merged extensions into '/usr'. May 27 18:21:57.259300 systemd[1]: Reload requested from client PID 1179 ('systemd-sysext') (unit systemd-sysext.service)... May 27 18:21:57.259320 systemd[1]: Reloading... May 27 18:21:57.361867 zram_generator::config[1251]: No configuration found. May 27 18:21:57.522479 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 27 18:21:57.626781 systemd[1]: Reloading finished in 367 ms. May 27 18:21:57.646924 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 27 18:21:57.675528 systemd[1]: Starting ensure-sysext.service... 
May 27 18:21:57.682955 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 27 18:21:57.735393 systemd-tmpfiles[1310]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. May 27 18:21:57.735739 systemd-tmpfiles[1310]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. May 27 18:21:57.736123 systemd-tmpfiles[1310]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 27 18:21:57.736129 systemd[1]: Reload requested from client PID 1309 ('systemctl') (unit ensure-sysext.service)... May 27 18:21:57.736154 systemd[1]: Reloading... May 27 18:21:57.736516 systemd-tmpfiles[1310]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 27 18:21:57.737353 systemd-tmpfiles[1310]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 27 18:21:57.737719 systemd-tmpfiles[1310]: ACLs are not supported, ignoring. May 27 18:21:57.737853 systemd-tmpfiles[1310]: ACLs are not supported, ignoring. May 27 18:21:57.751373 systemd-tmpfiles[1310]: Detected autofs mount point /boot during canonicalization of boot. May 27 18:21:57.751385 systemd-tmpfiles[1310]: Skipping /boot May 27 18:21:57.772243 systemd-tmpfiles[1310]: Detected autofs mount point /boot during canonicalization of boot. May 27 18:21:57.773730 systemd-tmpfiles[1310]: Skipping /boot May 27 18:21:57.839713 zram_generator::config[1338]: No configuration found. May 27 18:21:58.040300 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 27 18:21:58.151286 systemd[1]: Reloading finished in 414 ms. May 27 18:21:58.166758 ldconfig[1175]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 27 18:21:58.165991 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 27 18:21:58.174831 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 27 18:21:58.183280 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 27 18:21:58.191913 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 27 18:21:58.194287 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 27 18:21:58.197725 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 27 18:21:58.202953 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 27 18:21:58.206882 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 27 18:21:58.211944 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 27 18:21:58.224499 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 27 18:21:58.224741 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 27 18:21:58.227945 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 27 18:21:58.232057 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 27 18:21:58.238005 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
May 27 18:21:58.239842 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 27 18:21:58.239981 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 27 18:21:58.243164 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 27 18:21:58.244742 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 27 18:21:58.247273 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 27 18:21:58.247910 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 27 18:21:58.248105 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 27 18:21:58.248212 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 27 18:21:58.248327 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 27 18:21:58.256924 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 27 18:21:58.257235 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 27 18:21:58.264908 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 27 18:21:58.266410 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 27 18:21:58.266632 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 27 18:21:58.267110 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 27 18:21:58.270524 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 27 18:21:58.277520 systemd[1]: Finished ensure-sysext.service. May 27 18:21:58.279102 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 27 18:21:58.280676 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 27 18:21:58.280916 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 27 18:21:58.282091 systemd[1]: modprobe@loop.service: Deactivated successfully. May 27 18:21:58.282256 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 27 18:21:58.290543 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 27 18:21:58.298047 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... May 27 18:21:58.302750 systemd[1]: Starting systemd-update-done.service - Update is Completed... 
May 27 18:21:58.303811 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 27 18:21:58.304989 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 27 18:21:58.305938 systemd[1]: modprobe@drm.service: Deactivated successfully. May 27 18:21:58.306753 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 27 18:21:58.309328 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 27 18:21:58.344775 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 27 18:21:58.346528 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 27 18:21:58.351761 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 27 18:21:58.353088 systemd-udevd[1400]: Using default interface naming scheme 'v255'. May 27 18:21:58.355666 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 27 18:21:58.369831 augenrules[1442]: No rules May 27 18:21:58.372305 systemd[1]: audit-rules.service: Deactivated successfully. May 27 18:21:58.373006 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 27 18:21:58.447286 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 27 18:21:58.448013 systemd[1]: Reached target time-set.target - System Time Set. May 27 18:21:58.516911 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 27 18:21:58.527111 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 27 18:21:58.546924 systemd-resolved[1399]: Positive Trust Anchors: May 27 18:21:58.546950 systemd-resolved[1399]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 27 18:21:58.547056 systemd-resolved[1399]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 27 18:21:58.559760 systemd-resolved[1399]: Using system hostname 'ci-4344-0-0-3-6dd1c807ec.novalocal'. May 27 18:21:58.562780 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 27 18:21:58.565133 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 27 18:21:58.568528 systemd[1]: Reached target sysinit.target - System Initialization. May 27 18:21:58.570033 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 27 18:21:58.571452 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 27 18:21:58.572840 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. May 27 18:21:58.576191 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 27 18:21:58.577332 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. 
May 27 18:21:58.578380 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 27 18:21:58.579041 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 27 18:21:58.579077 systemd[1]: Reached target paths.target - Path Units. May 27 18:21:58.579532 systemd[1]: Reached target timers.target - Timer Units. May 27 18:21:58.582870 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 27 18:21:58.586069 systemd[1]: Starting docker.socket - Docker Socket for the API... May 27 18:21:58.589627 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). May 27 18:21:58.590450 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). May 27 18:21:58.591980 systemd[1]: Reached target ssh-access.target - SSH Access Available. May 27 18:21:58.600942 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 27 18:21:58.602544 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. May 27 18:21:58.609160 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 27 18:21:58.617516 systemd[1]: Reached target sockets.target - Socket Units. May 27 18:21:58.618172 systemd[1]: Reached target basic.target - Basic System. May 27 18:21:58.619331 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 27 18:21:58.619366 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 27 18:21:58.621100 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... May 27 18:21:58.623789 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 27 18:21:58.628924 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 27 18:21:58.634941 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 27 18:21:58.642982 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 27 18:21:58.643555 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 27 18:21:58.649315 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... May 27 18:21:58.655770 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 27 18:21:58.659900 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 27 18:21:58.666729 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev May 27 18:21:58.672184 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 27 18:21:58.678215 jq[1487]: false May 27 18:21:58.678781 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 27 18:21:58.689733 systemd[1]: Starting systemd-logind.service - User Login Management... May 27 18:21:58.692457 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 27 18:21:58.694488 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 27 18:21:58.700980 systemd[1]: Starting update-engine.service - Update Engine... 
May 27 18:21:58.704910 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 27 18:21:58.707754 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 27 18:21:58.708669 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 27 18:21:58.708879 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 27 18:21:58.727082 oslogin_cache_refresh[1489]: Refreshing passwd entry cache May 27 18:21:58.724490 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 27 18:21:58.729128 google_oslogin_nss_cache[1489]: oslogin_cache_refresh[1489]: Refreshing passwd entry cache May 27 18:21:58.726779 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 27 18:21:58.739763 jq[1504]: true May 27 18:21:58.745720 google_oslogin_nss_cache[1489]: oslogin_cache_refresh[1489]: Failure getting users, quitting May 27 18:21:58.745720 google_oslogin_nss_cache[1489]: oslogin_cache_refresh[1489]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. May 27 18:21:58.745720 google_oslogin_nss_cache[1489]: oslogin_cache_refresh[1489]: Refreshing group entry cache May 27 18:21:58.745553 oslogin_cache_refresh[1489]: Failure getting users, quitting May 27 18:21:58.745572 oslogin_cache_refresh[1489]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. May 27 18:21:58.745622 oslogin_cache_refresh[1489]: Refreshing group entry cache May 27 18:21:58.748183 oslogin_cache_refresh[1489]: Failure getting groups, quitting May 27 18:21:58.748870 google_oslogin_nss_cache[1489]: oslogin_cache_refresh[1489]: Failure getting groups, quitting May 27 18:21:58.748870 google_oslogin_nss_cache[1489]: oslogin_cache_refresh[1489]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. May 27 18:21:58.748191 oslogin_cache_refresh[1489]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. May 27 18:21:58.751657 systemd[1]: google-oslogin-cache.service: Deactivated successfully. May 27 18:21:58.751972 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. May 27 18:21:58.759031 update_engine[1501]: I20250527 18:21:58.758965 1501 main.cc:92] Flatcar Update Engine starting May 27 18:21:58.776058 jq[1514]: true May 27 18:21:58.784996 systemd[1]: motdgen.service: Deactivated successfully. May 27 18:21:58.785252 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 27 18:21:58.786740 extend-filesystems[1488]: Found loop4 May 27 18:21:58.786740 extend-filesystems[1488]: Found loop5 May 27 18:21:58.786740 extend-filesystems[1488]: Found loop6 May 27 18:21:58.786740 extend-filesystems[1488]: Found loop7 May 27 18:21:58.786740 extend-filesystems[1488]: Found vda May 27 18:21:58.786740 extend-filesystems[1488]: Found vda1 May 27 18:21:58.786740 extend-filesystems[1488]: Found vda2 May 27 18:21:58.786740 extend-filesystems[1488]: Found vda3 May 27 18:21:58.786740 extend-filesystems[1488]: Found usr May 27 18:21:58.786740 extend-filesystems[1488]: Found vda4 May 27 18:21:58.797665 extend-filesystems[1488]: Found vda6 May 27 18:21:58.797665 extend-filesystems[1488]: Found vda7 May 27 18:21:58.797665 extend-filesystems[1488]: Found vda9 May 27 18:21:58.788662 systemd[1]: extend-filesystems.service: Deactivated successfully. 
May 27 18:21:58.801381 tar[1505]: linux-amd64/helm May 27 18:21:58.789047 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 27 18:21:58.802444 dbus-daemon[1482]: [system] SELinux support is enabled May 27 18:21:58.802815 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 27 18:21:58.810259 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. May 27 18:21:58.811259 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 27 18:21:58.811289 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 27 18:21:58.812854 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 27 18:21:58.812878 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 27 18:21:58.817005 systemd[1]: Started update-engine.service - Update Engine. May 27 18:21:58.817291 update_engine[1501]: I20250527 18:21:58.817232 1501 update_check_scheduler.cc:74] Next update check in 2m33s May 27 18:21:58.825585 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 27 18:21:58.922169 sshd_keygen[1519]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 27 18:21:58.963907 bash[1538]: Updated "/home/core/.ssh/authorized_keys" May 27 18:21:58.965415 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 27 18:21:58.970012 systemd[1]: Starting sshkeys.service... May 27 18:21:58.987722 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 May 27 18:21:59.001461 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. May 27 18:21:59.005007 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... May 27 18:21:59.025205 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 27 18:21:59.048527 systemd[1]: Starting issuegen.service - Generate /run/issue... May 27 18:21:59.063863 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev May 27 18:21:59.073216 systemd[1]: issuegen.service: Deactivated successfully. May 27 18:21:59.073588 systemd[1]: Finished issuegen.service - Generate /run/issue. May 27 18:21:59.082132 kernel: ACPI: button: Power Button [PWRF] May 27 18:21:59.094517 kernel: mousedev: PS/2 mouse device common for all mice May 27 18:21:59.100167 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 May 27 18:21:59.100412 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD May 27 18:21:59.106170 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 27 18:21:59.110399 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... 
May 27 18:21:59.135252 locksmithd[1524]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 27 18:21:59.150912 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 May 27 18:21:59.161723 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console May 27 18:21:59.165220 kernel: Console: switching to colour dummy device 80x25 May 27 18:21:59.166734 kernel: [drm] features: -virgl +edid -resource_blob -host_visible May 27 18:21:59.166762 kernel: [drm] features: -context_init May 27 18:21:59.171879 kernel: [drm] number of scanouts: 1 May 27 18:21:59.173708 kernel: [drm] number of cap sets: 0 May 27 18:21:59.175726 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0 May 27 18:21:59.188099 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 27 18:21:59.195388 systemd-logind[1498]: New seat seat0. May 27 18:21:59.196369 systemd[1]: Started systemd-logind.service - User Login Management. May 27 18:21:59.245963 systemd-networkd[1457]: lo: Link UP May 27 18:21:59.246675 systemd-networkd[1457]: lo: Gained carrier May 27 18:21:59.249964 systemd-networkd[1457]: Enumeration completed May 27 18:21:59.250092 systemd[1]: Started systemd-networkd.service - Network Configuration. May 27 18:21:59.250519 systemd-networkd[1457]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 27 18:21:59.250527 systemd-networkd[1457]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 27 18:21:59.251284 systemd[1]: Reached target network.target - Network. May 27 18:21:59.251640 systemd-networkd[1457]: eth0: Link UP May 27 18:21:59.252045 systemd-networkd[1457]: eth0: Gained carrier May 27 18:21:59.252114 systemd-networkd[1457]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 27 18:21:59.255907 systemd[1]: Starting containerd.service - containerd container runtime... May 27 18:21:59.258858 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... May 27 18:21:59.261418 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 27 18:21:59.264746 systemd-networkd[1457]: eth0: DHCPv4 address 172.24.4.229/24, gateway 172.24.4.1 acquired from 172.24.4.1 May 27 18:21:59.265839 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 27 18:21:59.267279 systemd-timesyncd[1422]: Network configuration changed, trying to establish connection. May 27 18:21:59.317818 (ntainerd)[1585]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 27 18:21:59.336634 systemd-logind[1498]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 27 18:21:59.367072 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 27 18:21:59.372285 systemd[1]: Started getty@tty1.service - Getty on tty1. May 27 18:21:59.375394 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. May 27 18:21:59.375918 systemd[1]: Reached target getty.target - Login Prompts. May 27 18:21:59.378271 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. May 27 18:21:59.398076 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
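The update_engine and locksmithd entries above show Flatcar's update machinery coming up with the default reboot strategy ("reboot") and a scheduled update check. As a hedged sketch (standard Flatcar tooling is assumed; the strategy value written below is purely illustrative), an operator could inspect and adjust this with:

    # Query the update engine's current state.
    update_engine_client -status

    # locksmithd reads its reboot strategy from /etc/flatcar/update.conf;
    # e.g. to defer reboots for manual coordination:
    echo 'REBOOT_STRATEGY=off' | sudo tee -a /etc/flatcar/update.conf
    sudo systemctl restart locksmithd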
May 27 18:21:59.447807 systemd-logind[1498]: Watching system buttons on /dev/input/event2 (Power Button) May 27 18:21:59.461940 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 27 18:21:59.462308 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 27 18:21:59.466042 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. May 27 18:21:59.476293 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 27 18:21:59.654995 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 27 18:21:59.657437 systemd[1]: Started sshd@0-172.24.4.229:22-172.24.4.1:52472.service - OpenSSH per-connection server daemon (172.24.4.1:52472). May 27 18:21:59.710574 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 27 18:21:59.746883 containerd[1585]: time="2025-05-27T18:21:59Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 May 27 18:21:59.749709 containerd[1585]: time="2025-05-27T18:21:59.749589790Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 May 27 18:21:59.763737 containerd[1585]: time="2025-05-27T18:21:59.762961138Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="12.182µs" May 27 18:21:59.763737 containerd[1585]: time="2025-05-27T18:21:59.763011623Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 May 27 18:21:59.763737 containerd[1585]: time="2025-05-27T18:21:59.763038573Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 May 27 18:21:59.763737 containerd[1585]: time="2025-05-27T18:21:59.763293862Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 May 27 18:21:59.763737 containerd[1585]: time="2025-05-27T18:21:59.763322085Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 May 27 18:21:59.763737 containerd[1585]: time="2025-05-27T18:21:59.763358594Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 27 18:21:59.763737 containerd[1585]: time="2025-05-27T18:21:59.763439806Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 27 18:21:59.763737 containerd[1585]: time="2025-05-27T18:21:59.763460926Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 27 18:21:59.764203 containerd[1585]: time="2025-05-27T18:21:59.764177520Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 27 18:21:59.764277 containerd[1585]: time="2025-05-27T18:21:59.764259904Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 27 18:21:59.764348 containerd[1585]: time="2025-05-27T18:21:59.764331047Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" 
id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 27 18:21:59.764413 containerd[1585]: time="2025-05-27T18:21:59.764397943Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 May 27 18:21:59.764554 containerd[1585]: time="2025-05-27T18:21:59.764536152Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 May 27 18:21:59.764839 containerd[1585]: time="2025-05-27T18:21:59.764820435Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 27 18:21:59.764919 containerd[1585]: time="2025-05-27T18:21:59.764901908Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 27 18:21:59.764971 containerd[1585]: time="2025-05-27T18:21:59.764958925Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 May 27 18:21:59.765052 containerd[1585]: time="2025-05-27T18:21:59.765037442Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 May 27 18:21:59.765544 containerd[1585]: time="2025-05-27T18:21:59.765513445Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 May 27 18:21:59.765650 containerd[1585]: time="2025-05-27T18:21:59.765633741Z" level=info msg="metadata content store policy set" policy=shared May 27 18:21:59.777215 containerd[1585]: time="2025-05-27T18:21:59.775540511Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 May 27 18:21:59.777215 containerd[1585]: time="2025-05-27T18:21:59.775579985Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 May 27 18:21:59.777215 containerd[1585]: time="2025-05-27T18:21:59.775596846Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 May 27 18:21:59.777215 containerd[1585]: time="2025-05-27T18:21:59.775609460Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 May 27 18:21:59.777215 containerd[1585]: time="2025-05-27T18:21:59.775620892Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 May 27 18:21:59.777215 containerd[1585]: time="2025-05-27T18:21:59.775631602Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 May 27 18:21:59.777215 containerd[1585]: time="2025-05-27T18:21:59.775643955Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 May 27 18:21:59.777215 containerd[1585]: time="2025-05-27T18:21:59.775656739Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 May 27 18:21:59.777215 containerd[1585]: time="2025-05-27T18:21:59.775672037Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 May 27 18:21:59.777215 containerd[1585]: time="2025-05-27T18:21:59.775700341Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 May 27 18:21:59.777215 containerd[1585]: time="2025-05-27T18:21:59.775711932Z" level=info msg="loading plugin" 
id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 May 27 18:21:59.777215 containerd[1585]: time="2025-05-27T18:21:59.775724636Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 May 27 18:21:59.777215 containerd[1585]: time="2025-05-27T18:21:59.775836566Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 May 27 18:21:59.777215 containerd[1585]: time="2025-05-27T18:21:59.775861573Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 May 27 18:21:59.777745 containerd[1585]: time="2025-05-27T18:21:59.775883775Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 May 27 18:21:59.777745 containerd[1585]: time="2025-05-27T18:21:59.775896458Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 May 27 18:21:59.777745 containerd[1585]: time="2025-05-27T18:21:59.775912869Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 May 27 18:21:59.777745 containerd[1585]: time="2025-05-27T18:21:59.775924030Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 May 27 18:21:59.777745 containerd[1585]: time="2025-05-27T18:21:59.775934670Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 May 27 18:21:59.777745 containerd[1585]: time="2025-05-27T18:21:59.775945150Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 May 27 18:21:59.777745 containerd[1585]: time="2025-05-27T18:21:59.775956160Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 May 27 18:21:59.777745 containerd[1585]: time="2025-05-27T18:21:59.775967051Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 May 27 18:21:59.777745 containerd[1585]: time="2025-05-27T18:21:59.775982810Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 May 27 18:21:59.777745 containerd[1585]: time="2025-05-27T18:21:59.776038675Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" May 27 18:21:59.777745 containerd[1585]: time="2025-05-27T18:21:59.776052691Z" level=info msg="Start snapshots syncer" May 27 18:21:59.777745 containerd[1585]: time="2025-05-27T18:21:59.776071497Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 May 27 18:21:59.778133 containerd[1585]: time="2025-05-27T18:21:59.776679066Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" May 27 18:21:59.778133 containerd[1585]: time="2025-05-27T18:21:59.776760399Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 May 27 18:21:59.778317 containerd[1585]: time="2025-05-27T18:21:59.776848845Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 May 27 18:21:59.778317 containerd[1585]: time="2025-05-27T18:21:59.776939324Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 May 27 18:21:59.778317 containerd[1585]: time="2025-05-27T18:21:59.776962157Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 May 27 18:21:59.778317 containerd[1585]: time="2025-05-27T18:21:59.776980932Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 May 27 18:21:59.778317 containerd[1585]: time="2025-05-27T18:21:59.776993496Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 May 27 18:21:59.778317 containerd[1585]: time="2025-05-27T18:21:59.777005388Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 May 27 18:21:59.778317 containerd[1585]: time="2025-05-27T18:21:59.777016810Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 May 27 18:21:59.778317 containerd[1585]: time="2025-05-27T18:21:59.777027079Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 May 27 18:21:59.778317 containerd[1585]: time="2025-05-27T18:21:59.777054801Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 May 27 18:21:59.778317 containerd[1585]: 
time="2025-05-27T18:21:59.777067775Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 May 27 18:21:59.778317 containerd[1585]: time="2025-05-27T18:21:59.777082503Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 May 27 18:21:59.779462 containerd[1585]: time="2025-05-27T18:21:59.779439714Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 27 18:21:59.779604 containerd[1585]: time="2025-05-27T18:21:59.779584485Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 27 18:21:59.779696 containerd[1585]: time="2025-05-27T18:21:59.779666539Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 27 18:21:59.779767 containerd[1585]: time="2025-05-27T18:21:59.779751068Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 27 18:21:59.779912 containerd[1585]: time="2025-05-27T18:21:59.779832380Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 May 27 18:21:59.779912 containerd[1585]: time="2025-05-27T18:21:59.779849633Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 May 27 18:21:59.779912 containerd[1585]: time="2025-05-27T18:21:59.779861134Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 May 27 18:21:59.779912 containerd[1585]: time="2025-05-27T18:21:59.779877345Z" level=info msg="runtime interface created" May 27 18:21:59.779912 containerd[1585]: time="2025-05-27T18:21:59.779882825Z" level=info msg="created NRI interface" May 27 18:21:59.780704 containerd[1585]: time="2025-05-27T18:21:59.779890419Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 May 27 18:21:59.780704 containerd[1585]: time="2025-05-27T18:21:59.780069375Z" level=info msg="Connect containerd service" May 27 18:21:59.780704 containerd[1585]: time="2025-05-27T18:21:59.780124648Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 27 18:21:59.780922 containerd[1585]: time="2025-05-27T18:21:59.780901525Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 27 18:21:59.811811 tar[1505]: linux-amd64/LICENSE May 27 18:21:59.811811 tar[1505]: linux-amd64/README.md May 27 18:21:59.825991 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
May 27 18:22:00.034766 containerd[1585]: time="2025-05-27T18:22:00.033768377Z" level=info msg="Start subscribing containerd event" May 27 18:22:00.034766 containerd[1585]: time="2025-05-27T18:22:00.033854749Z" level=info msg="Start recovering state" May 27 18:22:00.034766 containerd[1585]: time="2025-05-27T18:22:00.034021251Z" level=info msg="Start event monitor" May 27 18:22:00.034766 containerd[1585]: time="2025-05-27T18:22:00.034050306Z" level=info msg="Start cni network conf syncer for default" May 27 18:22:00.034766 containerd[1585]: time="2025-05-27T18:22:00.034067588Z" level=info msg="Start streaming server" May 27 18:22:00.034766 containerd[1585]: time="2025-05-27T18:22:00.034085953Z" level=info msg="Registered namespace \"k8s.io\" with NRI" May 27 18:22:00.034766 containerd[1585]: time="2025-05-27T18:22:00.034102794Z" level=info msg="runtime interface starting up..." May 27 18:22:00.034766 containerd[1585]: time="2025-05-27T18:22:00.034116981Z" level=info msg="starting plugins..." May 27 18:22:00.034766 containerd[1585]: time="2025-05-27T18:22:00.034140966Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" May 27 18:22:00.035075 containerd[1585]: time="2025-05-27T18:22:00.034935346Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 27 18:22:00.038467 containerd[1585]: time="2025-05-27T18:22:00.035240047Z" level=info msg=serving... address=/run/containerd/containerd.sock May 27 18:22:00.038467 containerd[1585]: time="2025-05-27T18:22:00.038308793Z" level=info msg="containerd successfully booted in 0.291872s" May 27 18:22:00.037930 systemd[1]: Started containerd.service - containerd container runtime. May 27 18:22:00.483057 systemd-networkd[1457]: eth0: Gained IPv6LL May 27 18:22:00.484796 systemd-timesyncd[1422]: Network configuration changed, trying to establish connection. May 27 18:22:00.487052 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 27 18:22:00.490734 systemd[1]: Reached target network-online.target - Network is Online. May 27 18:22:00.495369 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 18:22:00.500286 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 27 18:22:00.553647 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 27 18:22:00.636746 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev May 27 18:22:00.651833 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev May 27 18:22:00.823413 sshd[1608]: Accepted publickey for core from 172.24.4.1 port 52472 ssh2: RSA SHA256:D6TRbEcVa7iPKsFjaVce7/kj7tFlICZCjA4HGLeEPTo May 27 18:22:00.828560 sshd-session[1608]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 18:22:00.846900 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 27 18:22:00.852332 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 27 18:22:00.877996 systemd-logind[1498]: New session 1 of user core. May 27 18:22:00.890239 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 27 18:22:00.898028 systemd[1]: Starting user@500.service - User Manager for UID 500... May 27 18:22:00.915326 (systemd)[1647]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 27 18:22:00.918455 systemd-logind[1498]: New session c1 of user core. May 27 18:22:01.086532 systemd[1647]: Queued start job for default target default.target. 
May 27 18:22:01.092587 systemd[1647]: Created slice app.slice - User Application Slice. May 27 18:22:01.092614 systemd[1647]: Reached target paths.target - Paths. May 27 18:22:01.092653 systemd[1647]: Reached target timers.target - Timers. May 27 18:22:01.094819 systemd[1647]: Starting dbus.socket - D-Bus User Message Bus Socket... May 27 18:22:01.105930 systemd[1647]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 27 18:22:01.107537 systemd[1647]: Reached target sockets.target - Sockets. May 27 18:22:01.107593 systemd[1647]: Reached target basic.target - Basic System. May 27 18:22:01.107644 systemd[1647]: Reached target default.target - Main User Target. May 27 18:22:01.107674 systemd[1647]: Startup finished in 181ms. May 27 18:22:01.107703 systemd[1]: Started user@500.service - User Manager for UID 500. May 27 18:22:01.116096 systemd[1]: Started session-1.scope - Session 1 of User core. May 27 18:22:01.445171 systemd[1]: Started sshd@1-172.24.4.229:22-172.24.4.1:52488.service - OpenSSH per-connection server daemon (172.24.4.1:52488). May 27 18:22:02.403307 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 27 18:22:02.427288 (kubelet)[1665]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 27 18:22:02.659759 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev May 27 18:22:02.673364 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev May 27 18:22:02.952341 sshd[1658]: Accepted publickey for core from 172.24.4.1 port 52488 ssh2: RSA SHA256:D6TRbEcVa7iPKsFjaVce7/kj7tFlICZCjA4HGLeEPTo May 27 18:22:02.956102 sshd-session[1658]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 18:22:02.968924 systemd-logind[1498]: New session 2 of user core. May 27 18:22:02.977463 systemd[1]: Started session-2.scope - Session 2 of User core. May 27 18:22:03.554116 sshd[1672]: Connection closed by 172.24.4.1 port 52488 May 27 18:22:03.555022 sshd-session[1658]: pam_unix(sshd:session): session closed for user core May 27 18:22:03.570591 systemd[1]: sshd@1-172.24.4.229:22-172.24.4.1:52488.service: Deactivated successfully. May 27 18:22:03.573424 systemd[1]: session-2.scope: Deactivated successfully. May 27 18:22:03.578568 systemd-logind[1498]: Session 2 logged out. Waiting for processes to exit. May 27 18:22:03.583249 systemd[1]: Started sshd@2-172.24.4.229:22-172.24.4.1:35768.service - OpenSSH per-connection server daemon (172.24.4.1:35768). May 27 18:22:03.588717 systemd-logind[1498]: Removed session 2. May 27 18:22:03.702839 kubelet[1665]: E0527 18:22:03.702758 1665 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 27 18:22:03.708158 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 27 18:22:03.708480 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 27 18:22:03.709251 systemd[1]: kubelet.service: Consumed 2.086s CPU time, 265.2M memory peak. 
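The kubelet exits with status 1 here because /var/lib/kubelet/config.yaml does not exist yet; that file is normally written by kubeadm init or kubeadm join, so until the node is bootstrapped the unit keeps failing and systemd keeps scheduling the restarts whose counters climb in the later entries. Purely to illustrate the file the error message refers to, a minimal KubeletConfiguration sketch with assumed values (in practice kubeadm generates this):

    # Illustrative only; normally produced by kubeadm, not written by hand.
    cat <<'EOF' | sudo tee /var/lib/kubelet/config.yaml >/dev/null
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    staticPodPath: /etc/kubernetes/manifests
    EOF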
May 27 18:22:04.464397 login[1591]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying May 27 18:22:04.466136 login[1590]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) May 27 18:22:04.477098 systemd-logind[1498]: New session 4 of user core. May 27 18:22:04.487276 systemd[1]: Started session-4.scope - Session 4 of User core. May 27 18:22:04.981510 sshd[1679]: Accepted publickey for core from 172.24.4.1 port 35768 ssh2: RSA SHA256:D6TRbEcVa7iPKsFjaVce7/kj7tFlICZCjA4HGLeEPTo May 27 18:22:04.984391 sshd-session[1679]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 18:22:04.996949 systemd-logind[1498]: New session 5 of user core. May 27 18:22:05.005104 systemd[1]: Started session-5.scope - Session 5 of User core. May 27 18:22:05.469439 login[1591]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) May 27 18:22:05.481788 systemd-logind[1498]: New session 3 of user core. May 27 18:22:05.494176 systemd[1]: Started session-3.scope - Session 3 of User core. May 27 18:22:05.740629 sshd[1698]: Connection closed by 172.24.4.1 port 35768 May 27 18:22:05.740956 sshd-session[1679]: pam_unix(sshd:session): session closed for user core May 27 18:22:05.747999 systemd[1]: sshd@2-172.24.4.229:22-172.24.4.1:35768.service: Deactivated successfully. May 27 18:22:05.752607 systemd[1]: session-5.scope: Deactivated successfully. May 27 18:22:05.755503 systemd-logind[1498]: Session 5 logged out. Waiting for processes to exit. May 27 18:22:05.758783 systemd-logind[1498]: Removed session 5. May 27 18:22:06.690754 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev May 27 18:22:06.699827 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev May 27 18:22:06.708843 coreos-metadata[1480]: May 27 18:22:06.708 WARN failed to locate config-drive, using the metadata service API instead May 27 18:22:06.716616 coreos-metadata[1553]: May 27 18:22:06.716 WARN failed to locate config-drive, using the metadata service API instead May 27 18:22:06.758646 coreos-metadata[1480]: May 27 18:22:06.758 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 May 27 18:22:06.762334 coreos-metadata[1553]: May 27 18:22:06.762 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 May 27 18:22:07.014296 coreos-metadata[1553]: May 27 18:22:07.014 INFO Fetch successful May 27 18:22:07.014500 coreos-metadata[1553]: May 27 18:22:07.014 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 May 27 18:22:07.029219 coreos-metadata[1553]: May 27 18:22:07.029 INFO Fetch successful May 27 18:22:07.035751 unknown[1553]: wrote ssh authorized keys file for user: core May 27 18:22:07.042616 coreos-metadata[1480]: May 27 18:22:07.042 INFO Fetch successful May 27 18:22:07.042616 coreos-metadata[1480]: May 27 18:22:07.042 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 May 27 18:22:07.057567 coreos-metadata[1480]: May 27 18:22:07.057 INFO Fetch successful May 27 18:22:07.057567 coreos-metadata[1480]: May 27 18:22:07.057 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 May 27 18:22:07.070386 coreos-metadata[1480]: May 27 18:22:07.070 INFO Fetch successful May 27 18:22:07.070386 coreos-metadata[1480]: May 27 18:22:07.070 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 May 27 18:22:07.083094 coreos-metadata[1480]: May 27 18:22:07.083 INFO Fetch successful 
May 27 18:22:07.083094 coreos-metadata[1480]: May 27 18:22:07.083 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 May 27 18:22:07.085381 update-ssh-keys[1715]: Updated "/home/core/.ssh/authorized_keys" May 27 18:22:07.086218 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). May 27 18:22:07.089962 systemd[1]: Finished sshkeys.service. May 27 18:22:07.094301 coreos-metadata[1480]: May 27 18:22:07.094 INFO Fetch successful May 27 18:22:07.094301 coreos-metadata[1480]: May 27 18:22:07.094 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 May 27 18:22:07.107992 coreos-metadata[1480]: May 27 18:22:07.107 INFO Fetch successful May 27 18:22:07.156406 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. May 27 18:22:07.158489 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 27 18:22:07.159240 systemd[1]: Reached target multi-user.target - Multi-User System. May 27 18:22:07.159974 systemd[1]: Startup finished in 3.814s (kernel) + 15.470s (initrd) + 12.006s (userspace) = 31.291s. May 27 18:22:13.728426 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 27 18:22:13.736005 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 18:22:14.239818 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 27 18:22:14.253208 (kubelet)[1731]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 27 18:22:14.336926 kubelet[1731]: E0527 18:22:14.336617 1731 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 27 18:22:14.341887 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 27 18:22:14.342434 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 27 18:22:14.343797 systemd[1]: kubelet.service: Consumed 447ms CPU time, 108.2M memory peak. May 27 18:22:15.772562 systemd[1]: Started sshd@3-172.24.4.229:22-172.24.4.1:39948.service - OpenSSH per-connection server daemon (172.24.4.1:39948). May 27 18:22:17.333304 sshd[1740]: Accepted publickey for core from 172.24.4.1 port 39948 ssh2: RSA SHA256:D6TRbEcVa7iPKsFjaVce7/kj7tFlICZCjA4HGLeEPTo May 27 18:22:17.340410 sshd-session[1740]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 18:22:17.365905 systemd-logind[1498]: New session 6 of user core. May 27 18:22:17.381327 systemd[1]: Started session-6.scope - Session 6 of User core. May 27 18:22:18.087465 sshd[1742]: Connection closed by 172.24.4.1 port 39948 May 27 18:22:18.111601 sshd-session[1740]: pam_unix(sshd:session): session closed for user core May 27 18:22:18.121920 systemd[1]: sshd@3-172.24.4.229:22-172.24.4.1:39948.service: Deactivated successfully. May 27 18:22:18.124166 systemd[1]: session-6.scope: Deactivated successfully. May 27 18:22:18.129335 systemd-logind[1498]: Session 6 logged out. Waiting for processes to exit. May 27 18:22:18.134099 systemd[1]: Started sshd@4-172.24.4.229:22-172.24.4.1:39952.service - OpenSSH per-connection server daemon (172.24.4.1:39952). 
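The repeated kernel messages "config-2: Can't lookup blockdev" and the coreos-metadata WARN entries show the agent failing to find an OpenStack config drive and falling back to the link-local metadata service, from which it fetches the hostname, instance data, and the SSH keys installed above. A sketch of querying the same endpoints by hand, using the URLs visible in the log:

    # EC2-compatible endpoint used for the SSH public keys:
    curl -s http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key
    # OpenStack-native metadata document used for host metadata:
    curl -s http://169.254.169.254/openstack/2012-08-10/meta_data.json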
May 27 18:22:18.139322 systemd-logind[1498]: Removed session 6. May 27 18:22:19.523355 sshd[1748]: Accepted publickey for core from 172.24.4.1 port 39952 ssh2: RSA SHA256:D6TRbEcVa7iPKsFjaVce7/kj7tFlICZCjA4HGLeEPTo May 27 18:22:19.531397 sshd-session[1748]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 18:22:19.547815 systemd-logind[1498]: New session 7 of user core. May 27 18:22:19.573458 systemd[1]: Started session-7.scope - Session 7 of User core. May 27 18:22:20.309959 sshd[1750]: Connection closed by 172.24.4.1 port 39952 May 27 18:22:20.312430 sshd-session[1748]: pam_unix(sshd:session): session closed for user core May 27 18:22:20.342361 systemd[1]: sshd@4-172.24.4.229:22-172.24.4.1:39952.service: Deactivated successfully. May 27 18:22:20.347118 systemd[1]: session-7.scope: Deactivated successfully. May 27 18:22:20.349596 systemd-logind[1498]: Session 7 logged out. Waiting for processes to exit. May 27 18:22:20.361490 systemd[1]: Started sshd@5-172.24.4.229:22-172.24.4.1:39962.service - OpenSSH per-connection server daemon (172.24.4.1:39962). May 27 18:22:20.366124 systemd-logind[1498]: Removed session 7. May 27 18:22:21.838670 sshd[1756]: Accepted publickey for core from 172.24.4.1 port 39962 ssh2: RSA SHA256:D6TRbEcVa7iPKsFjaVce7/kj7tFlICZCjA4HGLeEPTo May 27 18:22:21.842280 sshd-session[1756]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 18:22:21.856803 systemd-logind[1498]: New session 8 of user core. May 27 18:22:21.868055 systemd[1]: Started session-8.scope - Session 8 of User core. May 27 18:22:22.471628 sshd[1758]: Connection closed by 172.24.4.1 port 39962 May 27 18:22:22.473066 sshd-session[1756]: pam_unix(sshd:session): session closed for user core May 27 18:22:22.488158 systemd[1]: sshd@5-172.24.4.229:22-172.24.4.1:39962.service: Deactivated successfully. May 27 18:22:22.491932 systemd[1]: session-8.scope: Deactivated successfully. May 27 18:22:22.495633 systemd-logind[1498]: Session 8 logged out. Waiting for processes to exit. May 27 18:22:22.499994 systemd[1]: Started sshd@6-172.24.4.229:22-172.24.4.1:39970.service - OpenSSH per-connection server daemon (172.24.4.1:39970). May 27 18:22:22.503417 systemd-logind[1498]: Removed session 8. May 27 18:22:24.062585 sshd[1764]: Accepted publickey for core from 172.24.4.1 port 39970 ssh2: RSA SHA256:D6TRbEcVa7iPKsFjaVce7/kj7tFlICZCjA4HGLeEPTo May 27 18:22:24.065430 sshd-session[1764]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 18:22:24.078794 systemd-logind[1498]: New session 9 of user core. May 27 18:22:24.087041 systemd[1]: Started session-9.scope - Session 9 of User core. May 27 18:22:24.475462 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 27 18:22:24.479581 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 18:22:24.573990 sudo[1770]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 27 18:22:24.574921 sudo[1770]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 27 18:22:24.602376 sudo[1770]: pam_unix(sudo:session): session closed for user root May 27 18:22:24.855547 sshd[1766]: Connection closed by 172.24.4.1 port 39970 May 27 18:22:24.855085 sshd-session[1764]: pam_unix(sshd:session): session closed for user core May 27 18:22:24.872447 systemd[1]: sshd@6-172.24.4.229:22-172.24.4.1:39970.service: Deactivated successfully. 
May 27 18:22:24.879753 systemd[1]: session-9.scope: Deactivated successfully. May 27 18:22:24.887754 systemd-logind[1498]: Session 9 logged out. Waiting for processes to exit. May 27 18:22:24.891524 systemd[1]: Started sshd@7-172.24.4.229:22-172.24.4.1:57124.service - OpenSSH per-connection server daemon (172.24.4.1:57124). May 27 18:22:24.899235 systemd-logind[1498]: Removed session 9. May 27 18:22:25.017785 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 27 18:22:25.028113 (kubelet)[1783]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 27 18:22:25.108353 kubelet[1783]: E0527 18:22:25.107780 1783 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 27 18:22:25.112348 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 27 18:22:25.112626 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 27 18:22:25.113677 systemd[1]: kubelet.service: Consumed 555ms CPU time, 110M memory peak. May 27 18:22:26.061150 sshd[1776]: Accepted publickey for core from 172.24.4.1 port 57124 ssh2: RSA SHA256:D6TRbEcVa7iPKsFjaVce7/kj7tFlICZCjA4HGLeEPTo May 27 18:22:26.064307 sshd-session[1776]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 18:22:26.077806 systemd-logind[1498]: New session 10 of user core. May 27 18:22:26.086059 systemd[1]: Started session-10.scope - Session 10 of User core. May 27 18:22:26.530180 sudo[1792]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 27 18:22:26.530910 sudo[1792]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 27 18:22:26.544230 sudo[1792]: pam_unix(sudo:session): session closed for user root May 27 18:22:26.558265 sudo[1791]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 27 18:22:26.559859 sudo[1791]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 27 18:22:26.612496 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 27 18:22:26.745404 augenrules[1814]: No rules May 27 18:22:26.746061 systemd[1]: audit-rules.service: Deactivated successfully. May 27 18:22:26.746286 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 27 18:22:26.748326 sudo[1791]: pam_unix(sudo:session): session closed for user root May 27 18:22:26.930064 sshd[1790]: Connection closed by 172.24.4.1 port 57124 May 27 18:22:26.932605 sshd-session[1776]: pam_unix(sshd:session): session closed for user core May 27 18:22:26.947968 systemd[1]: sshd@7-172.24.4.229:22-172.24.4.1:57124.service: Deactivated successfully. May 27 18:22:26.952258 systemd[1]: session-10.scope: Deactivated successfully. May 27 18:22:26.954478 systemd-logind[1498]: Session 10 logged out. Waiting for processes to exit. May 27 18:22:26.959609 systemd[1]: Started sshd@8-172.24.4.229:22-172.24.4.1:57134.service - OpenSSH per-connection server daemon (172.24.4.1:57134). May 27 18:22:26.963898 systemd-logind[1498]: Removed session 10. 
May 27 18:22:28.248479 sshd[1823]: Accepted publickey for core from 172.24.4.1 port 57134 ssh2: RSA SHA256:D6TRbEcVa7iPKsFjaVce7/kj7tFlICZCjA4HGLeEPTo May 27 18:22:28.251528 sshd-session[1823]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 18:22:28.264811 systemd-logind[1498]: New session 11 of user core. May 27 18:22:28.276057 systemd[1]: Started session-11.scope - Session 11 of User core. May 27 18:22:28.609569 sudo[1826]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 27 18:22:28.611426 sudo[1826]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 27 18:22:29.414366 systemd[1]: Starting docker.service - Docker Application Container Engine... May 27 18:22:29.432286 (dockerd)[1844]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 27 18:22:30.000664 dockerd[1844]: time="2025-05-27T18:22:30.000574104Z" level=info msg="Starting up" May 27 18:22:30.001898 dockerd[1844]: time="2025-05-27T18:22:30.001608894Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" May 27 18:22:30.106437 dockerd[1844]: time="2025-05-27T18:22:30.106361548Z" level=info msg="Loading containers: start." May 27 18:22:30.129776 kernel: Initializing XFRM netlink socket May 27 18:22:30.467542 systemd-timesyncd[1422]: Network configuration changed, trying to establish connection. May 27 18:22:30.543455 systemd-networkd[1457]: docker0: Link UP May 27 18:22:30.558332 dockerd[1844]: time="2025-05-27T18:22:30.558258390Z" level=info msg="Loading containers: done." May 27 18:22:30.577799 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2730313271-merged.mount: Deactivated successfully. May 27 18:22:30.586309 dockerd[1844]: time="2025-05-27T18:22:30.586248446Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 27 18:22:30.586435 dockerd[1844]: time="2025-05-27T18:22:30.586406843Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 May 27 18:22:30.586674 dockerd[1844]: time="2025-05-27T18:22:30.586631614Z" level=info msg="Initializing buildkit" May 27 18:22:30.636871 dockerd[1844]: time="2025-05-27T18:22:30.636761473Z" level=info msg="Completed buildkit initialization" May 27 18:22:30.652620 dockerd[1844]: time="2025-05-27T18:22:30.651946542Z" level=info msg="Daemon has completed initialization" May 27 18:22:30.652260 systemd[1]: Started docker.service - Docker Application Container Engine. May 27 18:22:30.653803 dockerd[1844]: time="2025-05-27T18:22:30.652839407Z" level=info msg="API listen on /run/docker.sock" May 27 18:22:30.675050 systemd-timesyncd[1422]: Contacted time server 173.11.101.155:123 (2.flatcar.pool.ntp.org). May 27 18:22:30.675205 systemd-timesyncd[1422]: Initial clock synchronization to Tue 2025-05-27 18:22:31.050967 UTC. May 27 18:22:32.487789 containerd[1585]: time="2025-05-27T18:22:32.486637367Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.9\"" May 27 18:22:33.341351 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4195943016.mount: Deactivated successfully. 
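The dockerd warning about "Not using native diff for overlay2" above is informational: with CONFIG_OVERLAY_FS_REDIRECT_DIR enabled, Docker falls back to its archive-based differ, which mainly affects image-build performance. A small sketch for confirming the storage driver state on such a host (standard docker CLI assumed):

    # Shows the active storage driver and, under its status,
    # whether "Native Overlay Diff" is in use.
    sudo docker info | grep -A8 'Storage Driver'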
May 27 18:22:35.049752 containerd[1585]: time="2025-05-27T18:22:35.049045079Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 18:22:35.051486 containerd[1585]: time="2025-05-27T18:22:35.050206614Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.9: active requests=0, bytes read=28078853" May 27 18:22:35.053025 containerd[1585]: time="2025-05-27T18:22:35.052980176Z" level=info msg="ImageCreate event name:\"sha256:0c19e0eafbdfffa1317cf99a16478265a4cd746ef677de27b0be6a8b515f36b1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 18:22:35.057042 containerd[1585]: time="2025-05-27T18:22:35.056974591Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5b68f0df22013422dc8fb9ddfcff513eb6fc92f9dbf8aae41555c895efef5a20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 18:22:35.058292 containerd[1585]: time="2025-05-27T18:22:35.058204867Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.9\" with image id \"sha256:0c19e0eafbdfffa1317cf99a16478265a4cd746ef677de27b0be6a8b515f36b1\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:5b68f0df22013422dc8fb9ddfcff513eb6fc92f9dbf8aae41555c895efef5a20\", size \"28075645\" in 2.570881507s" May 27 18:22:35.058356 containerd[1585]: time="2025-05-27T18:22:35.058299664Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.9\" returns image reference \"sha256:0c19e0eafbdfffa1317cf99a16478265a4cd746ef677de27b0be6a8b515f36b1\"" May 27 18:22:35.059805 containerd[1585]: time="2025-05-27T18:22:35.059768959Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.9\"" May 27 18:22:35.226995 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. May 27 18:22:35.241099 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 18:22:35.708524 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 27 18:22:35.727360 (kubelet)[2110]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 27 18:22:35.934535 kubelet[2110]: E0527 18:22:35.934340 2110 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 27 18:22:35.940202 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 27 18:22:35.940585 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 27 18:22:35.942265 systemd[1]: kubelet.service: Consumed 513ms CPU time, 110.4M memory peak. 
May 27 18:22:37.642093 containerd[1585]: time="2025-05-27T18:22:37.641982746Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 18:22:37.644650 containerd[1585]: time="2025-05-27T18:22:37.644616679Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.9: active requests=0, bytes read=24713530" May 27 18:22:37.645876 containerd[1585]: time="2025-05-27T18:22:37.645831481Z" level=info msg="ImageCreate event name:\"sha256:6aa3d581404ae6ae5dc355cb750aaedec843d2c99263d28fce50277e8e2a6ec2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 18:22:37.649888 containerd[1585]: time="2025-05-27T18:22:37.649840966Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:be9e7987d323b38a12e28436cff6d6ec6fc31ffdd3ea11eaa9d74852e9d31248\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 18:22:37.650733 containerd[1585]: time="2025-05-27T18:22:37.650513724Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.9\" with image id \"sha256:6aa3d581404ae6ae5dc355cb750aaedec843d2c99263d28fce50277e8e2a6ec2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:be9e7987d323b38a12e28436cff6d6ec6fc31ffdd3ea11eaa9d74852e9d31248\", size \"26315362\" in 2.590528629s" May 27 18:22:37.650733 containerd[1585]: time="2025-05-27T18:22:37.650564354Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.9\" returns image reference \"sha256:6aa3d581404ae6ae5dc355cb750aaedec843d2c99263d28fce50277e8e2a6ec2\"" May 27 18:22:37.651407 containerd[1585]: time="2025-05-27T18:22:37.651276221Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.9\"" May 27 18:22:39.618183 containerd[1585]: time="2025-05-27T18:22:39.618073180Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 18:22:39.620633 containerd[1585]: time="2025-05-27T18:22:39.620562283Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.9: active requests=0, bytes read=18784319" May 27 18:22:39.621664 containerd[1585]: time="2025-05-27T18:22:39.621616450Z" level=info msg="ImageCreate event name:\"sha256:737ed3eafaf27a28ea9e13b736011bfed5bd349785ac6bc220b34eaf4adc51e3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 18:22:39.625238 containerd[1585]: time="2025-05-27T18:22:39.625184796Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:eb358c7346bb17ab2c639c3ff8ab76a147dec7ae609f5c0c2800233e42253ed1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 18:22:39.627176 containerd[1585]: time="2025-05-27T18:22:39.626647353Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.9\" with image id \"sha256:737ed3eafaf27a28ea9e13b736011bfed5bd349785ac6bc220b34eaf4adc51e3\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:eb358c7346bb17ab2c639c3ff8ab76a147dec7ae609f5c0c2800233e42253ed1\", size \"20386169\" in 1.975306343s" May 27 18:22:39.627176 containerd[1585]: time="2025-05-27T18:22:39.626724422Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.9\" returns image reference \"sha256:737ed3eafaf27a28ea9e13b736011bfed5bd349785ac6bc220b34eaf4adc51e3\"" May 27 18:22:39.628657 
containerd[1585]: time="2025-05-27T18:22:39.628629241Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.9\"" May 27 18:22:41.017195 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1126804840.mount: Deactivated successfully. May 27 18:22:41.592994 containerd[1585]: time="2025-05-27T18:22:41.592940984Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 18:22:41.594248 containerd[1585]: time="2025-05-27T18:22:41.594208133Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.9: active requests=0, bytes read=30355631" May 27 18:22:41.595449 containerd[1585]: time="2025-05-27T18:22:41.595398188Z" level=info msg="ImageCreate event name:\"sha256:11a47a71ed3ecf643e15a11990daed3b656279449ba9344db0b54652c4723578\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 18:22:41.598168 containerd[1585]: time="2025-05-27T18:22:41.598058105Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:fdf026cf2434537e499e9c739d189ca8fc57101d929ac5ccd8e24f979a9738c1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 18:22:41.598810 containerd[1585]: time="2025-05-27T18:22:41.598569374Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.9\" with image id \"sha256:11a47a71ed3ecf643e15a11990daed3b656279449ba9344db0b54652c4723578\", repo tag \"registry.k8s.io/kube-proxy:v1.31.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:fdf026cf2434537e499e9c739d189ca8fc57101d929ac5ccd8e24f979a9738c1\", size \"30354642\" in 1.969780325s" May 27 18:22:41.598810 containerd[1585]: time="2025-05-27T18:22:41.598616050Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.9\" returns image reference \"sha256:11a47a71ed3ecf643e15a11990daed3b656279449ba9344db0b54652c4723578\"" May 27 18:22:41.599648 containerd[1585]: time="2025-05-27T18:22:41.599603776Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" May 27 18:22:42.326684 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1552186555.mount: Deactivated successfully. 
May 27 18:22:43.836678 containerd[1585]: time="2025-05-27T18:22:43.836554786Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 18:22:43.839231 containerd[1585]: time="2025-05-27T18:22:43.838781577Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565249" May 27 18:22:43.841110 containerd[1585]: time="2025-05-27T18:22:43.840992075Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 18:22:43.848839 containerd[1585]: time="2025-05-27T18:22:43.847904376Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 18:22:43.851151 containerd[1585]: time="2025-05-27T18:22:43.851088594Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.251221582s" May 27 18:22:43.852014 containerd[1585]: time="2025-05-27T18:22:43.851419930Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" May 27 18:22:43.853726 containerd[1585]: time="2025-05-27T18:22:43.853629588Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 27 18:22:43.884781 update_engine[1501]: I20250527 18:22:43.883457 1501 update_attempter.cc:509] Updating boot flags... May 27 18:22:44.490984 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4158817282.mount: Deactivated successfully. 
May 27 18:22:44.507750 containerd[1585]: time="2025-05-27T18:22:44.507574320Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 27 18:22:44.511155 containerd[1585]: time="2025-05-27T18:22:44.511074536Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" May 27 18:22:44.513725 containerd[1585]: time="2025-05-27T18:22:44.513563009Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 27 18:22:44.519025 containerd[1585]: time="2025-05-27T18:22:44.518836647Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 27 18:22:44.521680 containerd[1585]: time="2025-05-27T18:22:44.520769720Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 667.023644ms" May 27 18:22:44.521680 containerd[1585]: time="2025-05-27T18:22:44.520841221Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" May 27 18:22:44.522366 containerd[1585]: time="2025-05-27T18:22:44.522296212Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" May 27 18:22:45.239543 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount645587490.mount: Deactivated successfully. May 27 18:22:45.975674 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. May 27 18:22:45.981243 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 18:22:46.487247 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 27 18:22:46.504283 (kubelet)[2263]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 27 18:22:46.729895 kubelet[2263]: E0527 18:22:46.729765 2263 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 27 18:22:46.738181 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 27 18:22:46.738541 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 27 18:22:46.740109 systemd[1]: kubelet.service: Consumed 435ms CPU time, 108.9M memory peak. 
May 27 18:22:48.436087 containerd[1585]: time="2025-05-27T18:22:48.435942365Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 18:22:48.438560 containerd[1585]: time="2025-05-27T18:22:48.438183509Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780021" May 27 18:22:48.439918 containerd[1585]: time="2025-05-27T18:22:48.439875782Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 18:22:48.443799 containerd[1585]: time="2025-05-27T18:22:48.443757443Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 18:22:48.445047 containerd[1585]: time="2025-05-27T18:22:48.445013544Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 3.922658869s" May 27 18:22:48.445102 containerd[1585]: time="2025-05-27T18:22:48.445050682Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" May 27 18:22:52.939304 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 27 18:22:52.939785 systemd[1]: kubelet.service: Consumed 435ms CPU time, 108.9M memory peak. May 27 18:22:52.945529 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 18:22:53.009575 systemd[1]: Reload requested from client PID 2303 ('systemctl') (unit session-11.scope)... May 27 18:22:53.009778 systemd[1]: Reloading... May 27 18:22:53.126815 zram_generator::config[2348]: No configuration found. May 27 18:22:53.270101 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 27 18:22:53.417440 systemd[1]: Reloading finished in 407 ms. May 27 18:22:53.760515 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 27 18:22:53.760806 systemd[1]: kubelet.service: Failed with result 'signal'. May 27 18:22:53.761633 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 27 18:22:53.761853 systemd[1]: kubelet.service: Consumed 213ms CPU time, 87.5M memory peak. May 27 18:22:53.767885 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 18:22:54.322944 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 27 18:22:54.352843 (kubelet)[2412]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 27 18:22:54.450502 kubelet[2412]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 27 18:22:54.452713 kubelet[2412]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. 
Image garbage collector will get sandbox image information from CRI. May 27 18:22:54.452713 kubelet[2412]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 27 18:22:54.452713 kubelet[2412]: I0527 18:22:54.451524 2412 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 27 18:22:55.101176 kubelet[2412]: I0527 18:22:55.101109 2412 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" May 27 18:22:55.101176 kubelet[2412]: I0527 18:22:55.101168 2412 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 27 18:22:55.101818 kubelet[2412]: I0527 18:22:55.101782 2412 server.go:934] "Client rotation is on, will bootstrap in background" May 27 18:22:55.133007 kubelet[2412]: E0527 18:22:55.132961 2412 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.24.4.229:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.24.4.229:6443: connect: connection refused" logger="UnhandledError" May 27 18:22:55.138714 kubelet[2412]: I0527 18:22:55.138015 2412 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 27 18:22:55.152630 kubelet[2412]: I0527 18:22:55.152588 2412 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 27 18:22:55.163874 kubelet[2412]: I0527 18:22:55.163847 2412 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 27 18:22:55.164149 kubelet[2412]: I0527 18:22:55.164135 2412 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 27 18:22:55.164431 kubelet[2412]: I0527 18:22:55.164397 2412 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 27 18:22:55.164790 kubelet[2412]: I0527 18:22:55.164501 2412 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4344-0-0-3-6dd1c807ec.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 27 18:22:55.165088 kubelet[2412]: I0527 18:22:55.165073 2412 topology_manager.go:138] "Creating topology manager with none policy" May 27 18:22:55.165167 kubelet[2412]: I0527 18:22:55.165158 2412 container_manager_linux.go:300] "Creating device plugin manager" May 27 18:22:55.165380 kubelet[2412]: I0527 18:22:55.165365 2412 state_mem.go:36] "Initialized new in-memory state store" May 27 18:22:55.169237 kubelet[2412]: I0527 18:22:55.169221 2412 kubelet.go:408] "Attempting to sync node with API server" May 27 18:22:55.169334 kubelet[2412]: I0527 18:22:55.169322 2412 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 27 18:22:55.169453 kubelet[2412]: I0527 18:22:55.169443 2412 kubelet.go:314] "Adding apiserver pod source" May 27 18:22:55.169574 kubelet[2412]: I0527 18:22:55.169561 2412 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 27 18:22:55.174967 kubelet[2412]: W0527 18:22:55.174573 2412 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.24.4.229:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4344-0-0-3-6dd1c807ec.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.229:6443: connect: connection refused May 27 18:22:55.174967 kubelet[2412]: E0527 18:22:55.174904 2412 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://172.24.4.229:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4344-0-0-3-6dd1c807ec.novalocal&limit=500&resourceVersion=0\": dial tcp 172.24.4.229:6443: connect: connection refused" logger="UnhandledError" May 27 18:22:55.175162 kubelet[2412]: I0527 18:22:55.175130 2412 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 27 18:22:55.176229 kubelet[2412]: I0527 18:22:55.176194 2412 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 27 18:22:55.176400 kubelet[2412]: W0527 18:22:55.176378 2412 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 27 18:22:55.180568 kubelet[2412]: W0527 18:22:55.180512 2412 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.24.4.229:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.24.4.229:6443: connect: connection refused May 27 18:22:55.180765 kubelet[2412]: E0527 18:22:55.180744 2412 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.24.4.229:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.24.4.229:6443: connect: connection refused" logger="UnhandledError" May 27 18:22:55.183805 kubelet[2412]: I0527 18:22:55.183779 2412 server.go:1274] "Started kubelet" May 27 18:22:55.187010 kubelet[2412]: I0527 18:22:55.186962 2412 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 27 18:22:55.193152 kubelet[2412]: E0527 18:22:55.190410 2412 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.24.4.229:6443/api/v1/namespaces/default/events\": dial tcp 172.24.4.229:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4344-0-0-3-6dd1c807ec.novalocal.184375693bb5f5eb default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4344-0-0-3-6dd1c807ec.novalocal,UID:ci-4344-0-0-3-6dd1c807ec.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4344-0-0-3-6dd1c807ec.novalocal,},FirstTimestamp:2025-05-27 18:22:55.183681003 +0000 UTC m=+0.820229292,LastTimestamp:2025-05-27 18:22:55.183681003 +0000 UTC m=+0.820229292,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4344-0-0-3-6dd1c807ec.novalocal,}" May 27 18:22:55.196529 kubelet[2412]: I0527 18:22:55.196244 2412 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 27 18:22:55.196959 kubelet[2412]: I0527 18:22:55.196929 2412 volume_manager.go:289] "Starting Kubelet Volume Manager" May 27 18:22:55.197432 kubelet[2412]: E0527 18:22:55.197396 2412 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4344-0-0-3-6dd1c807ec.novalocal\" not found" May 27 18:22:55.198234 kubelet[2412]: I0527 18:22:55.198215 2412 server.go:449] "Adding debug handlers to kubelet server" May 27 18:22:55.199529 kubelet[2412]: I0527 18:22:55.199510 2412 desired_state_of_world_populator.go:147] "Desired state populator starts to run" May 27 18:22:55.201476 kubelet[2412]: I0527 18:22:55.201433 2412 reconciler.go:26] "Reconciler: start to sync 
state" May 27 18:22:55.206299 kubelet[2412]: I0527 18:22:55.205668 2412 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 27 18:22:55.206299 kubelet[2412]: I0527 18:22:55.205993 2412 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 27 18:22:55.206299 kubelet[2412]: W0527 18:22:55.206141 2412 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.24.4.229:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.229:6443: connect: connection refused May 27 18:22:55.206299 kubelet[2412]: E0527 18:22:55.206236 2412 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.24.4.229:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.24.4.229:6443: connect: connection refused" logger="UnhandledError" May 27 18:22:55.207066 kubelet[2412]: I0527 18:22:55.207022 2412 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 27 18:22:55.207271 kubelet[2412]: I0527 18:22:55.207254 2412 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 27 18:22:55.214187 kubelet[2412]: E0527 18:22:55.213595 2412 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.229:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344-0-0-3-6dd1c807ec.novalocal?timeout=10s\": dial tcp 172.24.4.229:6443: connect: connection refused" interval="200ms" May 27 18:22:55.222515 kubelet[2412]: I0527 18:22:55.220645 2412 factory.go:221] Registration of the containerd container factory successfully May 27 18:22:55.222515 kubelet[2412]: I0527 18:22:55.220673 2412 factory.go:221] Registration of the systemd container factory successfully May 27 18:22:55.222515 kubelet[2412]: E0527 18:22:55.220724 2412 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 27 18:22:55.232451 kubelet[2412]: I0527 18:22:55.232406 2412 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 27 18:22:55.233761 kubelet[2412]: I0527 18:22:55.233745 2412 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 27 18:22:55.233933 kubelet[2412]: I0527 18:22:55.233919 2412 status_manager.go:217] "Starting to sync pod status with apiserver" May 27 18:22:55.234048 kubelet[2412]: I0527 18:22:55.234037 2412 kubelet.go:2321] "Starting kubelet main sync loop" May 27 18:22:55.234180 kubelet[2412]: E0527 18:22:55.234148 2412 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 27 18:22:55.238753 kubelet[2412]: W0527 18:22:55.238655 2412 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.24.4.229:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.229:6443: connect: connection refused May 27 18:22:55.238934 kubelet[2412]: E0527 18:22:55.238911 2412 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.24.4.229:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.24.4.229:6443: connect: connection refused" logger="UnhandledError" May 27 18:22:55.252496 kubelet[2412]: I0527 18:22:55.252469 2412 cpu_manager.go:214] "Starting CPU manager" policy="none" May 27 18:22:55.252744 kubelet[2412]: I0527 18:22:55.252731 2412 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 27 18:22:55.252875 kubelet[2412]: I0527 18:22:55.252863 2412 state_mem.go:36] "Initialized new in-memory state store" May 27 18:22:55.259464 kubelet[2412]: I0527 18:22:55.259446 2412 policy_none.go:49] "None policy: Start" May 27 18:22:55.260676 kubelet[2412]: I0527 18:22:55.260646 2412 memory_manager.go:170] "Starting memorymanager" policy="None" May 27 18:22:55.260774 kubelet[2412]: I0527 18:22:55.260756 2412 state_mem.go:35] "Initializing new in-memory state store" May 27 18:22:55.272994 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 27 18:22:55.285391 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 27 18:22:55.298033 kubelet[2412]: E0527 18:22:55.298010 2412 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4344-0-0-3-6dd1c807ec.novalocal\" not found" May 27 18:22:55.301874 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 27 18:22:55.304281 kubelet[2412]: I0527 18:22:55.303807 2412 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 27 18:22:55.304281 kubelet[2412]: I0527 18:22:55.304005 2412 eviction_manager.go:189] "Eviction manager: starting control loop" May 27 18:22:55.304281 kubelet[2412]: I0527 18:22:55.304035 2412 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 27 18:22:55.306559 kubelet[2412]: I0527 18:22:55.306542 2412 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 27 18:22:55.308822 kubelet[2412]: E0527 18:22:55.308800 2412 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4344-0-0-3-6dd1c807ec.novalocal\" not found" May 27 18:22:55.361069 systemd[1]: Created slice kubepods-burstable-pod2d84684fb6fbc47c729f91b5e80ff045.slice - libcontainer container kubepods-burstable-pod2d84684fb6fbc47c729f91b5e80ff045.slice. 
May 27 18:22:55.383082 systemd[1]: Created slice kubepods-burstable-pod853c062c0257438f2171db804bd8c0e8.slice - libcontainer container kubepods-burstable-pod853c062c0257438f2171db804bd8c0e8.slice. May 27 18:22:55.393536 systemd[1]: Created slice kubepods-burstable-pod057cd7ec800838b449697b4d1e8b808c.slice - libcontainer container kubepods-burstable-pod057cd7ec800838b449697b4d1e8b808c.slice. May 27 18:22:55.402923 kubelet[2412]: I0527 18:22:55.402814 2412 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/057cd7ec800838b449697b4d1e8b808c-kubeconfig\") pod \"kube-controller-manager-ci-4344-0-0-3-6dd1c807ec.novalocal\" (UID: \"057cd7ec800838b449697b4d1e8b808c\") " pod="kube-system/kube-controller-manager-ci-4344-0-0-3-6dd1c807ec.novalocal" May 27 18:22:55.402923 kubelet[2412]: I0527 18:22:55.402911 2412 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2d84684fb6fbc47c729f91b5e80ff045-kubeconfig\") pod \"kube-scheduler-ci-4344-0-0-3-6dd1c807ec.novalocal\" (UID: \"2d84684fb6fbc47c729f91b5e80ff045\") " pod="kube-system/kube-scheduler-ci-4344-0-0-3-6dd1c807ec.novalocal" May 27 18:22:55.403221 kubelet[2412]: I0527 18:22:55.402964 2412 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/853c062c0257438f2171db804bd8c0e8-k8s-certs\") pod \"kube-apiserver-ci-4344-0-0-3-6dd1c807ec.novalocal\" (UID: \"853c062c0257438f2171db804bd8c0e8\") " pod="kube-system/kube-apiserver-ci-4344-0-0-3-6dd1c807ec.novalocal" May 27 18:22:55.403221 kubelet[2412]: I0527 18:22:55.403011 2412 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/853c062c0257438f2171db804bd8c0e8-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4344-0-0-3-6dd1c807ec.novalocal\" (UID: \"853c062c0257438f2171db804bd8c0e8\") " pod="kube-system/kube-apiserver-ci-4344-0-0-3-6dd1c807ec.novalocal" May 27 18:22:55.403221 kubelet[2412]: I0527 18:22:55.403059 2412 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/057cd7ec800838b449697b4d1e8b808c-k8s-certs\") pod \"kube-controller-manager-ci-4344-0-0-3-6dd1c807ec.novalocal\" (UID: \"057cd7ec800838b449697b4d1e8b808c\") " pod="kube-system/kube-controller-manager-ci-4344-0-0-3-6dd1c807ec.novalocal" May 27 18:22:55.403221 kubelet[2412]: I0527 18:22:55.403104 2412 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/853c062c0257438f2171db804bd8c0e8-ca-certs\") pod \"kube-apiserver-ci-4344-0-0-3-6dd1c807ec.novalocal\" (UID: \"853c062c0257438f2171db804bd8c0e8\") " pod="kube-system/kube-apiserver-ci-4344-0-0-3-6dd1c807ec.novalocal" May 27 18:22:55.403221 kubelet[2412]: I0527 18:22:55.403153 2412 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/057cd7ec800838b449697b4d1e8b808c-ca-certs\") pod \"kube-controller-manager-ci-4344-0-0-3-6dd1c807ec.novalocal\" (UID: \"057cd7ec800838b449697b4d1e8b808c\") " pod="kube-system/kube-controller-manager-ci-4344-0-0-3-6dd1c807ec.novalocal" May 27 18:22:55.403534 kubelet[2412]: I0527 
18:22:55.403197 2412 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/057cd7ec800838b449697b4d1e8b808c-flexvolume-dir\") pod \"kube-controller-manager-ci-4344-0-0-3-6dd1c807ec.novalocal\" (UID: \"057cd7ec800838b449697b4d1e8b808c\") " pod="kube-system/kube-controller-manager-ci-4344-0-0-3-6dd1c807ec.novalocal" May 27 18:22:55.403534 kubelet[2412]: I0527 18:22:55.403242 2412 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/057cd7ec800838b449697b4d1e8b808c-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4344-0-0-3-6dd1c807ec.novalocal\" (UID: \"057cd7ec800838b449697b4d1e8b808c\") " pod="kube-system/kube-controller-manager-ci-4344-0-0-3-6dd1c807ec.novalocal" May 27 18:22:55.407703 kubelet[2412]: I0527 18:22:55.407589 2412 kubelet_node_status.go:72] "Attempting to register node" node="ci-4344-0-0-3-6dd1c807ec.novalocal" May 27 18:22:55.408809 kubelet[2412]: E0527 18:22:55.408721 2412 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.24.4.229:6443/api/v1/nodes\": dial tcp 172.24.4.229:6443: connect: connection refused" node="ci-4344-0-0-3-6dd1c807ec.novalocal" May 27 18:22:55.415159 kubelet[2412]: E0527 18:22:55.415093 2412 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.229:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344-0-0-3-6dd1c807ec.novalocal?timeout=10s\": dial tcp 172.24.4.229:6443: connect: connection refused" interval="400ms" May 27 18:22:55.614167 kubelet[2412]: I0527 18:22:55.613969 2412 kubelet_node_status.go:72] "Attempting to register node" node="ci-4344-0-0-3-6dd1c807ec.novalocal" May 27 18:22:55.616227 kubelet[2412]: E0527 18:22:55.615515 2412 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.24.4.229:6443/api/v1/nodes\": dial tcp 172.24.4.229:6443: connect: connection refused" node="ci-4344-0-0-3-6dd1c807ec.novalocal" May 27 18:22:55.678757 containerd[1585]: time="2025-05-27T18:22:55.678279693Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4344-0-0-3-6dd1c807ec.novalocal,Uid:2d84684fb6fbc47c729f91b5e80ff045,Namespace:kube-system,Attempt:0,}" May 27 18:22:55.691024 containerd[1585]: time="2025-05-27T18:22:55.690356900Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4344-0-0-3-6dd1c807ec.novalocal,Uid:853c062c0257438f2171db804bd8c0e8,Namespace:kube-system,Attempt:0,}" May 27 18:22:55.714192 containerd[1585]: time="2025-05-27T18:22:55.714102456Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4344-0-0-3-6dd1c807ec.novalocal,Uid:057cd7ec800838b449697b4d1e8b808c,Namespace:kube-system,Attempt:0,}" May 27 18:22:55.802709 containerd[1585]: time="2025-05-27T18:22:55.802582044Z" level=info msg="connecting to shim 2e4a8bbefb92b10fcb94f5c9dd0c07aac88e77d585eeaad5ad3ebd5c3d6d93fe" address="unix:///run/containerd/s/703bab65a2defc3c0eae9ffa21cad12ec2395f874915b3939c8f35b2516012ee" namespace=k8s.io protocol=ttrpc version=3 May 27 18:22:55.808154 containerd[1585]: time="2025-05-27T18:22:55.808076798Z" level=info msg="connecting to shim 24ef6ca9a2764ee91f762a086bbdb668c59a38f4f0ad06736f0de75be90d51f3" address="unix:///run/containerd/s/14c68b3ebeb75594a45a46afc0b14341c41647aeebfd92039eede8bf7a37b9c7" 
namespace=k8s.io protocol=ttrpc version=3 May 27 18:22:55.819902 kubelet[2412]: E0527 18:22:55.819843 2412 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.229:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344-0-0-3-6dd1c807ec.novalocal?timeout=10s\": dial tcp 172.24.4.229:6443: connect: connection refused" interval="800ms" May 27 18:22:55.833912 containerd[1585]: time="2025-05-27T18:22:55.833757714Z" level=info msg="connecting to shim 1b98f61f39d5c2d5e57f513ca1220d733ed1eb1f4cf4ed99311192a6b08a46e5" address="unix:///run/containerd/s/a99774333ab686d37221c78d3981eacb9814f0b1f4fedfc9a4cfe8d21940318e" namespace=k8s.io protocol=ttrpc version=3 May 27 18:22:55.853944 systemd[1]: Started cri-containerd-2e4a8bbefb92b10fcb94f5c9dd0c07aac88e77d585eeaad5ad3ebd5c3d6d93fe.scope - libcontainer container 2e4a8bbefb92b10fcb94f5c9dd0c07aac88e77d585eeaad5ad3ebd5c3d6d93fe. May 27 18:22:55.861565 systemd[1]: Started cri-containerd-24ef6ca9a2764ee91f762a086bbdb668c59a38f4f0ad06736f0de75be90d51f3.scope - libcontainer container 24ef6ca9a2764ee91f762a086bbdb668c59a38f4f0ad06736f0de75be90d51f3. May 27 18:22:55.875953 systemd[1]: Started cri-containerd-1b98f61f39d5c2d5e57f513ca1220d733ed1eb1f4cf4ed99311192a6b08a46e5.scope - libcontainer container 1b98f61f39d5c2d5e57f513ca1220d733ed1eb1f4cf4ed99311192a6b08a46e5. May 27 18:22:55.962394 containerd[1585]: time="2025-05-27T18:22:55.962231821Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4344-0-0-3-6dd1c807ec.novalocal,Uid:853c062c0257438f2171db804bd8c0e8,Namespace:kube-system,Attempt:0,} returns sandbox id \"24ef6ca9a2764ee91f762a086bbdb668c59a38f4f0ad06736f0de75be90d51f3\"" May 27 18:22:55.966329 containerd[1585]: time="2025-05-27T18:22:55.966227683Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4344-0-0-3-6dd1c807ec.novalocal,Uid:2d84684fb6fbc47c729f91b5e80ff045,Namespace:kube-system,Attempt:0,} returns sandbox id \"2e4a8bbefb92b10fcb94f5c9dd0c07aac88e77d585eeaad5ad3ebd5c3d6d93fe\"" May 27 18:22:55.967964 containerd[1585]: time="2025-05-27T18:22:55.967910871Z" level=info msg="CreateContainer within sandbox \"24ef6ca9a2764ee91f762a086bbdb668c59a38f4f0ad06736f0de75be90d51f3\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 27 18:22:55.970357 containerd[1585]: time="2025-05-27T18:22:55.969932497Z" level=info msg="CreateContainer within sandbox \"2e4a8bbefb92b10fcb94f5c9dd0c07aac88e77d585eeaad5ad3ebd5c3d6d93fe\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 27 18:22:55.985763 containerd[1585]: time="2025-05-27T18:22:55.985724513Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4344-0-0-3-6dd1c807ec.novalocal,Uid:057cd7ec800838b449697b4d1e8b808c,Namespace:kube-system,Attempt:0,} returns sandbox id \"1b98f61f39d5c2d5e57f513ca1220d733ed1eb1f4cf4ed99311192a6b08a46e5\"" May 27 18:22:55.989311 containerd[1585]: time="2025-05-27T18:22:55.989246988Z" level=info msg="CreateContainer within sandbox \"1b98f61f39d5c2d5e57f513ca1220d733ed1eb1f4cf4ed99311192a6b08a46e5\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 27 18:22:55.993714 containerd[1585]: time="2025-05-27T18:22:55.993649345Z" level=info msg="Container 1a1b2d49c5973cd69fce25e3e4ba8ef7db6bcdb9991b398a2d5b8433f366265a: CDI devices from CRI Config.CDIDevices: []" May 27 18:22:56.005382 containerd[1585]: time="2025-05-27T18:22:56.005296765Z" level=info msg="Container 
bd467ebb8fdcfd08c6eead50282dec48c1592bf2e730666a8c091b8e57cd7e30: CDI devices from CRI Config.CDIDevices: []" May 27 18:22:56.015839 containerd[1585]: time="2025-05-27T18:22:56.015702959Z" level=info msg="CreateContainer within sandbox \"2e4a8bbefb92b10fcb94f5c9dd0c07aac88e77d585eeaad5ad3ebd5c3d6d93fe\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"1a1b2d49c5973cd69fce25e3e4ba8ef7db6bcdb9991b398a2d5b8433f366265a\"" May 27 18:22:56.017293 containerd[1585]: time="2025-05-27T18:22:56.017247186Z" level=info msg="StartContainer for \"1a1b2d49c5973cd69fce25e3e4ba8ef7db6bcdb9991b398a2d5b8433f366265a\"" May 27 18:22:56.019715 kubelet[2412]: I0527 18:22:56.019474 2412 kubelet_node_status.go:72] "Attempting to register node" node="ci-4344-0-0-3-6dd1c807ec.novalocal" May 27 18:22:56.023399 kubelet[2412]: E0527 18:22:56.020006 2412 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.24.4.229:6443/api/v1/nodes\": dial tcp 172.24.4.229:6443: connect: connection refused" node="ci-4344-0-0-3-6dd1c807ec.novalocal" May 27 18:22:56.023467 containerd[1585]: time="2025-05-27T18:22:56.023130380Z" level=info msg="connecting to shim 1a1b2d49c5973cd69fce25e3e4ba8ef7db6bcdb9991b398a2d5b8433f366265a" address="unix:///run/containerd/s/703bab65a2defc3c0eae9ffa21cad12ec2395f874915b3939c8f35b2516012ee" protocol=ttrpc version=3 May 27 18:22:56.023670 containerd[1585]: time="2025-05-27T18:22:56.023641028Z" level=info msg="Container d8334afa42bcc75ee9142ee63811d0e83f2ba68ad2d092c46eb24065649abd3e: CDI devices from CRI Config.CDIDevices: []" May 27 18:22:56.034081 containerd[1585]: time="2025-05-27T18:22:56.034001240Z" level=info msg="CreateContainer within sandbox \"24ef6ca9a2764ee91f762a086bbdb668c59a38f4f0ad06736f0de75be90d51f3\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"bd467ebb8fdcfd08c6eead50282dec48c1592bf2e730666a8c091b8e57cd7e30\"" May 27 18:22:56.035257 containerd[1585]: time="2025-05-27T18:22:56.035107617Z" level=info msg="StartContainer for \"bd467ebb8fdcfd08c6eead50282dec48c1592bf2e730666a8c091b8e57cd7e30\"" May 27 18:22:56.037874 containerd[1585]: time="2025-05-27T18:22:56.037825597Z" level=info msg="connecting to shim bd467ebb8fdcfd08c6eead50282dec48c1592bf2e730666a8c091b8e57cd7e30" address="unix:///run/containerd/s/14c68b3ebeb75594a45a46afc0b14341c41647aeebfd92039eede8bf7a37b9c7" protocol=ttrpc version=3 May 27 18:22:56.042332 containerd[1585]: time="2025-05-27T18:22:56.042289940Z" level=info msg="CreateContainer within sandbox \"1b98f61f39d5c2d5e57f513ca1220d733ed1eb1f4cf4ed99311192a6b08a46e5\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d8334afa42bcc75ee9142ee63811d0e83f2ba68ad2d092c46eb24065649abd3e\"" May 27 18:22:56.045342 containerd[1585]: time="2025-05-27T18:22:56.045311846Z" level=info msg="StartContainer for \"d8334afa42bcc75ee9142ee63811d0e83f2ba68ad2d092c46eb24065649abd3e\"" May 27 18:22:56.046867 containerd[1585]: time="2025-05-27T18:22:56.046841943Z" level=info msg="connecting to shim d8334afa42bcc75ee9142ee63811d0e83f2ba68ad2d092c46eb24065649abd3e" address="unix:///run/containerd/s/a99774333ab686d37221c78d3981eacb9814f0b1f4fedfc9a4cfe8d21940318e" protocol=ttrpc version=3 May 27 18:22:56.048886 systemd[1]: Started cri-containerd-1a1b2d49c5973cd69fce25e3e4ba8ef7db6bcdb9991b398a2d5b8433f366265a.scope - libcontainer container 1a1b2d49c5973cd69fce25e3e4ba8ef7db6bcdb9991b398a2d5b8433f366265a. 
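The repeated "Failed to ensure lease exists, will retry" errors above carry a doubling interval (200ms, then 400ms, then 800ms) because the API server at 172.24.4.229:6443 is not reachable yet; its static pod is only being started in these very lines. A minimal sketch of that doubling-backoff retry pattern, with a hypothetical TCP probe standing in for the lease request; this is not the kubelet's or client-go's actual implementation:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// ensureWithBackoff retries fn up to attempts times, doubling the wait after
// each failure the way the logged intervals grow (200ms -> 400ms -> 800ms),
// capped at max.
func ensureWithBackoff(fn func() error, initial, max time.Duration, attempts int) error {
	interval := initial
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		fmt.Printf("will retry, interval=%s err=%v\n", interval, err)
		time.Sleep(interval)
		if interval *= 2; interval > max {
			interval = max
		}
	}
	return err
}

func main() {
	// Hypothetical stand-in for the lease request: probe the apiserver port.
	probe := func() error {
		conn, err := net.DialTimeout("tcp", "172.24.4.229:6443", time.Second)
		if err != nil {
			return err // "connect: connection refused" until kube-apiserver is up
		}
		return conn.Close()
	}
	_ = ensureWithBackoff(probe, 200*time.Millisecond, 7*time.Second, 6)
}
```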
May 27 18:22:56.070985 systemd[1]: Started cri-containerd-bd467ebb8fdcfd08c6eead50282dec48c1592bf2e730666a8c091b8e57cd7e30.scope - libcontainer container bd467ebb8fdcfd08c6eead50282dec48c1592bf2e730666a8c091b8e57cd7e30. May 27 18:22:56.071427 kubelet[2412]: W0527 18:22:56.071121 2412 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.24.4.229:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.229:6443: connect: connection refused May 27 18:22:56.071427 kubelet[2412]: E0527 18:22:56.071194 2412 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.24.4.229:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.24.4.229:6443: connect: connection refused" logger="UnhandledError" May 27 18:22:56.075364 systemd[1]: Started cri-containerd-d8334afa42bcc75ee9142ee63811d0e83f2ba68ad2d092c46eb24065649abd3e.scope - libcontainer container d8334afa42bcc75ee9142ee63811d0e83f2ba68ad2d092c46eb24065649abd3e. May 27 18:22:56.143985 kubelet[2412]: W0527 18:22:56.143173 2412 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.24.4.229:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4344-0-0-3-6dd1c807ec.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.229:6443: connect: connection refused May 27 18:22:56.143985 kubelet[2412]: E0527 18:22:56.143803 2412 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.24.4.229:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4344-0-0-3-6dd1c807ec.novalocal&limit=500&resourceVersion=0\": dial tcp 172.24.4.229:6443: connect: connection refused" logger="UnhandledError" May 27 18:22:56.155343 containerd[1585]: time="2025-05-27T18:22:56.155232554Z" level=info msg="StartContainer for \"1a1b2d49c5973cd69fce25e3e4ba8ef7db6bcdb9991b398a2d5b8433f366265a\" returns successfully" May 27 18:22:56.163665 containerd[1585]: time="2025-05-27T18:22:56.163286232Z" level=info msg="StartContainer for \"bd467ebb8fdcfd08c6eead50282dec48c1592bf2e730666a8c091b8e57cd7e30\" returns successfully" May 27 18:22:56.182214 kubelet[2412]: W0527 18:22:56.182118 2412 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.24.4.229:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.24.4.229:6443: connect: connection refused May 27 18:22:56.182214 kubelet[2412]: E0527 18:22:56.182202 2412 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.24.4.229:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.24.4.229:6443: connect: connection refused" logger="UnhandledError" May 27 18:22:56.208364 containerd[1585]: time="2025-05-27T18:22:56.208315061Z" level=info msg="StartContainer for \"d8334afa42bcc75ee9142ee63811d0e83f2ba68ad2d092c46eb24065649abd3e\" returns successfully" May 27 18:22:56.823698 kubelet[2412]: I0527 18:22:56.822713 2412 kubelet_node_status.go:72] "Attempting to register node" node="ci-4344-0-0-3-6dd1c807ec.novalocal" May 27 18:22:57.900011 kubelet[2412]: E0527 18:22:57.899954 2412 nodelease.go:49] "Failed to get node when trying to set owner ref 
to the node lease" err="nodes \"ci-4344-0-0-3-6dd1c807ec.novalocal\" not found" node="ci-4344-0-0-3-6dd1c807ec.novalocal" May 27 18:22:57.926026 kubelet[2412]: I0527 18:22:57.925981 2412 kubelet_node_status.go:75] "Successfully registered node" node="ci-4344-0-0-3-6dd1c807ec.novalocal" May 27 18:22:57.926333 kubelet[2412]: E0527 18:22:57.926184 2412 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-4344-0-0-3-6dd1c807ec.novalocal\": node \"ci-4344-0-0-3-6dd1c807ec.novalocal\" not found" May 27 18:22:58.183713 kubelet[2412]: I0527 18:22:58.181920 2412 apiserver.go:52] "Watching apiserver" May 27 18:22:58.202102 kubelet[2412]: I0527 18:22:58.202041 2412 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" May 27 18:22:58.269063 kubelet[2412]: E0527 18:22:58.268940 2412 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4344-0-0-3-6dd1c807ec.novalocal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4344-0-0-3-6dd1c807ec.novalocal" May 27 18:22:58.790720 kubelet[2412]: W0527 18:22:58.790631 2412 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 27 18:23:00.513541 systemd[1]: Reload requested from client PID 2685 ('systemctl') (unit session-11.scope)... May 27 18:23:00.513652 systemd[1]: Reloading... May 27 18:23:00.629731 zram_generator::config[2742]: No configuration found. May 27 18:23:00.743221 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 27 18:23:00.906866 systemd[1]: Reloading finished in 392 ms. May 27 18:23:00.936837 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 27 18:23:00.951274 systemd[1]: kubelet.service: Deactivated successfully. May 27 18:23:00.951559 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 27 18:23:00.951638 systemd[1]: kubelet.service: Consumed 1.546s CPU time, 130.3M memory peak. May 27 18:23:00.955205 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 18:23:01.352502 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 27 18:23:01.361744 (kubelet)[2794]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 27 18:23:01.452239 kubelet[2794]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 27 18:23:01.452239 kubelet[2794]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 27 18:23:01.452239 kubelet[2794]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
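The kube-apiserver, kube-controller-manager and kube-scheduler pods above did not come from the API server; they are static pods read from /etc/kubernetes/manifests (the "Adding static pod path" lines), for which the kubelet then creates mirror pods on the API server once it is reachable, which is why the earlier "Failed creating a mirror pod ... no PriorityClass with name system-node-critical" error could appear transiently before the cluster was fully up. A small sketch of scanning that directory, assuming the usual kubeadm manifest file names; the helper is mine and purely illustrative:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// listStaticPodManifests returns the manifest files the kubelet would watch
// in its static pod path (typically kube-apiserver.yaml, kube-scheduler.yaml,
// kube-controller-manager.yaml and etcd.yaml on a kubeadm control-plane node).
func listStaticPodManifests(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var manifests []string
	for _, e := range entries {
		if !e.IsDir() && filepath.Ext(e.Name()) == ".yaml" {
			manifests = append(manifests, filepath.Join(dir, e.Name()))
		}
	}
	return manifests, nil
}

func main() {
	files, err := listStaticPodManifests("/etc/kubernetes/manifests")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	for _, f := range files {
		fmt.Println(f)
	}
}
```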
May 27 18:23:01.452239 kubelet[2794]: I0527 18:23:01.450446 2794 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 27 18:23:01.459448 kubelet[2794]: I0527 18:23:01.459403 2794 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" May 27 18:23:01.459448 kubelet[2794]: I0527 18:23:01.459433 2794 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 27 18:23:01.459896 kubelet[2794]: I0527 18:23:01.459802 2794 server.go:934] "Client rotation is on, will bootstrap in background" May 27 18:23:01.462037 kubelet[2794]: I0527 18:23:01.462009 2794 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 27 18:23:01.465623 kubelet[2794]: I0527 18:23:01.464793 2794 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 27 18:23:01.474265 kubelet[2794]: I0527 18:23:01.474228 2794 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 27 18:23:01.481409 kubelet[2794]: I0527 18:23:01.481358 2794 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 27 18:23:01.481837 kubelet[2794]: I0527 18:23:01.481511 2794 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 27 18:23:01.481837 kubelet[2794]: I0527 18:23:01.481620 2794 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 27 18:23:01.482085 kubelet[2794]: I0527 18:23:01.481653 2794 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4344-0-0-3-6dd1c807ec.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 27 18:23:01.482085 kubelet[2794]: I0527 18:23:01.481942 2794 topology_manager.go:138] "Creating topology manager with none policy" May 27 18:23:01.482085 kubelet[2794]: I0527 18:23:01.481955 2794 container_manager_linux.go:300] "Creating 
device plugin manager" May 27 18:23:01.482085 kubelet[2794]: I0527 18:23:01.482031 2794 state_mem.go:36] "Initialized new in-memory state store" May 27 18:23:01.482900 kubelet[2794]: I0527 18:23:01.482377 2794 kubelet.go:408] "Attempting to sync node with API server" May 27 18:23:01.483582 kubelet[2794]: I0527 18:23:01.482400 2794 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 27 18:23:01.483582 kubelet[2794]: I0527 18:23:01.483092 2794 kubelet.go:314] "Adding apiserver pod source" May 27 18:23:01.484437 kubelet[2794]: I0527 18:23:01.484139 2794 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 27 18:23:01.490726 kubelet[2794]: I0527 18:23:01.488207 2794 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 27 18:23:01.490726 kubelet[2794]: I0527 18:23:01.489471 2794 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 27 18:23:01.493323 kubelet[2794]: I0527 18:23:01.493306 2794 server.go:1274] "Started kubelet" May 27 18:23:01.497411 kubelet[2794]: I0527 18:23:01.497389 2794 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 27 18:23:01.498892 kubelet[2794]: I0527 18:23:01.498850 2794 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 27 18:23:01.500747 kubelet[2794]: I0527 18:23:01.500714 2794 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 27 18:23:01.505910 kubelet[2794]: I0527 18:23:01.505461 2794 volume_manager.go:289] "Starting Kubelet Volume Manager" May 27 18:23:01.506996 kubelet[2794]: E0527 18:23:01.506975 2794 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4344-0-0-3-6dd1c807ec.novalocal\" not found" May 27 18:23:01.507935 kubelet[2794]: I0527 18:23:01.507909 2794 desired_state_of_world_populator.go:147] "Desired state populator starts to run" May 27 18:23:01.508170 kubelet[2794]: I0527 18:23:01.508155 2794 reconciler.go:26] "Reconciler: start to sync state" May 27 18:23:01.511266 kubelet[2794]: I0527 18:23:01.511249 2794 server.go:449] "Adding debug handlers to kubelet server" May 27 18:23:01.515503 kubelet[2794]: I0527 18:23:01.515468 2794 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 27 18:23:01.515826 kubelet[2794]: I0527 18:23:01.515810 2794 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 27 18:23:01.527181 kubelet[2794]: I0527 18:23:01.527147 2794 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 27 18:23:01.533767 kubelet[2794]: I0527 18:23:01.533432 2794 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 27 18:23:01.533767 kubelet[2794]: I0527 18:23:01.533491 2794 status_manager.go:217] "Starting to sync pod status with apiserver" May 27 18:23:01.533767 kubelet[2794]: I0527 18:23:01.533525 2794 kubelet.go:2321] "Starting kubelet main sync loop" May 27 18:23:01.533767 kubelet[2794]: E0527 18:23:01.533585 2794 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 27 18:23:01.551407 kubelet[2794]: I0527 18:23:01.551349 2794 factory.go:221] Registration of the containerd container factory successfully May 27 18:23:01.552057 kubelet[2794]: I0527 18:23:01.552043 2794 factory.go:221] Registration of the systemd container factory successfully May 27 18:23:01.552465 kubelet[2794]: I0527 18:23:01.552301 2794 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 27 18:23:01.578611 kubelet[2794]: E0527 18:23:01.578537 2794 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 27 18:23:01.628753 kubelet[2794]: I0527 18:23:01.627553 2794 cpu_manager.go:214] "Starting CPU manager" policy="none" May 27 18:23:01.628753 kubelet[2794]: I0527 18:23:01.627578 2794 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 27 18:23:01.628753 kubelet[2794]: I0527 18:23:01.627608 2794 state_mem.go:36] "Initialized new in-memory state store" May 27 18:23:01.628753 kubelet[2794]: I0527 18:23:01.627850 2794 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 27 18:23:01.628753 kubelet[2794]: I0527 18:23:01.628135 2794 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 27 18:23:01.628753 kubelet[2794]: I0527 18:23:01.628195 2794 policy_none.go:49] "None policy: Start" May 27 18:23:01.630094 kubelet[2794]: I0527 18:23:01.630075 2794 memory_manager.go:170] "Starting memorymanager" policy="None" May 27 18:23:01.630213 kubelet[2794]: I0527 18:23:01.630201 2794 state_mem.go:35] "Initializing new in-memory state store" May 27 18:23:01.630482 kubelet[2794]: I0527 18:23:01.630466 2794 state_mem.go:75] "Updated machine memory state" May 27 18:23:01.634051 kubelet[2794]: E0527 18:23:01.634014 2794 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 27 18:23:01.639315 kubelet[2794]: I0527 18:23:01.639278 2794 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 27 18:23:01.640560 kubelet[2794]: I0527 18:23:01.639456 2794 eviction_manager.go:189] "Eviction manager: starting control loop" May 27 18:23:01.640560 kubelet[2794]: I0527 18:23:01.639482 2794 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 27 18:23:01.640560 kubelet[2794]: I0527 18:23:01.639799 2794 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 27 18:23:01.763633 kubelet[2794]: I0527 18:23:01.762888 2794 kubelet_node_status.go:72] "Attempting to register node" node="ci-4344-0-0-3-6dd1c807ec.novalocal" May 27 18:23:01.786288 kubelet[2794]: I0527 18:23:01.785912 2794 kubelet_node_status.go:111] "Node was previously registered" node="ci-4344-0-0-3-6dd1c807ec.novalocal" May 27 18:23:01.786288 kubelet[2794]: I0527 18:23:01.786207 2794 kubelet_node_status.go:75] "Successfully 
registered node" node="ci-4344-0-0-3-6dd1c807ec.novalocal" May 27 18:23:01.847407 kubelet[2794]: W0527 18:23:01.847312 2794 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 27 18:23:01.850665 kubelet[2794]: W0527 18:23:01.849778 2794 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 27 18:23:01.856356 kubelet[2794]: W0527 18:23:01.856011 2794 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 27 18:23:01.856356 kubelet[2794]: E0527 18:23:01.856087 2794 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4344-0-0-3-6dd1c807ec.novalocal\" already exists" pod="kube-system/kube-controller-manager-ci-4344-0-0-3-6dd1c807ec.novalocal" May 27 18:23:01.911883 kubelet[2794]: I0527 18:23:01.911169 2794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/057cd7ec800838b449697b4d1e8b808c-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4344-0-0-3-6dd1c807ec.novalocal\" (UID: \"057cd7ec800838b449697b4d1e8b808c\") " pod="kube-system/kube-controller-manager-ci-4344-0-0-3-6dd1c807ec.novalocal" May 27 18:23:01.911883 kubelet[2794]: I0527 18:23:01.911310 2794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2d84684fb6fbc47c729f91b5e80ff045-kubeconfig\") pod \"kube-scheduler-ci-4344-0-0-3-6dd1c807ec.novalocal\" (UID: \"2d84684fb6fbc47c729f91b5e80ff045\") " pod="kube-system/kube-scheduler-ci-4344-0-0-3-6dd1c807ec.novalocal" May 27 18:23:01.911883 kubelet[2794]: I0527 18:23:01.911368 2794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/853c062c0257438f2171db804bd8c0e8-ca-certs\") pod \"kube-apiserver-ci-4344-0-0-3-6dd1c807ec.novalocal\" (UID: \"853c062c0257438f2171db804bd8c0e8\") " pod="kube-system/kube-apiserver-ci-4344-0-0-3-6dd1c807ec.novalocal" May 27 18:23:01.911883 kubelet[2794]: I0527 18:23:01.911412 2794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/853c062c0257438f2171db804bd8c0e8-k8s-certs\") pod \"kube-apiserver-ci-4344-0-0-3-6dd1c807ec.novalocal\" (UID: \"853c062c0257438f2171db804bd8c0e8\") " pod="kube-system/kube-apiserver-ci-4344-0-0-3-6dd1c807ec.novalocal" May 27 18:23:01.911883 kubelet[2794]: I0527 18:23:01.911460 2794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/057cd7ec800838b449697b4d1e8b808c-ca-certs\") pod \"kube-controller-manager-ci-4344-0-0-3-6dd1c807ec.novalocal\" (UID: \"057cd7ec800838b449697b4d1e8b808c\") " pod="kube-system/kube-controller-manager-ci-4344-0-0-3-6dd1c807ec.novalocal" May 27 18:23:01.912574 kubelet[2794]: I0527 18:23:01.911508 2794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/057cd7ec800838b449697b4d1e8b808c-flexvolume-dir\") pod 
\"kube-controller-manager-ci-4344-0-0-3-6dd1c807ec.novalocal\" (UID: \"057cd7ec800838b449697b4d1e8b808c\") " pod="kube-system/kube-controller-manager-ci-4344-0-0-3-6dd1c807ec.novalocal" May 27 18:23:01.912574 kubelet[2794]: I0527 18:23:01.911553 2794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/057cd7ec800838b449697b4d1e8b808c-k8s-certs\") pod \"kube-controller-manager-ci-4344-0-0-3-6dd1c807ec.novalocal\" (UID: \"057cd7ec800838b449697b4d1e8b808c\") " pod="kube-system/kube-controller-manager-ci-4344-0-0-3-6dd1c807ec.novalocal" May 27 18:23:01.912574 kubelet[2794]: I0527 18:23:01.911598 2794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/853c062c0257438f2171db804bd8c0e8-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4344-0-0-3-6dd1c807ec.novalocal\" (UID: \"853c062c0257438f2171db804bd8c0e8\") " pod="kube-system/kube-apiserver-ci-4344-0-0-3-6dd1c807ec.novalocal" May 27 18:23:01.912574 kubelet[2794]: I0527 18:23:01.911649 2794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/057cd7ec800838b449697b4d1e8b808c-kubeconfig\") pod \"kube-controller-manager-ci-4344-0-0-3-6dd1c807ec.novalocal\" (UID: \"057cd7ec800838b449697b4d1e8b808c\") " pod="kube-system/kube-controller-manager-ci-4344-0-0-3-6dd1c807ec.novalocal" May 27 18:23:02.486709 kubelet[2794]: I0527 18:23:02.484521 2794 apiserver.go:52] "Watching apiserver" May 27 18:23:02.508849 kubelet[2794]: I0527 18:23:02.508788 2794 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" May 27 18:23:02.678722 kubelet[2794]: I0527 18:23:02.677495 2794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4344-0-0-3-6dd1c807ec.novalocal" podStartSLOduration=4.67745281 podStartE2EDuration="4.67745281s" podCreationTimestamp="2025-05-27 18:22:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 18:23:02.656035481 +0000 UTC m=+1.283788546" watchObservedRunningTime="2025-05-27 18:23:02.67745281 +0000 UTC m=+1.305205866" May 27 18:23:02.708103 kubelet[2794]: I0527 18:23:02.708037 2794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4344-0-0-3-6dd1c807ec.novalocal" podStartSLOduration=1.708010175 podStartE2EDuration="1.708010175s" podCreationTimestamp="2025-05-27 18:23:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 18:23:02.679079237 +0000 UTC m=+1.306832292" watchObservedRunningTime="2025-05-27 18:23:02.708010175 +0000 UTC m=+1.335763230" May 27 18:23:02.728454 kubelet[2794]: I0527 18:23:02.728394 2794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4344-0-0-3-6dd1c807ec.novalocal" podStartSLOduration=1.7283687269999999 podStartE2EDuration="1.728368727s" podCreationTimestamp="2025-05-27 18:23:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 18:23:02.70991821 +0000 UTC m=+1.337671265" watchObservedRunningTime="2025-05-27 18:23:02.728368727 +0000 
UTC m=+1.356121792" May 27 18:23:06.871074 kubelet[2794]: I0527 18:23:06.870557 2794 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 27 18:23:06.873349 containerd[1585]: time="2025-05-27T18:23:06.872361815Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 27 18:23:06.876889 kubelet[2794]: I0527 18:23:06.875553 2794 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 27 18:23:07.753622 kubelet[2794]: I0527 18:23:07.753537 2794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a62f07cc-629d-4330-af16-166f08a57ab5-xtables-lock\") pod \"kube-proxy-gs5rq\" (UID: \"a62f07cc-629d-4330-af16-166f08a57ab5\") " pod="kube-system/kube-proxy-gs5rq" May 27 18:23:07.753622 kubelet[2794]: I0527 18:23:07.753630 2794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a62f07cc-629d-4330-af16-166f08a57ab5-lib-modules\") pod \"kube-proxy-gs5rq\" (UID: \"a62f07cc-629d-4330-af16-166f08a57ab5\") " pod="kube-system/kube-proxy-gs5rq" May 27 18:23:07.754258 kubelet[2794]: I0527 18:23:07.753677 2794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a62f07cc-629d-4330-af16-166f08a57ab5-kube-proxy\") pod \"kube-proxy-gs5rq\" (UID: \"a62f07cc-629d-4330-af16-166f08a57ab5\") " pod="kube-system/kube-proxy-gs5rq" May 27 18:23:07.754258 kubelet[2794]: I0527 18:23:07.753775 2794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnrsf\" (UniqueName: \"kubernetes.io/projected/a62f07cc-629d-4330-af16-166f08a57ab5-kube-api-access-wnrsf\") pod \"kube-proxy-gs5rq\" (UID: \"a62f07cc-629d-4330-af16-166f08a57ab5\") " pod="kube-system/kube-proxy-gs5rq" May 27 18:23:07.760418 systemd[1]: Created slice kubepods-besteffort-poda62f07cc_629d_4330_af16_166f08a57ab5.slice - libcontainer container kubepods-besteffort-poda62f07cc_629d_4330_af16_166f08a57ab5.slice. May 27 18:23:07.920719 systemd[1]: Created slice kubepods-besteffort-pod2bd5b395_d567_40e0_bc07_187e428630a4.slice - libcontainer container kubepods-besteffort-pod2bd5b395_d567_40e0_bc07_187e428630a4.slice. 
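The slice names systemd creates here encode the pod's QoS class and UID: kubepods-besteffort-poda62f07cc_629d_4330_af16_166f08a57ab5.slice is the BestEffort slice for the kube-proxy pod with UID a62f07cc-629d-4330-af16-166f08a57ab5, the dashes in the UID becoming underscores because '-' is the slice-hierarchy separator. A minimal sketch of building that name from QoS class and UID, matching the layout visible in this log rather than quoting the kubelet's actual helper:

```go
package main

import (
	"fmt"
	"strings"
)

// podSliceName builds the slice name the systemd cgroup driver uses for a
// pod: kubepods[-<qos>]-pod<uid-with-underscores>.slice. Guaranteed pods sit
// directly under kubepods.slice, so qos is empty for them.
func podSliceName(qos, uid string) string {
	prefix := "kubepods"
	if qos != "" {
		prefix += "-" + qos
	}
	return prefix + "-pod" + strings.ReplaceAll(uid, "-", "_") + ".slice"
}

func main() {
	// Matches the units created in the log above.
	fmt.Println(podSliceName("besteffort", "a62f07cc-629d-4330-af16-166f08a57ab5"))
	fmt.Println(podSliceName("burstable", "2d84684fb6fbc47c729f91b5e80ff045"))
}
```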
May 27 18:23:07.955519 kubelet[2794]: I0527 18:23:07.955439 2794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/2bd5b395-d567-40e0-bc07-187e428630a4-var-lib-calico\") pod \"tigera-operator-7c5755cdcb-7nz2w\" (UID: \"2bd5b395-d567-40e0-bc07-187e428630a4\") " pod="tigera-operator/tigera-operator-7c5755cdcb-7nz2w" May 27 18:23:07.955519 kubelet[2794]: I0527 18:23:07.955479 2794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqdll\" (UniqueName: \"kubernetes.io/projected/2bd5b395-d567-40e0-bc07-187e428630a4-kube-api-access-kqdll\") pod \"tigera-operator-7c5755cdcb-7nz2w\" (UID: \"2bd5b395-d567-40e0-bc07-187e428630a4\") " pod="tigera-operator/tigera-operator-7c5755cdcb-7nz2w" May 27 18:23:08.072454 containerd[1585]: time="2025-05-27T18:23:08.071457292Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gs5rq,Uid:a62f07cc-629d-4330-af16-166f08a57ab5,Namespace:kube-system,Attempt:0,}" May 27 18:23:08.147169 containerd[1585]: time="2025-05-27T18:23:08.147047922Z" level=info msg="connecting to shim f420eec0ae2d172129d486f4cbf8be6f2136fd0074871c5a10e5f4d79e59a9a0" address="unix:///run/containerd/s/59398474465e2c25dbe734b4fd5f057f4e8e68e274fa506f6dea0be7ba4eb27c" namespace=k8s.io protocol=ttrpc version=3 May 27 18:23:08.196840 systemd[1]: Started cri-containerd-f420eec0ae2d172129d486f4cbf8be6f2136fd0074871c5a10e5f4d79e59a9a0.scope - libcontainer container f420eec0ae2d172129d486f4cbf8be6f2136fd0074871c5a10e5f4d79e59a9a0. May 27 18:23:08.226418 containerd[1585]: time="2025-05-27T18:23:08.226250872Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7c5755cdcb-7nz2w,Uid:2bd5b395-d567-40e0-bc07-187e428630a4,Namespace:tigera-operator,Attempt:0,}" May 27 18:23:08.232055 containerd[1585]: time="2025-05-27T18:23:08.232003665Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gs5rq,Uid:a62f07cc-629d-4330-af16-166f08a57ab5,Namespace:kube-system,Attempt:0,} returns sandbox id \"f420eec0ae2d172129d486f4cbf8be6f2136fd0074871c5a10e5f4d79e59a9a0\"" May 27 18:23:08.237125 containerd[1585]: time="2025-05-27T18:23:08.237084615Z" level=info msg="CreateContainer within sandbox \"f420eec0ae2d172129d486f4cbf8be6f2136fd0074871c5a10e5f4d79e59a9a0\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 27 18:23:08.268292 containerd[1585]: time="2025-05-27T18:23:08.268156360Z" level=info msg="connecting to shim 1399ad74772ed4beda5f75ba1fa2a2701f330e4feef3c79f526ec0189c70eb61" address="unix:///run/containerd/s/e497778316e14bacfa8bdadbbbb177416e1fc5ed6ae341a3f08dc216e1823e09" namespace=k8s.io protocol=ttrpc version=3 May 27 18:23:08.275863 containerd[1585]: time="2025-05-27T18:23:08.275418925Z" level=info msg="Container 016aa3a3a62cf3fbc8e2f837e66ecf50356537399dec94240e6d369b9a2a607e: CDI devices from CRI Config.CDIDevices: []" May 27 18:23:08.293931 containerd[1585]: time="2025-05-27T18:23:08.293859666Z" level=info msg="CreateContainer within sandbox \"f420eec0ae2d172129d486f4cbf8be6f2136fd0074871c5a10e5f4d79e59a9a0\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"016aa3a3a62cf3fbc8e2f837e66ecf50356537399dec94240e6d369b9a2a607e\"" May 27 18:23:08.296100 containerd[1585]: time="2025-05-27T18:23:08.296048279Z" level=info msg="StartContainer for \"016aa3a3a62cf3fbc8e2f837e66ecf50356537399dec94240e6d369b9a2a607e\"" May 27 18:23:08.299125 containerd[1585]: 
time="2025-05-27T18:23:08.299083202Z" level=info msg="connecting to shim 016aa3a3a62cf3fbc8e2f837e66ecf50356537399dec94240e6d369b9a2a607e" address="unix:///run/containerd/s/59398474465e2c25dbe734b4fd5f057f4e8e68e274fa506f6dea0be7ba4eb27c" protocol=ttrpc version=3 May 27 18:23:08.299911 systemd[1]: Started cri-containerd-1399ad74772ed4beda5f75ba1fa2a2701f330e4feef3c79f526ec0189c70eb61.scope - libcontainer container 1399ad74772ed4beda5f75ba1fa2a2701f330e4feef3c79f526ec0189c70eb61. May 27 18:23:08.330954 systemd[1]: Started cri-containerd-016aa3a3a62cf3fbc8e2f837e66ecf50356537399dec94240e6d369b9a2a607e.scope - libcontainer container 016aa3a3a62cf3fbc8e2f837e66ecf50356537399dec94240e6d369b9a2a607e. May 27 18:23:08.369077 containerd[1585]: time="2025-05-27T18:23:08.368972328Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7c5755cdcb-7nz2w,Uid:2bd5b395-d567-40e0-bc07-187e428630a4,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"1399ad74772ed4beda5f75ba1fa2a2701f330e4feef3c79f526ec0189c70eb61\"" May 27 18:23:08.373470 containerd[1585]: time="2025-05-27T18:23:08.373263940Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.0\"" May 27 18:23:08.404050 containerd[1585]: time="2025-05-27T18:23:08.403998851Z" level=info msg="StartContainer for \"016aa3a3a62cf3fbc8e2f837e66ecf50356537399dec94240e6d369b9a2a607e\" returns successfully" May 27 18:23:08.664979 kubelet[2794]: I0527 18:23:08.664370 2794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-gs5rq" podStartSLOduration=1.664155272 podStartE2EDuration="1.664155272s" podCreationTimestamp="2025-05-27 18:23:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 18:23:08.659924403 +0000 UTC m=+7.287677508" watchObservedRunningTime="2025-05-27 18:23:08.664155272 +0000 UTC m=+7.291908378" May 27 18:23:10.072517 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1646928211.mount: Deactivated successfully. 
May 27 18:23:10.972989 containerd[1585]: time="2025-05-27T18:23:10.972908737Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 18:23:10.974329 containerd[1585]: time="2025-05-27T18:23:10.974146839Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.0: active requests=0, bytes read=25055451" May 27 18:23:10.975446 containerd[1585]: time="2025-05-27T18:23:10.975409189Z" level=info msg="ImageCreate event name:\"sha256:5e43c1322619406528ff596056dfeb70cb8d20c5c00439feb752a7725302e033\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 18:23:10.978224 containerd[1585]: time="2025-05-27T18:23:10.978189886Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:e0a34b265aebce1a2db906d8dad99190706e8bf3910cae626b9c2eb6bbb21775\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 18:23:10.979046 containerd[1585]: time="2025-05-27T18:23:10.978990229Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.0\" with image id \"sha256:5e43c1322619406528ff596056dfeb70cb8d20c5c00439feb752a7725302e033\", repo tag \"quay.io/tigera/operator:v1.38.0\", repo digest \"quay.io/tigera/operator@sha256:e0a34b265aebce1a2db906d8dad99190706e8bf3910cae626b9c2eb6bbb21775\", size \"25051446\" in 2.605687542s" May 27 18:23:10.979046 containerd[1585]: time="2025-05-27T18:23:10.979043699Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.0\" returns image reference \"sha256:5e43c1322619406528ff596056dfeb70cb8d20c5c00439feb752a7725302e033\"" May 27 18:23:10.982970 containerd[1585]: time="2025-05-27T18:23:10.982929802Z" level=info msg="CreateContainer within sandbox \"1399ad74772ed4beda5f75ba1fa2a2701f330e4feef3c79f526ec0189c70eb61\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" May 27 18:23:10.998861 containerd[1585]: time="2025-05-27T18:23:10.998817965Z" level=info msg="Container 2d78d2df8c6771314108c38cd45ab4dbf3e6fca81eac3ac03c06ed5b0ffb7dc3: CDI devices from CRI Config.CDIDevices: []" May 27 18:23:11.007045 containerd[1585]: time="2025-05-27T18:23:11.006915872Z" level=info msg="CreateContainer within sandbox \"1399ad74772ed4beda5f75ba1fa2a2701f330e4feef3c79f526ec0189c70eb61\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"2d78d2df8c6771314108c38cd45ab4dbf3e6fca81eac3ac03c06ed5b0ffb7dc3\"" May 27 18:23:11.008608 containerd[1585]: time="2025-05-27T18:23:11.007931831Z" level=info msg="StartContainer for \"2d78d2df8c6771314108c38cd45ab4dbf3e6fca81eac3ac03c06ed5b0ffb7dc3\"" May 27 18:23:11.009834 containerd[1585]: time="2025-05-27T18:23:11.009767496Z" level=info msg="connecting to shim 2d78d2df8c6771314108c38cd45ab4dbf3e6fca81eac3ac03c06ed5b0ffb7dc3" address="unix:///run/containerd/s/e497778316e14bacfa8bdadbbbb177416e1fc5ed6ae341a3f08dc216e1823e09" protocol=ttrpc version=3 May 27 18:23:11.032998 systemd[1]: Started cri-containerd-2d78d2df8c6771314108c38cd45ab4dbf3e6fca81eac3ac03c06ed5b0ffb7dc3.scope - libcontainer container 2d78d2df8c6771314108c38cd45ab4dbf3e6fca81eac3ac03c06ed5b0ffb7dc3. 
May 27 18:23:11.068117 containerd[1585]: time="2025-05-27T18:23:11.068079849Z" level=info msg="StartContainer for \"2d78d2df8c6771314108c38cd45ab4dbf3e6fca81eac3ac03c06ed5b0ffb7dc3\" returns successfully" May 27 18:23:17.942386 sudo[1826]: pam_unix(sudo:session): session closed for user root May 27 18:23:18.223059 sshd[1825]: Connection closed by 172.24.4.1 port 57134 May 27 18:23:18.227968 sshd-session[1823]: pam_unix(sshd:session): session closed for user core May 27 18:23:18.240738 systemd[1]: sshd@8-172.24.4.229:22-172.24.4.1:57134.service: Deactivated successfully. May 27 18:23:18.244413 systemd[1]: session-11.scope: Deactivated successfully. May 27 18:23:18.244892 systemd[1]: session-11.scope: Consumed 8.001s CPU time, 224.1M memory peak. May 27 18:23:18.250034 systemd-logind[1498]: Session 11 logged out. Waiting for processes to exit. May 27 18:23:18.253083 systemd-logind[1498]: Removed session 11. May 27 18:23:22.705816 kubelet[2794]: I0527 18:23:22.705408 2794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7c5755cdcb-7nz2w" podStartSLOduration=13.096695134 podStartE2EDuration="15.705311247s" podCreationTimestamp="2025-05-27 18:23:07 +0000 UTC" firstStartedPulling="2025-05-27 18:23:08.371673707 +0000 UTC m=+6.999426762" lastFinishedPulling="2025-05-27 18:23:10.98028982 +0000 UTC m=+9.608042875" observedRunningTime="2025-05-27 18:23:11.684725013 +0000 UTC m=+10.312478118" watchObservedRunningTime="2025-05-27 18:23:22.705311247 +0000 UTC m=+21.333064312" May 27 18:23:22.719615 kubelet[2794]: W0527 18:23:22.719478 2794 reflector.go:561] object-"calico-system"/"typha-certs": failed to list *v1.Secret: secrets "typha-certs" is forbidden: User "system:node:ci-4344-0-0-3-6dd1c807ec.novalocal" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4344-0-0-3-6dd1c807ec.novalocal' and this object May 27 18:23:22.720414 kubelet[2794]: E0527 18:23:22.720364 2794 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"typha-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"typha-certs\" is forbidden: User \"system:node:ci-4344-0-0-3-6dd1c807ec.novalocal\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4344-0-0-3-6dd1c807ec.novalocal' and this object" logger="UnhandledError" May 27 18:23:22.725231 systemd[1]: Created slice kubepods-besteffort-poda1711128_3c39_4d16_a611_432e2772467e.slice - libcontainer container kubepods-besteffort-poda1711128_3c39_4d16_a611_432e2772467e.slice. 
May 27 18:23:22.757095 kubelet[2794]: I0527 18:23:22.757055 2794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2cr5\" (UniqueName: \"kubernetes.io/projected/a1711128-3c39-4d16-a611-432e2772467e-kube-api-access-z2cr5\") pod \"calico-typha-b4ff97cd9-46l8k\" (UID: \"a1711128-3c39-4d16-a611-432e2772467e\") " pod="calico-system/calico-typha-b4ff97cd9-46l8k" May 27 18:23:22.757445 kubelet[2794]: I0527 18:23:22.757326 2794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/a1711128-3c39-4d16-a611-432e2772467e-typha-certs\") pod \"calico-typha-b4ff97cd9-46l8k\" (UID: \"a1711128-3c39-4d16-a611-432e2772467e\") " pod="calico-system/calico-typha-b4ff97cd9-46l8k" May 27 18:23:22.757445 kubelet[2794]: I0527 18:23:22.757390 2794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a1711128-3c39-4d16-a611-432e2772467e-tigera-ca-bundle\") pod \"calico-typha-b4ff97cd9-46l8k\" (UID: \"a1711128-3c39-4d16-a611-432e2772467e\") " pod="calico-system/calico-typha-b4ff97cd9-46l8k" May 27 18:23:23.060077 kubelet[2794]: I0527 18:23:23.060018 2794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/e97d7a3a-9e8c-4e6f-b6e0-2e5fd19ae3c4-policysync\") pod \"calico-node-v9nnf\" (UID: \"e97d7a3a-9e8c-4e6f-b6e0-2e5fd19ae3c4\") " pod="calico-system/calico-node-v9nnf" May 27 18:23:23.060077 kubelet[2794]: I0527 18:23:23.060083 2794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxc7n\" (UniqueName: \"kubernetes.io/projected/e97d7a3a-9e8c-4e6f-b6e0-2e5fd19ae3c4-kube-api-access-hxc7n\") pod \"calico-node-v9nnf\" (UID: \"e97d7a3a-9e8c-4e6f-b6e0-2e5fd19ae3c4\") " pod="calico-system/calico-node-v9nnf" May 27 18:23:23.060332 kubelet[2794]: I0527 18:23:23.060119 2794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/e97d7a3a-9e8c-4e6f-b6e0-2e5fd19ae3c4-node-certs\") pod \"calico-node-v9nnf\" (UID: \"e97d7a3a-9e8c-4e6f-b6e0-2e5fd19ae3c4\") " pod="calico-system/calico-node-v9nnf" May 27 18:23:23.060332 kubelet[2794]: I0527 18:23:23.060144 2794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e97d7a3a-9e8c-4e6f-b6e0-2e5fd19ae3c4-lib-modules\") pod \"calico-node-v9nnf\" (UID: \"e97d7a3a-9e8c-4e6f-b6e0-2e5fd19ae3c4\") " pod="calico-system/calico-node-v9nnf" May 27 18:23:23.060332 kubelet[2794]: I0527 18:23:23.060195 2794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e97d7a3a-9e8c-4e6f-b6e0-2e5fd19ae3c4-var-lib-calico\") pod \"calico-node-v9nnf\" (UID: \"e97d7a3a-9e8c-4e6f-b6e0-2e5fd19ae3c4\") " pod="calico-system/calico-node-v9nnf" May 27 18:23:23.060332 kubelet[2794]: I0527 18:23:23.060214 2794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/e97d7a3a-9e8c-4e6f-b6e0-2e5fd19ae3c4-var-run-calico\") pod \"calico-node-v9nnf\" (UID: \"e97d7a3a-9e8c-4e6f-b6e0-2e5fd19ae3c4\") " pod="calico-system/calico-node-v9nnf" May 27 
18:23:23.060332 kubelet[2794]: I0527 18:23:23.060233 2794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/e97d7a3a-9e8c-4e6f-b6e0-2e5fd19ae3c4-cni-log-dir\") pod \"calico-node-v9nnf\" (UID: \"e97d7a3a-9e8c-4e6f-b6e0-2e5fd19ae3c4\") " pod="calico-system/calico-node-v9nnf" May 27 18:23:23.060508 kubelet[2794]: I0527 18:23:23.060251 2794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/e97d7a3a-9e8c-4e6f-b6e0-2e5fd19ae3c4-cni-bin-dir\") pod \"calico-node-v9nnf\" (UID: \"e97d7a3a-9e8c-4e6f-b6e0-2e5fd19ae3c4\") " pod="calico-system/calico-node-v9nnf" May 27 18:23:23.060508 kubelet[2794]: I0527 18:23:23.060268 2794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/e97d7a3a-9e8c-4e6f-b6e0-2e5fd19ae3c4-cni-net-dir\") pod \"calico-node-v9nnf\" (UID: \"e97d7a3a-9e8c-4e6f-b6e0-2e5fd19ae3c4\") " pod="calico-system/calico-node-v9nnf" May 27 18:23:23.060508 kubelet[2794]: I0527 18:23:23.060291 2794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/e97d7a3a-9e8c-4e6f-b6e0-2e5fd19ae3c4-flexvol-driver-host\") pod \"calico-node-v9nnf\" (UID: \"e97d7a3a-9e8c-4e6f-b6e0-2e5fd19ae3c4\") " pod="calico-system/calico-node-v9nnf" May 27 18:23:23.060508 kubelet[2794]: I0527 18:23:23.060309 2794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e97d7a3a-9e8c-4e6f-b6e0-2e5fd19ae3c4-tigera-ca-bundle\") pod \"calico-node-v9nnf\" (UID: \"e97d7a3a-9e8c-4e6f-b6e0-2e5fd19ae3c4\") " pod="calico-system/calico-node-v9nnf" May 27 18:23:23.060508 kubelet[2794]: I0527 18:23:23.060357 2794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e97d7a3a-9e8c-4e6f-b6e0-2e5fd19ae3c4-xtables-lock\") pod \"calico-node-v9nnf\" (UID: \"e97d7a3a-9e8c-4e6f-b6e0-2e5fd19ae3c4\") " pod="calico-system/calico-node-v9nnf" May 27 18:23:23.070017 systemd[1]: Created slice kubepods-besteffort-pode97d7a3a_9e8c_4e6f_b6e0_2e5fd19ae3c4.slice - libcontainer container kubepods-besteffort-pode97d7a3a_9e8c_4e6f_b6e0_2e5fd19ae3c4.slice. May 27 18:23:23.169711 kubelet[2794]: E0527 18:23:23.168744 2794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 18:23:23.169711 kubelet[2794]: W0527 18:23:23.168776 2794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 18:23:23.169711 kubelet[2794]: E0527 18:23:23.168819 2794 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 18:23:23.169711 kubelet[2794]: E0527 18:23:23.168943 2794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 18:23:23.169711 kubelet[2794]: W0527 18:23:23.168952 2794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 18:23:23.169711 kubelet[2794]: E0527 18:23:23.168961 2794 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 18:23:23.169711 kubelet[2794]: E0527 18:23:23.169067 2794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 18:23:23.169711 kubelet[2794]: W0527 18:23:23.169083 2794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 18:23:23.169711 kubelet[2794]: E0527 18:23:23.169093 2794 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 18:23:23.169711 kubelet[2794]: E0527 18:23:23.169291 2794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 18:23:23.170164 kubelet[2794]: W0527 18:23:23.169300 2794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 18:23:23.170164 kubelet[2794]: E0527 18:23:23.169310 2794 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 18:23:23.171890 kubelet[2794]: E0527 18:23:23.171801 2794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 18:23:23.171890 kubelet[2794]: W0527 18:23:23.171826 2794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 18:23:23.171890 kubelet[2794]: E0527 18:23:23.171848 2794 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 18:23:23.180131 kubelet[2794]: E0527 18:23:23.180094 2794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 18:23:23.180131 kubelet[2794]: W0527 18:23:23.180119 2794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 18:23:23.181879 kubelet[2794]: E0527 18:23:23.180140 2794 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 18:23:23.262056 kubelet[2794]: E0527 18:23:23.261971 2794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 18:23:23.262056 kubelet[2794]: W0527 18:23:23.261993 2794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 18:23:23.262056 kubelet[2794]: E0527 18:23:23.262014 2794 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 18:23:23.345865 kubelet[2794]: E0527 18:23:23.344771 2794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rmtkq" podUID="5f8d96ad-baa3-44d5-afb2-3ebebf965cf1" May 27 18:23:23.359240 kubelet[2794]: E0527 18:23:23.359130 2794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 18:23:23.359240 kubelet[2794]: W0527 18:23:23.359157 2794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 18:23:23.359240 kubelet[2794]: E0527 18:23:23.359178 2794 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 18:23:23.359879 kubelet[2794]: E0527 18:23:23.359818 2794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 18:23:23.359879 kubelet[2794]: W0527 18:23:23.359831 2794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 18:23:23.359879 kubelet[2794]: E0527 18:23:23.359842 2794 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 18:23:23.360233 kubelet[2794]: E0527 18:23:23.360167 2794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 18:23:23.360233 kubelet[2794]: W0527 18:23:23.360179 2794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 18:23:23.360233 kubelet[2794]: E0527 18:23:23.360190 2794 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 18:23:23.362401 kubelet[2794]: E0527 18:23:23.362237 2794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 18:23:23.362401 kubelet[2794]: W0527 18:23:23.362258 2794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 18:23:23.362401 kubelet[2794]: E0527 18:23:23.362276 2794 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 18:23:23.362874 kubelet[2794]: E0527 18:23:23.362735 2794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 18:23:23.362874 kubelet[2794]: W0527 18:23:23.362751 2794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 18:23:23.362874 kubelet[2794]: E0527 18:23:23.362770 2794 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 18:23:23.363942 kubelet[2794]: E0527 18:23:23.363720 2794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 18:23:23.363942 kubelet[2794]: W0527 18:23:23.363743 2794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 18:23:23.363942 kubelet[2794]: E0527 18:23:23.363757 2794 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 18:23:23.364486 kubelet[2794]: E0527 18:23:23.364471 2794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 18:23:23.364758 kubelet[2794]: W0527 18:23:23.364611 2794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 18:23:23.364758 kubelet[2794]: E0527 18:23:23.364637 2794 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 18:23:23.365261 kubelet[2794]: E0527 18:23:23.365126 2794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 18:23:23.365261 kubelet[2794]: W0527 18:23:23.365141 2794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 18:23:23.365261 kubelet[2794]: E0527 18:23:23.365153 2794 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 18:23:23.366831 kubelet[2794]: E0527 18:23:23.366764 2794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 18:23:23.366831 kubelet[2794]: W0527 18:23:23.366780 2794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 18:23:23.366831 kubelet[2794]: E0527 18:23:23.366792 2794 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 18:23:23.367579 kubelet[2794]: E0527 18:23:23.367520 2794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 18:23:23.367579 kubelet[2794]: W0527 18:23:23.367533 2794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 18:23:23.367579 kubelet[2794]: E0527 18:23:23.367543 2794 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 18:23:23.367998 kubelet[2794]: E0527 18:23:23.367903 2794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 18:23:23.367998 kubelet[2794]: W0527 18:23:23.367916 2794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 18:23:23.367998 kubelet[2794]: E0527 18:23:23.367926 2794 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 18:23:23.368519 kubelet[2794]: E0527 18:23:23.368318 2794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 18:23:23.368519 kubelet[2794]: W0527 18:23:23.368331 2794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 18:23:23.368519 kubelet[2794]: E0527 18:23:23.368341 2794 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 18:23:23.368736 kubelet[2794]: E0527 18:23:23.368704 2794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 18:23:23.368814 kubelet[2794]: W0527 18:23:23.368734 2794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 18:23:23.368814 kubelet[2794]: E0527 18:23:23.368759 2794 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 18:23:23.369304 kubelet[2794]: E0527 18:23:23.369283 2794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 18:23:23.369304 kubelet[2794]: W0527 18:23:23.369298 2794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 18:23:23.369450 kubelet[2794]: E0527 18:23:23.369311 2794 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 18:23:23.369676 kubelet[2794]: E0527 18:23:23.369653 2794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 18:23:23.369676 kubelet[2794]: W0527 18:23:23.369669 2794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 18:23:23.370062 kubelet[2794]: E0527 18:23:23.369994 2794 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 18:23:23.370182 kubelet[2794]: E0527 18:23:23.370148 2794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 18:23:23.370182 kubelet[2794]: W0527 18:23:23.370169 2794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 18:23:23.370182 kubelet[2794]: E0527 18:23:23.370181 2794 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 18:23:23.370743 kubelet[2794]: E0527 18:23:23.370675 2794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 18:23:23.370743 kubelet[2794]: W0527 18:23:23.370713 2794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 18:23:23.370743 kubelet[2794]: E0527 18:23:23.370737 2794 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 18:23:23.371496 kubelet[2794]: E0527 18:23:23.371468 2794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 18:23:23.371496 kubelet[2794]: W0527 18:23:23.371483 2794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 18:23:23.371734 kubelet[2794]: E0527 18:23:23.371540 2794 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 18:23:23.371808 kubelet[2794]: E0527 18:23:23.371758 2794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 18:23:23.371808 kubelet[2794]: W0527 18:23:23.371770 2794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 18:23:23.371808 kubelet[2794]: E0527 18:23:23.371781 2794 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 18:23:23.372123 kubelet[2794]: E0527 18:23:23.372102 2794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 18:23:23.372123 kubelet[2794]: W0527 18:23:23.372116 2794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 18:23:23.372339 kubelet[2794]: E0527 18:23:23.372127 2794 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 18:23:23.372718 kubelet[2794]: E0527 18:23:23.372668 2794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 18:23:23.372783 kubelet[2794]: W0527 18:23:23.372768 2794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 18:23:23.372861 kubelet[2794]: E0527 18:23:23.372784 2794 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 18:23:23.372861 kubelet[2794]: I0527 18:23:23.372817 2794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/5f8d96ad-baa3-44d5-afb2-3ebebf965cf1-registration-dir\") pod \"csi-node-driver-rmtkq\" (UID: \"5f8d96ad-baa3-44d5-afb2-3ebebf965cf1\") " pod="calico-system/csi-node-driver-rmtkq" May 27 18:23:23.373211 kubelet[2794]: E0527 18:23:23.373185 2794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 18:23:23.373211 kubelet[2794]: W0527 18:23:23.373202 2794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 18:23:23.373408 kubelet[2794]: E0527 18:23:23.373360 2794 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 18:23:23.374058 kubelet[2794]: E0527 18:23:23.373940 2794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 18:23:23.374058 kubelet[2794]: W0527 18:23:23.373957 2794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 18:23:23.374058 kubelet[2794]: E0527 18:23:23.373969 2794 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 18:23:23.374058 kubelet[2794]: I0527 18:23:23.373990 2794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62tb5\" (UniqueName: \"kubernetes.io/projected/5f8d96ad-baa3-44d5-afb2-3ebebf965cf1-kube-api-access-62tb5\") pod \"csi-node-driver-rmtkq\" (UID: \"5f8d96ad-baa3-44d5-afb2-3ebebf965cf1\") " pod="calico-system/csi-node-driver-rmtkq" May 27 18:23:23.374819 kubelet[2794]: E0527 18:23:23.374770 2794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 18:23:23.375150 containerd[1585]: time="2025-05-27T18:23:23.375033328Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-v9nnf,Uid:e97d7a3a-9e8c-4e6f-b6e0-2e5fd19ae3c4,Namespace:calico-system,Attempt:0,}" May 27 18:23:23.376727 kubelet[2794]: W0527 18:23:23.376371 2794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 18:23:23.376727 kubelet[2794]: E0527 18:23:23.376431 2794 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 18:23:23.376727 kubelet[2794]: I0527 18:23:23.376484 2794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5f8d96ad-baa3-44d5-afb2-3ebebf965cf1-kubelet-dir\") pod \"csi-node-driver-rmtkq\" (UID: \"5f8d96ad-baa3-44d5-afb2-3ebebf965cf1\") " pod="calico-system/csi-node-driver-rmtkq" May 27 18:23:23.376956 kubelet[2794]: E0527 18:23:23.376828 2794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 18:23:23.376956 kubelet[2794]: W0527 18:23:23.376839 2794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 18:23:23.376956 kubelet[2794]: E0527 18:23:23.376878 2794 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 18:23:23.377247 kubelet[2794]: E0527 18:23:23.377227 2794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 18:23:23.377247 kubelet[2794]: W0527 18:23:23.377241 2794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 18:23:23.377502 kubelet[2794]: E0527 18:23:23.377401 2794 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 18:23:23.377904 kubelet[2794]: E0527 18:23:23.377884 2794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 18:23:23.377904 kubelet[2794]: W0527 18:23:23.377899 2794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 18:23:23.378345 kubelet[2794]: E0527 18:23:23.378313 2794 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 18:23:23.378417 kubelet[2794]: I0527 18:23:23.378358 2794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/5f8d96ad-baa3-44d5-afb2-3ebebf965cf1-varrun\") pod \"csi-node-driver-rmtkq\" (UID: \"5f8d96ad-baa3-44d5-afb2-3ebebf965cf1\") " pod="calico-system/csi-node-driver-rmtkq" May 27 18:23:23.379546 kubelet[2794]: E0527 18:23:23.378834 2794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 18:23:23.379546 kubelet[2794]: W0527 18:23:23.378851 2794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 18:23:23.379546 kubelet[2794]: E0527 18:23:23.378876 2794 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 18:23:23.379978 kubelet[2794]: E0527 18:23:23.379925 2794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 18:23:23.379978 kubelet[2794]: W0527 18:23:23.379940 2794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 18:23:23.379978 kubelet[2794]: E0527 18:23:23.379961 2794 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 18:23:23.380427 kubelet[2794]: E0527 18:23:23.380413 2794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 18:23:23.380520 kubelet[2794]: W0527 18:23:23.380490 2794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 18:23:23.380750 kubelet[2794]: E0527 18:23:23.380732 2794 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 18:23:23.381141 kubelet[2794]: E0527 18:23:23.380966 2794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 18:23:23.381141 kubelet[2794]: W0527 18:23:23.380982 2794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 18:23:23.381141 kubelet[2794]: E0527 18:23:23.380994 2794 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 18:23:23.381502 kubelet[2794]: E0527 18:23:23.381440 2794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 18:23:23.381803 kubelet[2794]: W0527 18:23:23.381729 2794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 18:23:23.381803 kubelet[2794]: E0527 18:23:23.381749 2794 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 18:23:23.382423 kubelet[2794]: E0527 18:23:23.382391 2794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 18:23:23.382834 kubelet[2794]: W0527 18:23:23.382724 2794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 18:23:23.382834 kubelet[2794]: E0527 18:23:23.382744 2794 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 18:23:23.383235 kubelet[2794]: E0527 18:23:23.383211 2794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 18:23:23.383667 kubelet[2794]: W0527 18:23:23.383382 2794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 18:23:23.383667 kubelet[2794]: E0527 18:23:23.383403 2794 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 18:23:23.383667 kubelet[2794]: I0527 18:23:23.383442 2794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/5f8d96ad-baa3-44d5-afb2-3ebebf965cf1-socket-dir\") pod \"csi-node-driver-rmtkq\" (UID: \"5f8d96ad-baa3-44d5-afb2-3ebebf965cf1\") " pod="calico-system/csi-node-driver-rmtkq" May 27 18:23:23.384477 kubelet[2794]: E0527 18:23:23.384460 2794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 18:23:23.384573 kubelet[2794]: W0527 18:23:23.384559 2794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 18:23:23.384712 kubelet[2794]: E0527 18:23:23.384647 2794 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 18:23:23.385009 kubelet[2794]: E0527 18:23:23.384958 2794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 18:23:23.385009 kubelet[2794]: W0527 18:23:23.384974 2794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 18:23:23.385009 kubelet[2794]: E0527 18:23:23.384986 2794 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 18:23:23.421276 containerd[1585]: time="2025-05-27T18:23:23.421207804Z" level=info msg="connecting to shim 87b4dbd1d9bc7324f49c6fb17ba520db9816fe166689b6f094da03afb934ce4b" address="unix:///run/containerd/s/b0e8762f40b13f8580e0de75bf00d06af10e6608e44ebd69e12dfdf1de43ca86" namespace=k8s.io protocol=ttrpc version=3 May 27 18:23:23.483161 systemd[1]: Started cri-containerd-87b4dbd1d9bc7324f49c6fb17ba520db9816fe166689b6f094da03afb934ce4b.scope - libcontainer container 87b4dbd1d9bc7324f49c6fb17ba520db9816fe166689b6f094da03afb934ce4b. May 27 18:23:23.484644 kubelet[2794]: E0527 18:23:23.484584 2794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 18:23:23.485034 kubelet[2794]: W0527 18:23:23.484728 2794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 18:23:23.485034 kubelet[2794]: E0527 18:23:23.484766 2794 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 18:23:23.485551 kubelet[2794]: E0527 18:23:23.485418 2794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 18:23:23.485551 kubelet[2794]: W0527 18:23:23.485479 2794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 18:23:23.485551 kubelet[2794]: E0527 18:23:23.485491 2794 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 18:23:23.486475 kubelet[2794]: E0527 18:23:23.486396 2794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 18:23:23.486475 kubelet[2794]: W0527 18:23:23.486412 2794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 18:23:23.486475 kubelet[2794]: E0527 18:23:23.486430 2794 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 18:23:23.486884 kubelet[2794]: E0527 18:23:23.486814 2794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 18:23:23.486884 kubelet[2794]: W0527 18:23:23.486841 2794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 18:23:23.486884 kubelet[2794]: E0527 18:23:23.486871 2794 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 18:23:23.487577 kubelet[2794]: E0527 18:23:23.487168 2794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 18:23:23.487577 kubelet[2794]: W0527 18:23:23.487183 2794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 18:23:23.487577 kubelet[2794]: E0527 18:23:23.487195 2794 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 18:23:23.487863 kubelet[2794]: E0527 18:23:23.487614 2794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 18:23:23.487863 kubelet[2794]: W0527 18:23:23.487627 2794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 18:23:23.487863 kubelet[2794]: E0527 18:23:23.487645 2794 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 18:23:23.488142 kubelet[2794]: E0527 18:23:23.488098 2794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 18:23:23.488142 kubelet[2794]: W0527 18:23:23.488109 2794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 18:23:23.488142 kubelet[2794]: E0527 18:23:23.488128 2794 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 18:23:23.488602 kubelet[2794]: E0527 18:23:23.488584 2794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 18:23:23.488602 kubelet[2794]: W0527 18:23:23.488599 2794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 18:23:23.488782 kubelet[2794]: E0527 18:23:23.488617 2794 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 18:23:23.489108 kubelet[2794]: E0527 18:23:23.489059 2794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 18:23:23.489108 kubelet[2794]: W0527 18:23:23.489074 2794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 18:23:23.489251 kubelet[2794]: E0527 18:23:23.489216 2794 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 18:23:23.489460 kubelet[2794]: E0527 18:23:23.489442 2794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 18:23:23.489460 kubelet[2794]: W0527 18:23:23.489456 2794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 18:23:23.489663 kubelet[2794]: E0527 18:23:23.489473 2794 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 18:23:23.489871 kubelet[2794]: E0527 18:23:23.489860 2794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 18:23:23.489997 kubelet[2794]: W0527 18:23:23.489984 2794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 18:23:23.490209 kubelet[2794]: E0527 18:23:23.490115 2794 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 18:23:23.490469 kubelet[2794]: E0527 18:23:23.490442 2794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 18:23:23.490747 kubelet[2794]: W0527 18:23:23.490594 2794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 18:23:23.490975 kubelet[2794]: E0527 18:23:23.490959 2794 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 18:23:23.491533 kubelet[2794]: E0527 18:23:23.491477 2794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 18:23:23.491533 kubelet[2794]: W0527 18:23:23.491518 2794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 18:23:23.491971 kubelet[2794]: E0527 18:23:23.491889 2794 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 18:23:23.492860 kubelet[2794]: E0527 18:23:23.492414 2794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 18:23:23.492860 kubelet[2794]: W0527 18:23:23.492429 2794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 18:23:23.492860 kubelet[2794]: E0527 18:23:23.492499 2794 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 18:23:23.493289 kubelet[2794]: E0527 18:23:23.493247 2794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 18:23:23.493289 kubelet[2794]: W0527 18:23:23.493259 2794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 18:23:23.493656 kubelet[2794]: E0527 18:23:23.493630 2794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 18:23:23.493796 kubelet[2794]: E0527 18:23:23.493630 2794 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 18:23:23.494253 kubelet[2794]: W0527 18:23:23.493864 2794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 18:23:23.494253 kubelet[2794]: E0527 18:23:23.494188 2794 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 18:23:23.494514 kubelet[2794]: E0527 18:23:23.494486 2794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 18:23:23.494514 kubelet[2794]: W0527 18:23:23.494499 2794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 18:23:23.494976 kubelet[2794]: E0527 18:23:23.494812 2794 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 18:23:23.495176 kubelet[2794]: E0527 18:23:23.495125 2794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 18:23:23.495176 kubelet[2794]: W0527 18:23:23.495138 2794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 18:23:23.495434 kubelet[2794]: E0527 18:23:23.495366 2794 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 18:23:23.495587 kubelet[2794]: E0527 18:23:23.495573 2794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 18:23:23.495696 kubelet[2794]: W0527 18:23:23.495665 2794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 18:23:23.495975 kubelet[2794]: E0527 18:23:23.495960 2794 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 18:23:23.496104 kubelet[2794]: E0527 18:23:23.496093 2794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 18:23:23.496248 kubelet[2794]: W0527 18:23:23.496155 2794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 18:23:23.496439 kubelet[2794]: E0527 18:23:23.496427 2794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 18:23:23.496591 kubelet[2794]: W0527 18:23:23.496522 2794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 18:23:23.496865 kubelet[2794]: E0527 18:23:23.496787 2794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 18:23:23.496865 kubelet[2794]: W0527 18:23:23.496801 2794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 18:23:23.496865 kubelet[2794]: E0527 18:23:23.496851 2794 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 18:23:23.497073 kubelet[2794]: E0527 18:23:23.497059 2794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 18:23:23.497141 kubelet[2794]: W0527 18:23:23.497130 2794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 18:23:23.497298 kubelet[2794]: E0527 18:23:23.497206 2794 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 18:23:23.497442 kubelet[2794]: E0527 18:23:23.497429 2794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 18:23:23.497622 kubelet[2794]: W0527 18:23:23.497514 2794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 18:23:23.497622 kubelet[2794]: E0527 18:23:23.497531 2794 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 18:23:23.497622 kubelet[2794]: E0527 18:23:23.497558 2794 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 18:23:23.497622 kubelet[2794]: E0527 18:23:23.497608 2794 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 18:23:23.498116 kubelet[2794]: E0527 18:23:23.497889 2794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 18:23:23.498116 kubelet[2794]: W0527 18:23:23.497902 2794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 18:23:23.498116 kubelet[2794]: E0527 18:23:23.497912 2794 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 18:23:23.498334 kubelet[2794]: E0527 18:23:23.498320 2794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 18:23:23.498424 kubelet[2794]: W0527 18:23:23.498410 2794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 18:23:23.498556 kubelet[2794]: E0527 18:23:23.498503 2794 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 18:23:23.514272 kubelet[2794]: E0527 18:23:23.514229 2794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 18:23:23.514272 kubelet[2794]: W0527 18:23:23.514257 2794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 18:23:23.515435 kubelet[2794]: E0527 18:23:23.514284 2794 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 18:23:23.587873 containerd[1585]: time="2025-05-27T18:23:23.587784522Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-v9nnf,Uid:e97d7a3a-9e8c-4e6f-b6e0-2e5fd19ae3c4,Namespace:calico-system,Attempt:0,} returns sandbox id \"87b4dbd1d9bc7324f49c6fb17ba520db9816fe166689b6f094da03afb934ce4b\"" May 27 18:23:23.592173 containerd[1585]: time="2025-05-27T18:23:23.592001302Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\"" May 27 18:23:23.593431 kubelet[2794]: E0527 18:23:23.593394 2794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 18:23:23.593431 kubelet[2794]: W0527 18:23:23.593421 2794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 18:23:23.593559 kubelet[2794]: E0527 18:23:23.593445 2794 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 18:23:23.694889 kubelet[2794]: E0527 18:23:23.694418 2794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 18:23:23.694889 kubelet[2794]: W0527 18:23:23.694438 2794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 18:23:23.694889 kubelet[2794]: E0527 18:23:23.694459 2794 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 18:23:23.797382 kubelet[2794]: E0527 18:23:23.797146 2794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 18:23:23.797382 kubelet[2794]: W0527 18:23:23.797202 2794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 18:23:23.797382 kubelet[2794]: E0527 18:23:23.797244 2794 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 18:23:23.860242 kubelet[2794]: E0527 18:23:23.859839 2794 secret.go:189] Couldn't get secret calico-system/typha-certs: failed to sync secret cache: timed out waiting for the condition May 27 18:23:23.860242 kubelet[2794]: E0527 18:23:23.860175 2794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a1711128-3c39-4d16-a611-432e2772467e-typha-certs podName:a1711128-3c39-4d16-a611-432e2772467e nodeName:}" failed. No retries permitted until 2025-05-27 18:23:24.360051251 +0000 UTC m=+22.987804356 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "typha-certs" (UniqueName: "kubernetes.io/secret/a1711128-3c39-4d16-a611-432e2772467e-typha-certs") pod "calico-typha-b4ff97cd9-46l8k" (UID: "a1711128-3c39-4d16-a611-432e2772467e") : failed to sync secret cache: timed out waiting for the condition May 27 18:23:23.899633 kubelet[2794]: E0527 18:23:23.899305 2794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 18:23:23.899633 kubelet[2794]: W0527 18:23:23.899349 2794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 18:23:23.899633 kubelet[2794]: E0527 18:23:23.899396 2794 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 18:23:24.002312 kubelet[2794]: E0527 18:23:24.002088 2794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 18:23:24.002312 kubelet[2794]: W0527 18:23:24.002254 2794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 18:23:24.003487 kubelet[2794]: E0527 18:23:24.002523 2794 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 18:23:24.104607 kubelet[2794]: E0527 18:23:24.104483 2794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 18:23:24.104760 kubelet[2794]: W0527 18:23:24.104598 2794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 18:23:24.104792 kubelet[2794]: E0527 18:23:24.104678 2794 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 18:23:24.207919 kubelet[2794]: E0527 18:23:24.207630 2794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 18:23:24.207919 kubelet[2794]: W0527 18:23:24.207805 2794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 18:23:24.207919 kubelet[2794]: E0527 18:23:24.207942 2794 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 18:23:24.312046 kubelet[2794]: E0527 18:23:24.311630 2794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 18:23:24.313801 kubelet[2794]: W0527 18:23:24.311932 2794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 18:23:24.314261 kubelet[2794]: E0527 18:23:24.313991 2794 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 18:23:24.417771 kubelet[2794]: E0527 18:23:24.417544 2794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 18:23:24.417771 kubelet[2794]: W0527 18:23:24.417732 2794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 18:23:24.418210 kubelet[2794]: E0527 18:23:24.417894 2794 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 18:23:24.420049 kubelet[2794]: E0527 18:23:24.419960 2794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 18:23:24.420049 kubelet[2794]: W0527 18:23:24.420010 2794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 18:23:24.420376 kubelet[2794]: E0527 18:23:24.420061 2794 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 18:23:24.421743 kubelet[2794]: E0527 18:23:24.420854 2794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 18:23:24.421743 kubelet[2794]: W0527 18:23:24.420944 2794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 18:23:24.421743 kubelet[2794]: E0527 18:23:24.420971 2794 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 18:23:24.421743 kubelet[2794]: E0527 18:23:24.421515 2794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 18:23:24.421743 kubelet[2794]: W0527 18:23:24.421539 2794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 18:23:24.421743 kubelet[2794]: E0527 18:23:24.421604 2794 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 18:23:24.422412 kubelet[2794]: E0527 18:23:24.422353 2794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 18:23:24.422552 kubelet[2794]: W0527 18:23:24.422438 2794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 18:23:24.422552 kubelet[2794]: E0527 18:23:24.422468 2794 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 18:23:24.444035 kubelet[2794]: E0527 18:23:24.443833 2794 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 18:23:24.444035 kubelet[2794]: W0527 18:23:24.443897 2794 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 18:23:24.444035 kubelet[2794]: E0527 18:23:24.443936 2794 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 18:23:24.534446 kubelet[2794]: E0527 18:23:24.534372 2794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rmtkq" podUID="5f8d96ad-baa3-44d5-afb2-3ebebf965cf1" May 27 18:23:24.537671 containerd[1585]: time="2025-05-27T18:23:24.537593023Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-b4ff97cd9-46l8k,Uid:a1711128-3c39-4d16-a611-432e2772467e,Namespace:calico-system,Attempt:0,}" May 27 18:23:24.791996 containerd[1585]: time="2025-05-27T18:23:24.791658804Z" level=info msg="connecting to shim 955024f3ddb17865c6d4d650867b81656b4c9d7e534fb77169656d6e73c1a85e" address="unix:///run/containerd/s/a548773e1940da09b3a59b47300ee82e4c7da1c7bbc2ef332ac0ba50ade37e10" namespace=k8s.io protocol=ttrpc version=3 May 27 18:23:24.834930 systemd[1]: Started cri-containerd-955024f3ddb17865c6d4d650867b81656b4c9d7e534fb77169656d6e73c1a85e.scope - libcontainer container 955024f3ddb17865c6d4d650867b81656b4c9d7e534fb77169656d6e73c1a85e. May 27 18:23:24.905632 containerd[1585]: time="2025-05-27T18:23:24.905565453Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-b4ff97cd9-46l8k,Uid:a1711128-3c39-4d16-a611-432e2772467e,Namespace:calico-system,Attempt:0,} returns sandbox id \"955024f3ddb17865c6d4d650867b81656b4c9d7e534fb77169656d6e73c1a85e\"" May 27 18:23:25.669389 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount472133105.mount: Deactivated successfully. May 27 18:23:25.872645 containerd[1585]: time="2025-05-27T18:23:25.872589244Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 18:23:25.874865 containerd[1585]: time="2025-05-27T18:23:25.874801084Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0: active requests=0, bytes read=5934460" May 27 18:23:25.876474 containerd[1585]: time="2025-05-27T18:23:25.876408346Z" level=info msg="ImageCreate event name:\"sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 18:23:25.885371 containerd[1585]: time="2025-05-27T18:23:25.885309863Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:ce76dd87f11d3fd0054c35ad2e0e9f833748d007f77a9bfe859d0ddcb66fcb2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 18:23:25.886586 containerd[1585]: time="2025-05-27T18:23:25.886545887Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" with image id \"sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:ce76dd87f11d3fd0054c35ad2e0e9f833748d007f77a9bfe859d0ddcb66fcb2c\", size \"5934282\" in 2.29447258s" May 27 18:23:25.887239 containerd[1585]: time="2025-05-27T18:23:25.887108826Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" returns image reference \"sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676\"" May 27 18:23:25.889277 containerd[1585]: time="2025-05-27T18:23:25.889237086Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.0\"" May 27 
18:23:25.892166 containerd[1585]: time="2025-05-27T18:23:25.891970276Z" level=info msg="CreateContainer within sandbox \"87b4dbd1d9bc7324f49c6fb17ba520db9816fe166689b6f094da03afb934ce4b\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 27 18:23:25.910445 containerd[1585]: time="2025-05-27T18:23:25.910402632Z" level=info msg="Container f8656a772b1d841043c162ed265b40ef4d6d604c4e1d19b2ab5cc4caaf732075: CDI devices from CRI Config.CDIDevices: []" May 27 18:23:25.916305 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1638201374.mount: Deactivated successfully. May 27 18:23:25.933569 containerd[1585]: time="2025-05-27T18:23:25.933191263Z" level=info msg="CreateContainer within sandbox \"87b4dbd1d9bc7324f49c6fb17ba520db9816fe166689b6f094da03afb934ce4b\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"f8656a772b1d841043c162ed265b40ef4d6d604c4e1d19b2ab5cc4caaf732075\"" May 27 18:23:25.935099 containerd[1585]: time="2025-05-27T18:23:25.935049364Z" level=info msg="StartContainer for \"f8656a772b1d841043c162ed265b40ef4d6d604c4e1d19b2ab5cc4caaf732075\"" May 27 18:23:25.939781 containerd[1585]: time="2025-05-27T18:23:25.939231235Z" level=info msg="connecting to shim f8656a772b1d841043c162ed265b40ef4d6d604c4e1d19b2ab5cc4caaf732075" address="unix:///run/containerd/s/b0e8762f40b13f8580e0de75bf00d06af10e6608e44ebd69e12dfdf1de43ca86" protocol=ttrpc version=3 May 27 18:23:25.977909 systemd[1]: Started cri-containerd-f8656a772b1d841043c162ed265b40ef4d6d604c4e1d19b2ab5cc4caaf732075.scope - libcontainer container f8656a772b1d841043c162ed265b40ef4d6d604c4e1d19b2ab5cc4caaf732075. May 27 18:23:26.031933 containerd[1585]: time="2025-05-27T18:23:26.031831765Z" level=info msg="StartContainer for \"f8656a772b1d841043c162ed265b40ef4d6d604c4e1d19b2ab5cc4caaf732075\" returns successfully" May 27 18:23:26.041477 systemd[1]: cri-containerd-f8656a772b1d841043c162ed265b40ef4d6d604c4e1d19b2ab5cc4caaf732075.scope: Deactivated successfully. May 27 18:23:26.049946 containerd[1585]: time="2025-05-27T18:23:26.049593011Z" level=info msg="received exit event container_id:\"f8656a772b1d841043c162ed265b40ef4d6d604c4e1d19b2ab5cc4caaf732075\" id:\"f8656a772b1d841043c162ed265b40ef4d6d604c4e1d19b2ab5cc4caaf732075\" pid:3401 exited_at:{seconds:1748370206 nanos:48564002}" May 27 18:23:26.050387 containerd[1585]: time="2025-05-27T18:23:26.049648289Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f8656a772b1d841043c162ed265b40ef4d6d604c4e1d19b2ab5cc4caaf732075\" id:\"f8656a772b1d841043c162ed265b40ef4d6d604c4e1d19b2ab5cc4caaf732075\" pid:3401 exited_at:{seconds:1748370206 nanos:48564002}" May 27 18:23:26.534331 kubelet[2794]: E0527 18:23:26.534147 2794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rmtkq" podUID="5f8d96ad-baa3-44d5-afb2-3ebebf965cf1" May 27 18:23:26.579125 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f8656a772b1d841043c162ed265b40ef4d6d604c4e1d19b2ab5cc4caaf732075-rootfs.mount: Deactivated successfully. 
May 27 18:23:28.535462 kubelet[2794]: E0527 18:23:28.534929 2794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rmtkq" podUID="5f8d96ad-baa3-44d5-afb2-3ebebf965cf1" May 27 18:23:29.711982 containerd[1585]: time="2025-05-27T18:23:29.711849988Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 18:23:29.713501 containerd[1585]: time="2025-05-27T18:23:29.713296044Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.0: active requests=0, bytes read=33665828" May 27 18:23:29.714957 containerd[1585]: time="2025-05-27T18:23:29.714918653Z" level=info msg="ImageCreate event name:\"sha256:71be0570e8645ac646675719e0da6ac33a05810991b31aecc303e7add70933be\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 18:23:29.718533 containerd[1585]: time="2025-05-27T18:23:29.718484575Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d282f6c773c4631b9dc8379eb093c54ca34c7728d55d6509cb45da5e1f5baf8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 18:23:29.719700 containerd[1585]: time="2025-05-27T18:23:29.719637976Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.0\" with image id \"sha256:71be0570e8645ac646675719e0da6ac33a05810991b31aecc303e7add70933be\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d282f6c773c4631b9dc8379eb093c54ca34c7728d55d6509cb45da5e1f5baf8f\", size \"35158523\" in 3.830177229s" May 27 18:23:29.719812 containerd[1585]: time="2025-05-27T18:23:29.719793884Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.0\" returns image reference \"sha256:71be0570e8645ac646675719e0da6ac33a05810991b31aecc303e7add70933be\"" May 27 18:23:29.721907 containerd[1585]: time="2025-05-27T18:23:29.721826967Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.0\"" May 27 18:23:29.746374 containerd[1585]: time="2025-05-27T18:23:29.746288614Z" level=info msg="CreateContainer within sandbox \"955024f3ddb17865c6d4d650867b81656b4c9d7e534fb77169656d6e73c1a85e\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 27 18:23:29.762262 containerd[1585]: time="2025-05-27T18:23:29.761741289Z" level=info msg="Container f8ff9000ea20ca0de1f15eecce70409e5b7bb8bb9ace63d3a67f4d8d3ab6b11d: CDI devices from CRI Config.CDIDevices: []" May 27 18:23:29.775135 containerd[1585]: time="2025-05-27T18:23:29.775087107Z" level=info msg="CreateContainer within sandbox \"955024f3ddb17865c6d4d650867b81656b4c9d7e534fb77169656d6e73c1a85e\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"f8ff9000ea20ca0de1f15eecce70409e5b7bb8bb9ace63d3a67f4d8d3ab6b11d\"" May 27 18:23:29.777667 containerd[1585]: time="2025-05-27T18:23:29.777634703Z" level=info msg="StartContainer for \"f8ff9000ea20ca0de1f15eecce70409e5b7bb8bb9ace63d3a67f4d8d3ab6b11d\"" May 27 18:23:29.779021 containerd[1585]: time="2025-05-27T18:23:29.778966720Z" level=info msg="connecting to shim f8ff9000ea20ca0de1f15eecce70409e5b7bb8bb9ace63d3a67f4d8d3ab6b11d" address="unix:///run/containerd/s/a548773e1940da09b3a59b47300ee82e4c7da1c7bbc2ef332ac0ba50ade37e10" protocol=ttrpc version=3 May 27 18:23:29.807890 systemd[1]: Started 
cri-containerd-f8ff9000ea20ca0de1f15eecce70409e5b7bb8bb9ace63d3a67f4d8d3ab6b11d.scope - libcontainer container f8ff9000ea20ca0de1f15eecce70409e5b7bb8bb9ace63d3a67f4d8d3ab6b11d. May 27 18:23:29.889061 containerd[1585]: time="2025-05-27T18:23:29.889016204Z" level=info msg="StartContainer for \"f8ff9000ea20ca0de1f15eecce70409e5b7bb8bb9ace63d3a67f4d8d3ab6b11d\" returns successfully" May 27 18:23:30.536524 kubelet[2794]: E0527 18:23:30.536238 2794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rmtkq" podUID="5f8d96ad-baa3-44d5-afb2-3ebebf965cf1" May 27 18:23:31.766203 kubelet[2794]: I0527 18:23:31.766132 2794 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 27 18:23:32.534884 kubelet[2794]: E0527 18:23:32.534824 2794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rmtkq" podUID="5f8d96ad-baa3-44d5-afb2-3ebebf965cf1" May 27 18:23:34.536444 kubelet[2794]: E0527 18:23:34.535477 2794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rmtkq" podUID="5f8d96ad-baa3-44d5-afb2-3ebebf965cf1" May 27 18:23:34.974881 containerd[1585]: time="2025-05-27T18:23:34.974271006Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 18:23:34.977351 containerd[1585]: time="2025-05-27T18:23:34.977286805Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.0: active requests=0, bytes read=70300568" May 27 18:23:34.978865 containerd[1585]: time="2025-05-27T18:23:34.978833514Z" level=info msg="ImageCreate event name:\"sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 18:23:34.982903 containerd[1585]: time="2025-05-27T18:23:34.982872358Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:3dd06656abdc03fbd51782d5f6fe4d70e6825a1c0c5bce2a165bbd2ff9e0f7df\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 18:23:34.984053 containerd[1585]: time="2025-05-27T18:23:34.983812764Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.0\" with image id \"sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:3dd06656abdc03fbd51782d5f6fe4d70e6825a1c0c5bce2a165bbd2ff9e0f7df\", size \"71793271\" in 5.261762528s" May 27 18:23:34.984053 containerd[1585]: time="2025-05-27T18:23:34.983885504Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.0\" returns image reference \"sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185\"" May 27 18:23:34.989546 containerd[1585]: time="2025-05-27T18:23:34.989496061Z" level=info msg="CreateContainer within sandbox \"87b4dbd1d9bc7324f49c6fb17ba520db9816fe166689b6f094da03afb934ce4b\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 
27 18:23:35.009707 containerd[1585]: time="2025-05-27T18:23:35.008064716Z" level=info msg="Container 2de708894997a9c926dfbe1773ed58663ad7420777914339b03be45e437f8d1c: CDI devices from CRI Config.CDIDevices: []" May 27 18:23:35.014283 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2057339444.mount: Deactivated successfully. May 27 18:23:35.033145 containerd[1585]: time="2025-05-27T18:23:35.033079464Z" level=info msg="CreateContainer within sandbox \"87b4dbd1d9bc7324f49c6fb17ba520db9816fe166689b6f094da03afb934ce4b\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"2de708894997a9c926dfbe1773ed58663ad7420777914339b03be45e437f8d1c\"" May 27 18:23:35.035184 containerd[1585]: time="2025-05-27T18:23:35.034158078Z" level=info msg="StartContainer for \"2de708894997a9c926dfbe1773ed58663ad7420777914339b03be45e437f8d1c\"" May 27 18:23:35.036781 containerd[1585]: time="2025-05-27T18:23:35.036741757Z" level=info msg="connecting to shim 2de708894997a9c926dfbe1773ed58663ad7420777914339b03be45e437f8d1c" address="unix:///run/containerd/s/b0e8762f40b13f8580e0de75bf00d06af10e6608e44ebd69e12dfdf1de43ca86" protocol=ttrpc version=3 May 27 18:23:35.072861 systemd[1]: Started cri-containerd-2de708894997a9c926dfbe1773ed58663ad7420777914339b03be45e437f8d1c.scope - libcontainer container 2de708894997a9c926dfbe1773ed58663ad7420777914339b03be45e437f8d1c. May 27 18:23:35.126413 containerd[1585]: time="2025-05-27T18:23:35.126299484Z" level=info msg="StartContainer for \"2de708894997a9c926dfbe1773ed58663ad7420777914339b03be45e437f8d1c\" returns successfully" May 27 18:23:35.151733 kubelet[2794]: I0527 18:23:35.150080 2794 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 27 18:23:35.180796 kubelet[2794]: I0527 18:23:35.180415 2794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-b4ff97cd9-46l8k" podStartSLOduration=8.366763056 podStartE2EDuration="13.180325194s" podCreationTimestamp="2025-05-27 18:23:22 +0000 UTC" firstStartedPulling="2025-05-27 18:23:24.90763845 +0000 UTC m=+23.535391505" lastFinishedPulling="2025-05-27 18:23:29.721200578 +0000 UTC m=+28.348953643" observedRunningTime="2025-05-27 18:23:30.807432228 +0000 UTC m=+29.435185333" watchObservedRunningTime="2025-05-27 18:23:35.180325194 +0000 UTC m=+33.808078249" May 27 18:23:36.535227 kubelet[2794]: E0527 18:23:36.535056 2794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rmtkq" podUID="5f8d96ad-baa3-44d5-afb2-3ebebf965cf1" May 27 18:23:36.823355 systemd[1]: cri-containerd-2de708894997a9c926dfbe1773ed58663ad7420777914339b03be45e437f8d1c.scope: Deactivated successfully. May 27 18:23:36.824426 systemd[1]: cri-containerd-2de708894997a9c926dfbe1773ed58663ad7420777914339b03be45e437f8d1c.scope: Consumed 1.053s CPU time, 191.3M memory peak, 170.9M written to disk. 
May 27 18:23:36.827676 containerd[1585]: time="2025-05-27T18:23:36.827443423Z" level=info msg="received exit event container_id:\"2de708894997a9c926dfbe1773ed58663ad7420777914339b03be45e437f8d1c\" id:\"2de708894997a9c926dfbe1773ed58663ad7420777914339b03be45e437f8d1c\" pid:3501 exited_at:{seconds:1748370216 nanos:826935619}" May 27 18:23:36.830046 containerd[1585]: time="2025-05-27T18:23:36.829941611Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2de708894997a9c926dfbe1773ed58663ad7420777914339b03be45e437f8d1c\" id:\"2de708894997a9c926dfbe1773ed58663ad7420777914339b03be45e437f8d1c\" pid:3501 exited_at:{seconds:1748370216 nanos:826935619}" May 27 18:23:36.869371 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2de708894997a9c926dfbe1773ed58663ad7420777914339b03be45e437f8d1c-rootfs.mount: Deactivated successfully. May 27 18:23:36.922262 kubelet[2794]: I0527 18:23:36.922199 2794 kubelet_node_status.go:488] "Fast updating node status as it just became ready" May 27 18:23:37.424661 systemd[1]: Created slice kubepods-burstable-pod9ce9100c_285e_419f_a320_34fbd743d450.slice - libcontainer container kubepods-burstable-pod9ce9100c_285e_419f_a320_34fbd743d450.slice. May 27 18:23:37.437403 kubelet[2794]: I0527 18:23:37.433373 2794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9q5c\" (UniqueName: \"kubernetes.io/projected/9ce9100c-285e-419f-a320-34fbd743d450-kube-api-access-f9q5c\") pod \"coredns-7c65d6cfc9-vbq8l\" (UID: \"9ce9100c-285e-419f-a320-34fbd743d450\") " pod="kube-system/coredns-7c65d6cfc9-vbq8l" May 27 18:23:37.437403 kubelet[2794]: I0527 18:23:37.433567 2794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9ce9100c-285e-419f-a320-34fbd743d450-config-volume\") pod \"coredns-7c65d6cfc9-vbq8l\" (UID: \"9ce9100c-285e-419f-a320-34fbd743d450\") " pod="kube-system/coredns-7c65d6cfc9-vbq8l" May 27 18:23:37.493234 systemd[1]: Created slice kubepods-besteffort-podbc6463ca_f58d_434c_b3ea_ef33c2ab1e13.slice - libcontainer container kubepods-besteffort-podbc6463ca_f58d_434c_b3ea_ef33c2ab1e13.slice. May 27 18:23:37.502745 systemd[1]: Created slice kubepods-besteffort-pod816977dc_b340_4030_b8c8_4987b44f32c4.slice - libcontainer container kubepods-besteffort-pod816977dc_b340_4030_b8c8_4987b44f32c4.slice. 
May 27 18:23:37.536115 kubelet[2794]: W0527 18:23:37.504881 2794 reflector.go:561] object-"calico-system"/"whisker-ca-bundle": failed to list *v1.ConfigMap: configmaps "whisker-ca-bundle" is forbidden: User "system:node:ci-4344-0-0-3-6dd1c807ec.novalocal" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4344-0-0-3-6dd1c807ec.novalocal' and this object May 27 18:23:37.536115 kubelet[2794]: E0527 18:23:37.505644 2794 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"whisker-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"whisker-ca-bundle\" is forbidden: User \"system:node:ci-4344-0-0-3-6dd1c807ec.novalocal\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4344-0-0-3-6dd1c807ec.novalocal' and this object" logger="UnhandledError" May 27 18:23:37.513738 systemd[1]: Created slice kubepods-burstable-podf313ef21_ca0c_4dfd_bf19_451add114318.slice - libcontainer container kubepods-burstable-podf313ef21_ca0c_4dfd_bf19_451add114318.slice. May 27 18:23:37.524594 systemd[1]: Created slice kubepods-besteffort-podbbd4fd77_1837_421d_a4cd_17332246cc0a.slice - libcontainer container kubepods-besteffort-podbbd4fd77_1837_421d_a4cd_17332246cc0a.slice. May 27 18:23:37.533015 systemd[1]: Created slice kubepods-besteffort-podd213ce18_dfff_40dc_8a57_832e1856060d.slice - libcontainer container kubepods-besteffort-podd213ce18_dfff_40dc_8a57_832e1856060d.slice. May 27 18:23:37.539624 kubelet[2794]: I0527 18:23:37.539554 2794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwdjd\" (UniqueName: \"kubernetes.io/projected/d213ce18-dfff-40dc-8a57-832e1856060d-kube-api-access-nwdjd\") pod \"whisker-b774fd5f7-sdqww\" (UID: \"d213ce18-dfff-40dc-8a57-832e1856060d\") " pod="calico-system/whisker-b774fd5f7-sdqww" May 27 18:23:37.539624 kubelet[2794]: I0527 18:23:37.539605 2794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3e2bb785-ea28-432f-9ef8-e65bb1551704-tigera-ca-bundle\") pod \"calico-kube-controllers-59576795c9-hd85v\" (UID: \"3e2bb785-ea28-432f-9ef8-e65bb1551704\") " pod="calico-system/calico-kube-controllers-59576795c9-hd85v" May 27 18:23:37.539785 kubelet[2794]: I0527 18:23:37.539633 2794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/816977dc-b340-4030-b8c8-4987b44f32c4-calico-apiserver-certs\") pod \"calico-apiserver-6dfc469868-jbh4w\" (UID: \"816977dc-b340-4030-b8c8-4987b44f32c4\") " pod="calico-apiserver/calico-apiserver-6dfc469868-jbh4w" May 27 18:23:37.539785 kubelet[2794]: I0527 18:23:37.539698 2794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/bbd4fd77-1837-421d-a4cd-17332246cc0a-goldmane-key-pair\") pod \"goldmane-8f77d7b6c-pdb49\" (UID: \"bbd4fd77-1837-421d-a4cd-17332246cc0a\") " pod="calico-system/goldmane-8f77d7b6c-pdb49" May 27 18:23:37.539785 kubelet[2794]: I0527 18:23:37.539775 2794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qp7p5\" (UniqueName: \"kubernetes.io/projected/3e2bb785-ea28-432f-9ef8-e65bb1551704-kube-api-access-qp7p5\") pod 
\"calico-kube-controllers-59576795c9-hd85v\" (UID: \"3e2bb785-ea28-432f-9ef8-e65bb1551704\") " pod="calico-system/calico-kube-controllers-59576795c9-hd85v" May 27 18:23:37.539892 kubelet[2794]: I0527 18:23:37.539800 2794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d213ce18-dfff-40dc-8a57-832e1856060d-whisker-ca-bundle\") pod \"whisker-b774fd5f7-sdqww\" (UID: \"d213ce18-dfff-40dc-8a57-832e1856060d\") " pod="calico-system/whisker-b774fd5f7-sdqww" May 27 18:23:37.539892 kubelet[2794]: I0527 18:23:37.539822 2794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6w2rg\" (UniqueName: \"kubernetes.io/projected/f313ef21-ca0c-4dfd-bf19-451add114318-kube-api-access-6w2rg\") pod \"coredns-7c65d6cfc9-rg64c\" (UID: \"f313ef21-ca0c-4dfd-bf19-451add114318\") " pod="kube-system/coredns-7c65d6cfc9-rg64c" May 27 18:23:37.539892 kubelet[2794]: I0527 18:23:37.539876 2794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bbd4fd77-1837-421d-a4cd-17332246cc0a-config\") pod \"goldmane-8f77d7b6c-pdb49\" (UID: \"bbd4fd77-1837-421d-a4cd-17332246cc0a\") " pod="calico-system/goldmane-8f77d7b6c-pdb49" May 27 18:23:37.540010 kubelet[2794]: I0527 18:23:37.539920 2794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r877f\" (UniqueName: \"kubernetes.io/projected/816977dc-b340-4030-b8c8-4987b44f32c4-kube-api-access-r877f\") pod \"calico-apiserver-6dfc469868-jbh4w\" (UID: \"816977dc-b340-4030-b8c8-4987b44f32c4\") " pod="calico-apiserver/calico-apiserver-6dfc469868-jbh4w" May 27 18:23:37.540010 kubelet[2794]: I0527 18:23:37.539940 2794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bbd4fd77-1837-421d-a4cd-17332246cc0a-goldmane-ca-bundle\") pod \"goldmane-8f77d7b6c-pdb49\" (UID: \"bbd4fd77-1837-421d-a4cd-17332246cc0a\") " pod="calico-system/goldmane-8f77d7b6c-pdb49" May 27 18:23:37.540010 kubelet[2794]: I0527 18:23:37.539961 2794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f313ef21-ca0c-4dfd-bf19-451add114318-config-volume\") pod \"coredns-7c65d6cfc9-rg64c\" (UID: \"f313ef21-ca0c-4dfd-bf19-451add114318\") " pod="kube-system/coredns-7c65d6cfc9-rg64c" May 27 18:23:37.540010 kubelet[2794]: I0527 18:23:37.539991 2794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/bc6463ca-f58d-434c-b3ea-ef33c2ab1e13-calico-apiserver-certs\") pod \"calico-apiserver-6dfc469868-vsvgq\" (UID: \"bc6463ca-f58d-434c-b3ea-ef33c2ab1e13\") " pod="calico-apiserver/calico-apiserver-6dfc469868-vsvgq" May 27 18:23:37.540164 kubelet[2794]: I0527 18:23:37.540010 2794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nztbb\" (UniqueName: \"kubernetes.io/projected/bc6463ca-f58d-434c-b3ea-ef33c2ab1e13-kube-api-access-nztbb\") pod \"calico-apiserver-6dfc469868-vsvgq\" (UID: \"bc6463ca-f58d-434c-b3ea-ef33c2ab1e13\") " pod="calico-apiserver/calico-apiserver-6dfc469868-vsvgq" May 27 18:23:37.540164 kubelet[2794]: I0527 18:23:37.540030 2794 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k62cm\" (UniqueName: \"kubernetes.io/projected/bbd4fd77-1837-421d-a4cd-17332246cc0a-kube-api-access-k62cm\") pod \"goldmane-8f77d7b6c-pdb49\" (UID: \"bbd4fd77-1837-421d-a4cd-17332246cc0a\") " pod="calico-system/goldmane-8f77d7b6c-pdb49" May 27 18:23:37.540164 kubelet[2794]: I0527 18:23:37.540082 2794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d213ce18-dfff-40dc-8a57-832e1856060d-whisker-backend-key-pair\") pod \"whisker-b774fd5f7-sdqww\" (UID: \"d213ce18-dfff-40dc-8a57-832e1856060d\") " pod="calico-system/whisker-b774fd5f7-sdqww" May 27 18:23:37.550944 systemd[1]: Created slice kubepods-besteffort-pod3e2bb785_ea28_432f_9ef8_e65bb1551704.slice - libcontainer container kubepods-besteffort-pod3e2bb785_ea28_432f_9ef8_e65bb1551704.slice. May 27 18:23:37.814918 containerd[1585]: time="2025-05-27T18:23:37.814337540Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-vbq8l,Uid:9ce9100c-285e-419f-a320-34fbd743d450,Namespace:kube-system,Attempt:0,}" May 27 18:23:38.016065 containerd[1585]: time="2025-05-27T18:23:38.015988595Z" level=error msg="Failed to destroy network for sandbox \"0a1707b71a5650c7ecb89ab926b4f248e91cb6ea6bc3c777f4e24efe5ab61b58\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 18:23:38.020146 containerd[1585]: time="2025-05-27T18:23:38.020076649Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-vbq8l,Uid:9ce9100c-285e-419f-a320-34fbd743d450,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a1707b71a5650c7ecb89ab926b4f248e91cb6ea6bc3c777f4e24efe5ab61b58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 18:23:38.021601 kubelet[2794]: E0527 18:23:38.020784 2794 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a1707b71a5650c7ecb89ab926b4f248e91cb6ea6bc3c777f4e24efe5ab61b58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 18:23:38.021892 kubelet[2794]: E0527 18:23:38.021830 2794 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a1707b71a5650c7ecb89ab926b4f248e91cb6ea6bc3c777f4e24efe5ab61b58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-vbq8l" May 27 18:23:38.022023 kubelet[2794]: E0527 18:23:38.021989 2794 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a1707b71a5650c7ecb89ab926b4f248e91cb6ea6bc3c777f4e24efe5ab61b58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-vbq8l" May 27 18:23:38.022439 kubelet[2794]: E0527 18:23:38.022203 2794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-vbq8l_kube-system(9ce9100c-285e-419f-a320-34fbd743d450)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-vbq8l_kube-system(9ce9100c-285e-419f-a320-34fbd743d450)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0a1707b71a5650c7ecb89ab926b4f248e91cb6ea6bc3c777f4e24efe5ab61b58\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-vbq8l" podUID="9ce9100c-285e-419f-a320-34fbd743d450" May 27 18:23:38.137621 containerd[1585]: time="2025-05-27T18:23:38.137398202Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dfc469868-jbh4w,Uid:816977dc-b340-4030-b8c8-4987b44f32c4,Namespace:calico-apiserver,Attempt:0,}" May 27 18:23:38.139102 containerd[1585]: time="2025-05-27T18:23:38.138210624Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-rg64c,Uid:f313ef21-ca0c-4dfd-bf19-451add114318,Namespace:kube-system,Attempt:0,}" May 27 18:23:38.140240 containerd[1585]: time="2025-05-27T18:23:38.139129905Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-8f77d7b6c-pdb49,Uid:bbd4fd77-1837-421d-a4cd-17332246cc0a,Namespace:calico-system,Attempt:0,}" May 27 18:23:38.141547 containerd[1585]: time="2025-05-27T18:23:38.141487748Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dfc469868-vsvgq,Uid:bc6463ca-f58d-434c-b3ea-ef33c2ab1e13,Namespace:calico-apiserver,Attempt:0,}" May 27 18:23:38.158355 containerd[1585]: time="2025-05-27T18:23:38.158273198Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-59576795c9-hd85v,Uid:3e2bb785-ea28-432f-9ef8-e65bb1551704,Namespace:calico-system,Attempt:0,}" May 27 18:23:38.334668 containerd[1585]: time="2025-05-27T18:23:38.334528907Z" level=error msg="Failed to destroy network for sandbox \"9ce98a3f8217e8eda7b4122e163798068949c101c3414a8529b4c3c52401c3ef\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 18:23:38.337287 containerd[1585]: time="2025-05-27T18:23:38.337150971Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dfc469868-vsvgq,Uid:bc6463ca-f58d-434c-b3ea-ef33c2ab1e13,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ce98a3f8217e8eda7b4122e163798068949c101c3414a8529b4c3c52401c3ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 18:23:38.337867 kubelet[2794]: E0527 18:23:38.337812 2794 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ce98a3f8217e8eda7b4122e163798068949c101c3414a8529b4c3c52401c3ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 18:23:38.337955 
kubelet[2794]: E0527 18:23:38.337898 2794 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ce98a3f8217e8eda7b4122e163798068949c101c3414a8529b4c3c52401c3ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6dfc469868-vsvgq" May 27 18:23:38.337955 kubelet[2794]: E0527 18:23:38.337922 2794 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ce98a3f8217e8eda7b4122e163798068949c101c3414a8529b4c3c52401c3ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6dfc469868-vsvgq" May 27 18:23:38.338157 kubelet[2794]: E0527 18:23:38.337986 2794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6dfc469868-vsvgq_calico-apiserver(bc6463ca-f58d-434c-b3ea-ef33c2ab1e13)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6dfc469868-vsvgq_calico-apiserver(bc6463ca-f58d-434c-b3ea-ef33c2ab1e13)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9ce98a3f8217e8eda7b4122e163798068949c101c3414a8529b4c3c52401c3ef\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6dfc469868-vsvgq" podUID="bc6463ca-f58d-434c-b3ea-ef33c2ab1e13" May 27 18:23:38.369724 containerd[1585]: time="2025-05-27T18:23:38.369620657Z" level=error msg="Failed to destroy network for sandbox \"be144a1264ab7024c142b3cd52c929f411581b4741d6762205cfa8bc55a8e859\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 18:23:38.384512 containerd[1585]: time="2025-05-27T18:23:38.374143840Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-8f77d7b6c-pdb49,Uid:bbd4fd77-1837-421d-a4cd-17332246cc0a,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"be144a1264ab7024c142b3cd52c929f411581b4741d6762205cfa8bc55a8e859\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 18:23:38.384751 kubelet[2794]: E0527 18:23:38.383646 2794 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"be144a1264ab7024c142b3cd52c929f411581b4741d6762205cfa8bc55a8e859\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 18:23:38.384751 kubelet[2794]: E0527 18:23:38.383859 2794 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"be144a1264ab7024c142b3cd52c929f411581b4741d6762205cfa8bc55a8e859\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-8f77d7b6c-pdb49" May 27 18:23:38.386159 kubelet[2794]: E0527 18:23:38.384949 2794 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"be144a1264ab7024c142b3cd52c929f411581b4741d6762205cfa8bc55a8e859\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-8f77d7b6c-pdb49" May 27 18:23:38.386159 kubelet[2794]: E0527 18:23:38.385745 2794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-8f77d7b6c-pdb49_calico-system(bbd4fd77-1837-421d-a4cd-17332246cc0a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-8f77d7b6c-pdb49_calico-system(bbd4fd77-1837-421d-a4cd-17332246cc0a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"be144a1264ab7024c142b3cd52c929f411581b4741d6762205cfa8bc55a8e859\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-8f77d7b6c-pdb49" podUID="bbd4fd77-1837-421d-a4cd-17332246cc0a" May 27 18:23:38.394539 containerd[1585]: time="2025-05-27T18:23:38.393892772Z" level=error msg="Failed to destroy network for sandbox \"e270924aae0ad01472f21d849b5b05714a55a9311a2639f724aa1184698102bf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 18:23:38.400409 containerd[1585]: time="2025-05-27T18:23:38.400207520Z" level=error msg="Failed to destroy network for sandbox \"2e34edf217b356794618e836ebe7f79af779c4476d7980421a85b57bff9b080b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 18:23:38.401377 containerd[1585]: time="2025-05-27T18:23:38.401148756Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dfc469868-jbh4w,Uid:816977dc-b340-4030-b8c8-4987b44f32c4,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e270924aae0ad01472f21d849b5b05714a55a9311a2639f724aa1184698102bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 18:23:38.401773 containerd[1585]: time="2025-05-27T18:23:38.401737180Z" level=error msg="Failed to destroy network for sandbox \"93ba98c815c68bd4e005de64bb229d719a325b139ce31eedb42be5c37c4c28d6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 18:23:38.402030 kubelet[2794]: E0527 18:23:38.401913 2794 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e270924aae0ad01472f21d849b5b05714a55a9311a2639f724aa1184698102bf\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 18:23:38.402171 kubelet[2794]: E0527 18:23:38.402148 2794 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e270924aae0ad01472f21d849b5b05714a55a9311a2639f724aa1184698102bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6dfc469868-jbh4w" May 27 18:23:38.402171 kubelet[2794]: E0527 18:23:38.402243 2794 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e270924aae0ad01472f21d849b5b05714a55a9311a2639f724aa1184698102bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6dfc469868-jbh4w" May 27 18:23:38.402472 kubelet[2794]: E0527 18:23:38.402444 2794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6dfc469868-jbh4w_calico-apiserver(816977dc-b340-4030-b8c8-4987b44f32c4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6dfc469868-jbh4w_calico-apiserver(816977dc-b340-4030-b8c8-4987b44f32c4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e270924aae0ad01472f21d849b5b05714a55a9311a2639f724aa1184698102bf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6dfc469868-jbh4w" podUID="816977dc-b340-4030-b8c8-4987b44f32c4" May 27 18:23:38.403956 containerd[1585]: time="2025-05-27T18:23:38.402549883Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-59576795c9-hd85v,Uid:3e2bb785-ea28-432f-9ef8-e65bb1551704,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2e34edf217b356794618e836ebe7f79af779c4476d7980421a85b57bff9b080b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 18:23:38.404216 kubelet[2794]: E0527 18:23:38.403793 2794 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2e34edf217b356794618e836ebe7f79af779c4476d7980421a85b57bff9b080b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 18:23:38.404470 kubelet[2794]: E0527 18:23:38.404390 2794 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2e34edf217b356794618e836ebe7f79af779c4476d7980421a85b57bff9b080b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-59576795c9-hd85v" May 27 18:23:38.404622 
kubelet[2794]: E0527 18:23:38.404580 2794 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2e34edf217b356794618e836ebe7f79af779c4476d7980421a85b57bff9b080b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-59576795c9-hd85v" May 27 18:23:38.404971 kubelet[2794]: E0527 18:23:38.404938 2794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-59576795c9-hd85v_calico-system(3e2bb785-ea28-432f-9ef8-e65bb1551704)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-59576795c9-hd85v_calico-system(3e2bb785-ea28-432f-9ef8-e65bb1551704)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2e34edf217b356794618e836ebe7f79af779c4476d7980421a85b57bff9b080b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-59576795c9-hd85v" podUID="3e2bb785-ea28-432f-9ef8-e65bb1551704" May 27 18:23:38.405359 containerd[1585]: time="2025-05-27T18:23:38.405316342Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-rg64c,Uid:f313ef21-ca0c-4dfd-bf19-451add114318,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"93ba98c815c68bd4e005de64bb229d719a325b139ce31eedb42be5c37c4c28d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 18:23:38.405524 kubelet[2794]: E0527 18:23:38.405487 2794 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"93ba98c815c68bd4e005de64bb229d719a325b139ce31eedb42be5c37c4c28d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 18:23:38.405565 kubelet[2794]: E0527 18:23:38.405551 2794 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"93ba98c815c68bd4e005de64bb229d719a325b139ce31eedb42be5c37c4c28d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-rg64c" May 27 18:23:38.405736 kubelet[2794]: E0527 18:23:38.405575 2794 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"93ba98c815c68bd4e005de64bb229d719a325b139ce31eedb42be5c37c4c28d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-rg64c" May 27 18:23:38.405736 kubelet[2794]: E0527 18:23:38.405631 2794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"coredns-7c65d6cfc9-rg64c_kube-system(f313ef21-ca0c-4dfd-bf19-451add114318)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-rg64c_kube-system(f313ef21-ca0c-4dfd-bf19-451add114318)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"93ba98c815c68bd4e005de64bb229d719a325b139ce31eedb42be5c37c4c28d6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-rg64c" podUID="f313ef21-ca0c-4dfd-bf19-451add114318" May 27 18:23:38.560073 systemd[1]: Created slice kubepods-besteffort-pod5f8d96ad_baa3_44d5_afb2_3ebebf965cf1.slice - libcontainer container kubepods-besteffort-pod5f8d96ad_baa3_44d5_afb2_3ebebf965cf1.slice. May 27 18:23:38.567520 containerd[1585]: time="2025-05-27T18:23:38.567331569Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rmtkq,Uid:5f8d96ad-baa3-44d5-afb2-3ebebf965cf1,Namespace:calico-system,Attempt:0,}" May 27 18:23:38.649406 kubelet[2794]: E0527 18:23:38.649112 2794 configmap.go:193] Couldn't get configMap calico-system/whisker-ca-bundle: failed to sync configmap cache: timed out waiting for the condition May 27 18:23:38.650160 kubelet[2794]: E0527 18:23:38.649546 2794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d213ce18-dfff-40dc-8a57-832e1856060d-whisker-ca-bundle podName:d213ce18-dfff-40dc-8a57-832e1856060d nodeName:}" failed. No retries permitted until 2025-05-27 18:23:39.149446738 +0000 UTC m=+37.777199843 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "whisker-ca-bundle" (UniqueName: "kubernetes.io/configmap/d213ce18-dfff-40dc-8a57-832e1856060d-whisker-ca-bundle") pod "whisker-b774fd5f7-sdqww" (UID: "d213ce18-dfff-40dc-8a57-832e1856060d") : failed to sync configmap cache: timed out waiting for the condition May 27 18:23:38.672423 containerd[1585]: time="2025-05-27T18:23:38.672318839Z" level=error msg="Failed to destroy network for sandbox \"14163ba614fc70df0b11a4f18f6ba46ba6a3ecf969725418df84a7cdef819a80\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 18:23:38.674966 containerd[1585]: time="2025-05-27T18:23:38.674874948Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rmtkq,Uid:5f8d96ad-baa3-44d5-afb2-3ebebf965cf1,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"14163ba614fc70df0b11a4f18f6ba46ba6a3ecf969725418df84a7cdef819a80\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 18:23:38.675724 kubelet[2794]: E0527 18:23:38.675235 2794 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"14163ba614fc70df0b11a4f18f6ba46ba6a3ecf969725418df84a7cdef819a80\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 18:23:38.675724 kubelet[2794]: E0527 18:23:38.675312 2794 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = 
failed to setup network for sandbox \"14163ba614fc70df0b11a4f18f6ba46ba6a3ecf969725418df84a7cdef819a80\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rmtkq" May 27 18:23:38.675724 kubelet[2794]: E0527 18:23:38.675342 2794 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"14163ba614fc70df0b11a4f18f6ba46ba6a3ecf969725418df84a7cdef819a80\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rmtkq" May 27 18:23:38.675973 kubelet[2794]: E0527 18:23:38.675402 2794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-rmtkq_calico-system(5f8d96ad-baa3-44d5-afb2-3ebebf965cf1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-rmtkq_calico-system(5f8d96ad-baa3-44d5-afb2-3ebebf965cf1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"14163ba614fc70df0b11a4f18f6ba46ba6a3ecf969725418df84a7cdef819a80\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-rmtkq" podUID="5f8d96ad-baa3-44d5-afb2-3ebebf965cf1" May 27 18:23:38.809157 containerd[1585]: time="2025-05-27T18:23:38.809092678Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.0\"" May 27 18:23:38.894657 systemd[1]: run-netns-cni\x2dbbc1ac5a\x2d66f9\x2d176f\x2d2a46\x2d66af30cf6858.mount: Deactivated successfully. 
May 27 18:23:39.345832 containerd[1585]: time="2025-05-27T18:23:39.345678739Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-b774fd5f7-sdqww,Uid:d213ce18-dfff-40dc-8a57-832e1856060d,Namespace:calico-system,Attempt:0,}" May 27 18:23:39.490245 containerd[1585]: time="2025-05-27T18:23:39.490049790Z" level=error msg="Failed to destroy network for sandbox \"d1a577424c6610b128e67f0d7315f176b2e399cb6d3666cf0c8b8b3deea77337\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 18:23:39.494848 containerd[1585]: time="2025-05-27T18:23:39.494657905Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-b774fd5f7-sdqww,Uid:d213ce18-dfff-40dc-8a57-832e1856060d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1a577424c6610b128e67f0d7315f176b2e399cb6d3666cf0c8b8b3deea77337\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 18:23:39.497989 kubelet[2794]: E0527 18:23:39.497313 2794 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1a577424c6610b128e67f0d7315f176b2e399cb6d3666cf0c8b8b3deea77337\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 18:23:39.497989 kubelet[2794]: E0527 18:23:39.497537 2794 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1a577424c6610b128e67f0d7315f176b2e399cb6d3666cf0c8b8b3deea77337\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-b774fd5f7-sdqww" May 27 18:23:39.497989 kubelet[2794]: E0527 18:23:39.497630 2794 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1a577424c6610b128e67f0d7315f176b2e399cb6d3666cf0c8b8b3deea77337\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-b774fd5f7-sdqww" May 27 18:23:39.499271 kubelet[2794]: E0527 18:23:39.498995 2794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-b774fd5f7-sdqww_calico-system(d213ce18-dfff-40dc-8a57-832e1856060d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-b774fd5f7-sdqww_calico-system(d213ce18-dfff-40dc-8a57-832e1856060d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d1a577424c6610b128e67f0d7315f176b2e399cb6d3666cf0c8b8b3deea77337\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-b774fd5f7-sdqww" podUID="d213ce18-dfff-40dc-8a57-832e1856060d" May 27 18:23:39.502124 systemd[1]: run-netns-cni\x2db1127372\x2d7ffc\x2d83ea\x2dcba7\x2d91a2ec750f1d.mount: Deactivated 
successfully. May 27 18:23:48.451177 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount226169997.mount: Deactivated successfully. May 27 18:23:48.542591 containerd[1585]: time="2025-05-27T18:23:48.542359653Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 18:23:48.545321 containerd[1585]: time="2025-05-27T18:23:48.544569713Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.0: active requests=0, bytes read=156396372" May 27 18:23:48.548052 containerd[1585]: time="2025-05-27T18:23:48.547927929Z" level=info msg="ImageCreate event name:\"sha256:d12dae9bc0999225efe30fd5618bcf2195709d54ed2840234f5006aab5f7d721\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 18:23:48.554903 containerd[1585]: time="2025-05-27T18:23:48.554649082Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:7cb61ea47ca0a8e6d0526a42da4f1e399b37ccd13339d0776d272465cb7ee012\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 18:23:48.556306 containerd[1585]: time="2025-05-27T18:23:48.556076850Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.0\" with image id \"sha256:d12dae9bc0999225efe30fd5618bcf2195709d54ed2840234f5006aab5f7d721\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:7cb61ea47ca0a8e6d0526a42da4f1e399b37ccd13339d0776d272465cb7ee012\", size \"156396234\" in 9.74630224s" May 27 18:23:48.556306 containerd[1585]: time="2025-05-27T18:23:48.556167212Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.0\" returns image reference \"sha256:d12dae9bc0999225efe30fd5618bcf2195709d54ed2840234f5006aab5f7d721\"" May 27 18:23:48.626773 containerd[1585]: time="2025-05-27T18:23:48.625559605Z" level=info msg="CreateContainer within sandbox \"87b4dbd1d9bc7324f49c6fb17ba520db9816fe166689b6f094da03afb934ce4b\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 27 18:23:48.664396 containerd[1585]: time="2025-05-27T18:23:48.664345523Z" level=info msg="Container fa387d247c6751a9fd3e4bb2659b7f075b4326082ca83aebfaca596f8e54d0a6: CDI devices from CRI Config.CDIDevices: []" May 27 18:23:48.683081 containerd[1585]: time="2025-05-27T18:23:48.682951070Z" level=info msg="CreateContainer within sandbox \"87b4dbd1d9bc7324f49c6fb17ba520db9816fe166689b6f094da03afb934ce4b\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"fa387d247c6751a9fd3e4bb2659b7f075b4326082ca83aebfaca596f8e54d0a6\"" May 27 18:23:48.684898 containerd[1585]: time="2025-05-27T18:23:48.684866237Z" level=info msg="StartContainer for \"fa387d247c6751a9fd3e4bb2659b7f075b4326082ca83aebfaca596f8e54d0a6\"" May 27 18:23:48.686953 containerd[1585]: time="2025-05-27T18:23:48.686922748Z" level=info msg="connecting to shim fa387d247c6751a9fd3e4bb2659b7f075b4326082ca83aebfaca596f8e54d0a6" address="unix:///run/containerd/s/b0e8762f40b13f8580e0de75bf00d06af10e6608e44ebd69e12dfdf1de43ca86" protocol=ttrpc version=3 May 27 18:23:48.786871 systemd[1]: Started cri-containerd-fa387d247c6751a9fd3e4bb2659b7f075b4326082ca83aebfaca596f8e54d0a6.scope - libcontainer container fa387d247c6751a9fd3e4bb2659b7f075b4326082ca83aebfaca596f8e54d0a6. 
May 27 18:23:48.886380 containerd[1585]: time="2025-05-27T18:23:48.885808874Z" level=info msg="StartContainer for \"fa387d247c6751a9fd3e4bb2659b7f075b4326082ca83aebfaca596f8e54d0a6\" returns successfully" May 27 18:23:49.004138 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 27 18:23:49.004326 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. May 27 18:23:49.253711 kubelet[2794]: I0527 18:23:49.253085 2794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nwdjd\" (UniqueName: \"kubernetes.io/projected/d213ce18-dfff-40dc-8a57-832e1856060d-kube-api-access-nwdjd\") pod \"d213ce18-dfff-40dc-8a57-832e1856060d\" (UID: \"d213ce18-dfff-40dc-8a57-832e1856060d\") " May 27 18:23:49.253711 kubelet[2794]: I0527 18:23:49.253193 2794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d213ce18-dfff-40dc-8a57-832e1856060d-whisker-backend-key-pair\") pod \"d213ce18-dfff-40dc-8a57-832e1856060d\" (UID: \"d213ce18-dfff-40dc-8a57-832e1856060d\") " May 27 18:23:49.253711 kubelet[2794]: I0527 18:23:49.253251 2794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d213ce18-dfff-40dc-8a57-832e1856060d-whisker-ca-bundle\") pod \"d213ce18-dfff-40dc-8a57-832e1856060d\" (UID: \"d213ce18-dfff-40dc-8a57-832e1856060d\") " May 27 18:23:49.254845 kubelet[2794]: I0527 18:23:49.253909 2794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d213ce18-dfff-40dc-8a57-832e1856060d-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "d213ce18-dfff-40dc-8a57-832e1856060d" (UID: "d213ce18-dfff-40dc-8a57-832e1856060d"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 27 18:23:49.262315 kubelet[2794]: I0527 18:23:49.262091 2794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d213ce18-dfff-40dc-8a57-832e1856060d-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "d213ce18-dfff-40dc-8a57-832e1856060d" (UID: "d213ce18-dfff-40dc-8a57-832e1856060d"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" May 27 18:23:49.262315 kubelet[2794]: I0527 18:23:49.262233 2794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d213ce18-dfff-40dc-8a57-832e1856060d-kube-api-access-nwdjd" (OuterVolumeSpecName: "kube-api-access-nwdjd") pod "d213ce18-dfff-40dc-8a57-832e1856060d" (UID: "d213ce18-dfff-40dc-8a57-832e1856060d"). InnerVolumeSpecName "kube-api-access-nwdjd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" May 27 18:23:49.354652 kubelet[2794]: I0527 18:23:49.354599 2794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nwdjd\" (UniqueName: \"kubernetes.io/projected/d213ce18-dfff-40dc-8a57-832e1856060d-kube-api-access-nwdjd\") on node \"ci-4344-0-0-3-6dd1c807ec.novalocal\" DevicePath \"\"" May 27 18:23:49.354903 kubelet[2794]: I0527 18:23:49.354864 2794 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d213ce18-dfff-40dc-8a57-832e1856060d-whisker-backend-key-pair\") on node \"ci-4344-0-0-3-6dd1c807ec.novalocal\" DevicePath \"\"" May 27 18:23:49.354903 kubelet[2794]: I0527 18:23:49.354885 2794 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d213ce18-dfff-40dc-8a57-832e1856060d-whisker-ca-bundle\") on node \"ci-4344-0-0-3-6dd1c807ec.novalocal\" DevicePath \"\"" May 27 18:23:49.451203 systemd[1]: var-lib-kubelet-pods-d213ce18\x2ddfff\x2d40dc\x2d8a57\x2d832e1856060d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dnwdjd.mount: Deactivated successfully. May 27 18:23:49.451533 systemd[1]: var-lib-kubelet-pods-d213ce18\x2ddfff\x2d40dc\x2d8a57\x2d832e1856060d-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. May 27 18:23:49.540005 containerd[1585]: time="2025-05-27T18:23:49.539663196Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-59576795c9-hd85v,Uid:3e2bb785-ea28-432f-9ef8-e65bb1551704,Namespace:calico-system,Attempt:0,}" May 27 18:23:49.541853 containerd[1585]: time="2025-05-27T18:23:49.540575647Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-rg64c,Uid:f313ef21-ca0c-4dfd-bf19-451add114318,Namespace:kube-system,Attempt:0,}" May 27 18:23:49.588471 systemd[1]: Removed slice kubepods-besteffort-podd213ce18_dfff_40dc_8a57_832e1856060d.slice - libcontainer container kubepods-besteffort-podd213ce18_dfff_40dc_8a57_832e1856060d.slice. 
May 27 18:23:49.831939 systemd-networkd[1457]: cali464d9420987: Link UP May 27 18:23:49.833303 systemd-networkd[1457]: cali464d9420987: Gained carrier May 27 18:23:49.879920 containerd[1585]: 2025-05-27 18:23:49.628 [INFO][3833] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 27 18:23:49.879920 containerd[1585]: 2025-05-27 18:23:49.692 [INFO][3833] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344--0--0--3--6dd1c807ec.novalocal-k8s-coredns--7c65d6cfc9--rg64c-eth0 coredns-7c65d6cfc9- kube-system f313ef21-ca0c-4dfd-bf19-451add114318 851 0 2025-05-27 18:23:07 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4344-0-0-3-6dd1c807ec.novalocal coredns-7c65d6cfc9-rg64c eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali464d9420987 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="aa843dfc111c484115eb80b16aac8a01a1f7bf177b036ef80416575be18a1e33" Namespace="kube-system" Pod="coredns-7c65d6cfc9-rg64c" WorkloadEndpoint="ci--4344--0--0--3--6dd1c807ec.novalocal-k8s-coredns--7c65d6cfc9--rg64c-" May 27 18:23:49.879920 containerd[1585]: 2025-05-27 18:23:49.692 [INFO][3833] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="aa843dfc111c484115eb80b16aac8a01a1f7bf177b036ef80416575be18a1e33" Namespace="kube-system" Pod="coredns-7c65d6cfc9-rg64c" WorkloadEndpoint="ci--4344--0--0--3--6dd1c807ec.novalocal-k8s-coredns--7c65d6cfc9--rg64c-eth0" May 27 18:23:49.879920 containerd[1585]: 2025-05-27 18:23:49.751 [INFO][3854] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="aa843dfc111c484115eb80b16aac8a01a1f7bf177b036ef80416575be18a1e33" HandleID="k8s-pod-network.aa843dfc111c484115eb80b16aac8a01a1f7bf177b036ef80416575be18a1e33" Workload="ci--4344--0--0--3--6dd1c807ec.novalocal-k8s-coredns--7c65d6cfc9--rg64c-eth0" May 27 18:23:49.883156 containerd[1585]: 2025-05-27 18:23:49.751 [INFO][3854] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="aa843dfc111c484115eb80b16aac8a01a1f7bf177b036ef80416575be18a1e33" HandleID="k8s-pod-network.aa843dfc111c484115eb80b16aac8a01a1f7bf177b036ef80416575be18a1e33" Workload="ci--4344--0--0--3--6dd1c807ec.novalocal-k8s-coredns--7c65d6cfc9--rg64c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d96d0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4344-0-0-3-6dd1c807ec.novalocal", "pod":"coredns-7c65d6cfc9-rg64c", "timestamp":"2025-05-27 18:23:49.75034874 +0000 UTC"}, Hostname:"ci-4344-0-0-3-6dd1c807ec.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 18:23:49.883156 containerd[1585]: 2025-05-27 18:23:49.751 [INFO][3854] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 18:23:49.883156 containerd[1585]: 2025-05-27 18:23:49.751 [INFO][3854] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 27 18:23:49.883156 containerd[1585]: 2025-05-27 18:23:49.751 [INFO][3854] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344-0-0-3-6dd1c807ec.novalocal' May 27 18:23:49.883156 containerd[1585]: 2025-05-27 18:23:49.764 [INFO][3854] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.aa843dfc111c484115eb80b16aac8a01a1f7bf177b036ef80416575be18a1e33" host="ci-4344-0-0-3-6dd1c807ec.novalocal" May 27 18:23:49.883156 containerd[1585]: 2025-05-27 18:23:49.771 [INFO][3854] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344-0-0-3-6dd1c807ec.novalocal" May 27 18:23:49.883156 containerd[1585]: 2025-05-27 18:23:49.779 [INFO][3854] ipam/ipam.go 511: Trying affinity for 192.168.123.64/26 host="ci-4344-0-0-3-6dd1c807ec.novalocal" May 27 18:23:49.883156 containerd[1585]: 2025-05-27 18:23:49.783 [INFO][3854] ipam/ipam.go 158: Attempting to load block cidr=192.168.123.64/26 host="ci-4344-0-0-3-6dd1c807ec.novalocal" May 27 18:23:49.883156 containerd[1585]: 2025-05-27 18:23:49.787 [INFO][3854] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.123.64/26 host="ci-4344-0-0-3-6dd1c807ec.novalocal" May 27 18:23:49.884585 containerd[1585]: 2025-05-27 18:23:49.787 [INFO][3854] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.123.64/26 handle="k8s-pod-network.aa843dfc111c484115eb80b16aac8a01a1f7bf177b036ef80416575be18a1e33" host="ci-4344-0-0-3-6dd1c807ec.novalocal" May 27 18:23:49.884585 containerd[1585]: 2025-05-27 18:23:49.791 [INFO][3854] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.aa843dfc111c484115eb80b16aac8a01a1f7bf177b036ef80416575be18a1e33 May 27 18:23:49.884585 containerd[1585]: 2025-05-27 18:23:49.797 [INFO][3854] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.123.64/26 handle="k8s-pod-network.aa843dfc111c484115eb80b16aac8a01a1f7bf177b036ef80416575be18a1e33" host="ci-4344-0-0-3-6dd1c807ec.novalocal" May 27 18:23:49.884585 containerd[1585]: 2025-05-27 18:23:49.807 [INFO][3854] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.123.65/26] block=192.168.123.64/26 handle="k8s-pod-network.aa843dfc111c484115eb80b16aac8a01a1f7bf177b036ef80416575be18a1e33" host="ci-4344-0-0-3-6dd1c807ec.novalocal" May 27 18:23:49.884585 containerd[1585]: 2025-05-27 18:23:49.807 [INFO][3854] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.123.65/26] handle="k8s-pod-network.aa843dfc111c484115eb80b16aac8a01a1f7bf177b036ef80416575be18a1e33" host="ci-4344-0-0-3-6dd1c807ec.novalocal" May 27 18:23:49.884585 containerd[1585]: 2025-05-27 18:23:49.807 [INFO][3854] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 27 18:23:49.884585 containerd[1585]: 2025-05-27 18:23:49.807 [INFO][3854] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.123.65/26] IPv6=[] ContainerID="aa843dfc111c484115eb80b16aac8a01a1f7bf177b036ef80416575be18a1e33" HandleID="k8s-pod-network.aa843dfc111c484115eb80b16aac8a01a1f7bf177b036ef80416575be18a1e33" Workload="ci--4344--0--0--3--6dd1c807ec.novalocal-k8s-coredns--7c65d6cfc9--rg64c-eth0" May 27 18:23:49.884829 containerd[1585]: 2025-05-27 18:23:49.814 [INFO][3833] cni-plugin/k8s.go 418: Populated endpoint ContainerID="aa843dfc111c484115eb80b16aac8a01a1f7bf177b036ef80416575be18a1e33" Namespace="kube-system" Pod="coredns-7c65d6cfc9-rg64c" WorkloadEndpoint="ci--4344--0--0--3--6dd1c807ec.novalocal-k8s-coredns--7c65d6cfc9--rg64c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344--0--0--3--6dd1c807ec.novalocal-k8s-coredns--7c65d6cfc9--rg64c-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"f313ef21-ca0c-4dfd-bf19-451add114318", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 18, 23, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344-0-0-3-6dd1c807ec.novalocal", ContainerID:"", Pod:"coredns-7c65d6cfc9-rg64c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.123.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali464d9420987", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 18:23:49.884829 containerd[1585]: 2025-05-27 18:23:49.815 [INFO][3833] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.123.65/32] ContainerID="aa843dfc111c484115eb80b16aac8a01a1f7bf177b036ef80416575be18a1e33" Namespace="kube-system" Pod="coredns-7c65d6cfc9-rg64c" WorkloadEndpoint="ci--4344--0--0--3--6dd1c807ec.novalocal-k8s-coredns--7c65d6cfc9--rg64c-eth0" May 27 18:23:49.884829 containerd[1585]: 2025-05-27 18:23:49.815 [INFO][3833] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali464d9420987 ContainerID="aa843dfc111c484115eb80b16aac8a01a1f7bf177b036ef80416575be18a1e33" Namespace="kube-system" Pod="coredns-7c65d6cfc9-rg64c" WorkloadEndpoint="ci--4344--0--0--3--6dd1c807ec.novalocal-k8s-coredns--7c65d6cfc9--rg64c-eth0" May 27 18:23:49.884829 containerd[1585]: 2025-05-27 18:23:49.835 [INFO][3833] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="aa843dfc111c484115eb80b16aac8a01a1f7bf177b036ef80416575be18a1e33" Namespace="kube-system" Pod="coredns-7c65d6cfc9-rg64c" WorkloadEndpoint="ci--4344--0--0--3--6dd1c807ec.novalocal-k8s-coredns--7c65d6cfc9--rg64c-eth0" May 27 18:23:49.884829 containerd[1585]: 2025-05-27 18:23:49.837 [INFO][3833] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="aa843dfc111c484115eb80b16aac8a01a1f7bf177b036ef80416575be18a1e33" Namespace="kube-system" Pod="coredns-7c65d6cfc9-rg64c" WorkloadEndpoint="ci--4344--0--0--3--6dd1c807ec.novalocal-k8s-coredns--7c65d6cfc9--rg64c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344--0--0--3--6dd1c807ec.novalocal-k8s-coredns--7c65d6cfc9--rg64c-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"f313ef21-ca0c-4dfd-bf19-451add114318", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 18, 23, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344-0-0-3-6dd1c807ec.novalocal", ContainerID:"aa843dfc111c484115eb80b16aac8a01a1f7bf177b036ef80416575be18a1e33", Pod:"coredns-7c65d6cfc9-rg64c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.123.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali464d9420987", MAC:"6e:11:1b:32:ff:33", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 18:23:49.884829 containerd[1585]: 2025-05-27 18:23:49.874 [INFO][3833] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="aa843dfc111c484115eb80b16aac8a01a1f7bf177b036ef80416575be18a1e33" Namespace="kube-system" Pod="coredns-7c65d6cfc9-rg64c" WorkloadEndpoint="ci--4344--0--0--3--6dd1c807ec.novalocal-k8s-coredns--7c65d6cfc9--rg64c-eth0" May 27 18:23:49.902160 kubelet[2794]: I0527 18:23:49.902041 2794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-v9nnf" podStartSLOduration=1.933503757 podStartE2EDuration="26.902001352s" podCreationTimestamp="2025-05-27 18:23:23 +0000 UTC" firstStartedPulling="2025-05-27 18:23:23.591374109 +0000 UTC m=+22.219127174" lastFinishedPulling="2025-05-27 18:23:48.559871663 +0000 UTC m=+47.187624769" observedRunningTime="2025-05-27 18:23:49.900474829 +0000 UTC m=+48.528227885" watchObservedRunningTime="2025-05-27 18:23:49.902001352 +0000 UTC m=+48.529754417" May 27 18:23:49.996986 systemd-networkd[1457]: 
calia453a0a031a: Link UP May 27 18:23:50.001301 systemd-networkd[1457]: calia453a0a031a: Gained carrier May 27 18:23:50.046386 containerd[1585]: 2025-05-27 18:23:49.641 [INFO][3830] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 27 18:23:50.046386 containerd[1585]: 2025-05-27 18:23:49.692 [INFO][3830] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344--0--0--3--6dd1c807ec.novalocal-k8s-calico--kube--controllers--59576795c9--hd85v-eth0 calico-kube-controllers-59576795c9- calico-system 3e2bb785-ea28-432f-9ef8-e65bb1551704 853 0 2025-05-27 18:23:23 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:59576795c9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4344-0-0-3-6dd1c807ec.novalocal calico-kube-controllers-59576795c9-hd85v eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calia453a0a031a [] [] }} ContainerID="dd41d98f15f637588afcb0b7348149f786cc56c9ac7291a637b985a644e50d5e" Namespace="calico-system" Pod="calico-kube-controllers-59576795c9-hd85v" WorkloadEndpoint="ci--4344--0--0--3--6dd1c807ec.novalocal-k8s-calico--kube--controllers--59576795c9--hd85v-" May 27 18:23:50.046386 containerd[1585]: 2025-05-27 18:23:49.693 [INFO][3830] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="dd41d98f15f637588afcb0b7348149f786cc56c9ac7291a637b985a644e50d5e" Namespace="calico-system" Pod="calico-kube-controllers-59576795c9-hd85v" WorkloadEndpoint="ci--4344--0--0--3--6dd1c807ec.novalocal-k8s-calico--kube--controllers--59576795c9--hd85v-eth0" May 27 18:23:50.046386 containerd[1585]: 2025-05-27 18:23:49.754 [INFO][3852] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dd41d98f15f637588afcb0b7348149f786cc56c9ac7291a637b985a644e50d5e" HandleID="k8s-pod-network.dd41d98f15f637588afcb0b7348149f786cc56c9ac7291a637b985a644e50d5e" Workload="ci--4344--0--0--3--6dd1c807ec.novalocal-k8s-calico--kube--controllers--59576795c9--hd85v-eth0" May 27 18:23:50.046386 containerd[1585]: 2025-05-27 18:23:49.754 [INFO][3852] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="dd41d98f15f637588afcb0b7348149f786cc56c9ac7291a637b985a644e50d5e" HandleID="k8s-pod-network.dd41d98f15f637588afcb0b7348149f786cc56c9ac7291a637b985a644e50d5e" Workload="ci--4344--0--0--3--6dd1c807ec.novalocal-k8s-calico--kube--controllers--59576795c9--hd85v-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00030f790), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4344-0-0-3-6dd1c807ec.novalocal", "pod":"calico-kube-controllers-59576795c9-hd85v", "timestamp":"2025-05-27 18:23:49.754459127 +0000 UTC"}, Hostname:"ci-4344-0-0-3-6dd1c807ec.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 18:23:50.046386 containerd[1585]: 2025-05-27 18:23:49.755 [INFO][3852] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 18:23:50.046386 containerd[1585]: 2025-05-27 18:23:49.807 [INFO][3852] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 27 18:23:50.046386 containerd[1585]: 2025-05-27 18:23:49.807 [INFO][3852] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344-0-0-3-6dd1c807ec.novalocal' May 27 18:23:50.046386 containerd[1585]: 2025-05-27 18:23:49.866 [INFO][3852] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.dd41d98f15f637588afcb0b7348149f786cc56c9ac7291a637b985a644e50d5e" host="ci-4344-0-0-3-6dd1c807ec.novalocal" May 27 18:23:50.046386 containerd[1585]: 2025-05-27 18:23:49.891 [INFO][3852] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344-0-0-3-6dd1c807ec.novalocal" May 27 18:23:50.046386 containerd[1585]: 2025-05-27 18:23:49.914 [INFO][3852] ipam/ipam.go 511: Trying affinity for 192.168.123.64/26 host="ci-4344-0-0-3-6dd1c807ec.novalocal" May 27 18:23:50.046386 containerd[1585]: 2025-05-27 18:23:49.920 [INFO][3852] ipam/ipam.go 158: Attempting to load block cidr=192.168.123.64/26 host="ci-4344-0-0-3-6dd1c807ec.novalocal" May 27 18:23:50.046386 containerd[1585]: 2025-05-27 18:23:49.927 [INFO][3852] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.123.64/26 host="ci-4344-0-0-3-6dd1c807ec.novalocal" May 27 18:23:50.046386 containerd[1585]: 2025-05-27 18:23:49.927 [INFO][3852] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.123.64/26 handle="k8s-pod-network.dd41d98f15f637588afcb0b7348149f786cc56c9ac7291a637b985a644e50d5e" host="ci-4344-0-0-3-6dd1c807ec.novalocal" May 27 18:23:50.046386 containerd[1585]: 2025-05-27 18:23:49.935 [INFO][3852] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.dd41d98f15f637588afcb0b7348149f786cc56c9ac7291a637b985a644e50d5e May 27 18:23:50.046386 containerd[1585]: 2025-05-27 18:23:49.961 [INFO][3852] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.123.64/26 handle="k8s-pod-network.dd41d98f15f637588afcb0b7348149f786cc56c9ac7291a637b985a644e50d5e" host="ci-4344-0-0-3-6dd1c807ec.novalocal" May 27 18:23:50.046386 containerd[1585]: 2025-05-27 18:23:49.979 [INFO][3852] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.123.66/26] block=192.168.123.64/26 handle="k8s-pod-network.dd41d98f15f637588afcb0b7348149f786cc56c9ac7291a637b985a644e50d5e" host="ci-4344-0-0-3-6dd1c807ec.novalocal" May 27 18:23:50.046386 containerd[1585]: 2025-05-27 18:23:49.980 [INFO][3852] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.123.66/26] handle="k8s-pod-network.dd41d98f15f637588afcb0b7348149f786cc56c9ac7291a637b985a644e50d5e" host="ci-4344-0-0-3-6dd1c807ec.novalocal" May 27 18:23:50.046386 containerd[1585]: 2025-05-27 18:23:49.980 [INFO][3852] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 27 18:23:50.046386 containerd[1585]: 2025-05-27 18:23:49.980 [INFO][3852] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.123.66/26] IPv6=[] ContainerID="dd41d98f15f637588afcb0b7348149f786cc56c9ac7291a637b985a644e50d5e" HandleID="k8s-pod-network.dd41d98f15f637588afcb0b7348149f786cc56c9ac7291a637b985a644e50d5e" Workload="ci--4344--0--0--3--6dd1c807ec.novalocal-k8s-calico--kube--controllers--59576795c9--hd85v-eth0" May 27 18:23:50.048703 containerd[1585]: 2025-05-27 18:23:49.984 [INFO][3830] cni-plugin/k8s.go 418: Populated endpoint ContainerID="dd41d98f15f637588afcb0b7348149f786cc56c9ac7291a637b985a644e50d5e" Namespace="calico-system" Pod="calico-kube-controllers-59576795c9-hd85v" WorkloadEndpoint="ci--4344--0--0--3--6dd1c807ec.novalocal-k8s-calico--kube--controllers--59576795c9--hd85v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344--0--0--3--6dd1c807ec.novalocal-k8s-calico--kube--controllers--59576795c9--hd85v-eth0", GenerateName:"calico-kube-controllers-59576795c9-", Namespace:"calico-system", SelfLink:"", UID:"3e2bb785-ea28-432f-9ef8-e65bb1551704", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 18, 23, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"59576795c9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344-0-0-3-6dd1c807ec.novalocal", ContainerID:"", Pod:"calico-kube-controllers-59576795c9-hd85v", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.123.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia453a0a031a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 18:23:50.048703 containerd[1585]: 2025-05-27 18:23:49.988 [INFO][3830] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.123.66/32] ContainerID="dd41d98f15f637588afcb0b7348149f786cc56c9ac7291a637b985a644e50d5e" Namespace="calico-system" Pod="calico-kube-controllers-59576795c9-hd85v" WorkloadEndpoint="ci--4344--0--0--3--6dd1c807ec.novalocal-k8s-calico--kube--controllers--59576795c9--hd85v-eth0" May 27 18:23:50.048703 containerd[1585]: 2025-05-27 18:23:49.988 [INFO][3830] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia453a0a031a ContainerID="dd41d98f15f637588afcb0b7348149f786cc56c9ac7291a637b985a644e50d5e" Namespace="calico-system" Pod="calico-kube-controllers-59576795c9-hd85v" WorkloadEndpoint="ci--4344--0--0--3--6dd1c807ec.novalocal-k8s-calico--kube--controllers--59576795c9--hd85v-eth0" May 27 18:23:50.048703 containerd[1585]: 2025-05-27 18:23:50.000 [INFO][3830] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="dd41d98f15f637588afcb0b7348149f786cc56c9ac7291a637b985a644e50d5e" Namespace="calico-system" Pod="calico-kube-controllers-59576795c9-hd85v" 
WorkloadEndpoint="ci--4344--0--0--3--6dd1c807ec.novalocal-k8s-calico--kube--controllers--59576795c9--hd85v-eth0" May 27 18:23:50.048703 containerd[1585]: 2025-05-27 18:23:50.002 [INFO][3830] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="dd41d98f15f637588afcb0b7348149f786cc56c9ac7291a637b985a644e50d5e" Namespace="calico-system" Pod="calico-kube-controllers-59576795c9-hd85v" WorkloadEndpoint="ci--4344--0--0--3--6dd1c807ec.novalocal-k8s-calico--kube--controllers--59576795c9--hd85v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344--0--0--3--6dd1c807ec.novalocal-k8s-calico--kube--controllers--59576795c9--hd85v-eth0", GenerateName:"calico-kube-controllers-59576795c9-", Namespace:"calico-system", SelfLink:"", UID:"3e2bb785-ea28-432f-9ef8-e65bb1551704", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 18, 23, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"59576795c9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344-0-0-3-6dd1c807ec.novalocal", ContainerID:"dd41d98f15f637588afcb0b7348149f786cc56c9ac7291a637b985a644e50d5e", Pod:"calico-kube-controllers-59576795c9-hd85v", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.123.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia453a0a031a", MAC:"2a:2b:e1:78:a6:c0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 18:23:50.048703 containerd[1585]: 2025-05-27 18:23:50.040 [INFO][3830] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="dd41d98f15f637588afcb0b7348149f786cc56c9ac7291a637b985a644e50d5e" Namespace="calico-system" Pod="calico-kube-controllers-59576795c9-hd85v" WorkloadEndpoint="ci--4344--0--0--3--6dd1c807ec.novalocal-k8s-calico--kube--controllers--59576795c9--hd85v-eth0" May 27 18:23:50.075948 systemd[1]: Created slice kubepods-besteffort-pod107d1d0f_9dd5_43de_81cb_0f8c43731395.slice - libcontainer container kubepods-besteffort-pod107d1d0f_9dd5_43de_81cb_0f8c43731395.slice. 
May 27 18:23:50.103525 containerd[1585]: time="2025-05-27T18:23:50.101895285Z" level=info msg="connecting to shim aa843dfc111c484115eb80b16aac8a01a1f7bf177b036ef80416575be18a1e33" address="unix:///run/containerd/s/7201a1266d2f65a44edc682eabaf51b718bf34f5099e67113b5b8462b8140532" namespace=k8s.io protocol=ttrpc version=3 May 27 18:23:50.162386 kubelet[2794]: I0527 18:23:50.162336 2794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqlkp\" (UniqueName: \"kubernetes.io/projected/107d1d0f-9dd5-43de-81cb-0f8c43731395-kube-api-access-wqlkp\") pod \"whisker-5f7dd7cb5-258cr\" (UID: \"107d1d0f-9dd5-43de-81cb-0f8c43731395\") " pod="calico-system/whisker-5f7dd7cb5-258cr" May 27 18:23:50.162386 kubelet[2794]: I0527 18:23:50.162395 2794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/107d1d0f-9dd5-43de-81cb-0f8c43731395-whisker-backend-key-pair\") pod \"whisker-5f7dd7cb5-258cr\" (UID: \"107d1d0f-9dd5-43de-81cb-0f8c43731395\") " pod="calico-system/whisker-5f7dd7cb5-258cr" May 27 18:23:50.162630 kubelet[2794]: I0527 18:23:50.162417 2794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/107d1d0f-9dd5-43de-81cb-0f8c43731395-whisker-ca-bundle\") pod \"whisker-5f7dd7cb5-258cr\" (UID: \"107d1d0f-9dd5-43de-81cb-0f8c43731395\") " pod="calico-system/whisker-5f7dd7cb5-258cr" May 27 18:23:50.171866 systemd[1]: Started cri-containerd-aa843dfc111c484115eb80b16aac8a01a1f7bf177b036ef80416575be18a1e33.scope - libcontainer container aa843dfc111c484115eb80b16aac8a01a1f7bf177b036ef80416575be18a1e33. May 27 18:23:50.175637 containerd[1585]: time="2025-05-27T18:23:50.175568416Z" level=info msg="connecting to shim dd41d98f15f637588afcb0b7348149f786cc56c9ac7291a637b985a644e50d5e" address="unix:///run/containerd/s/c25625de3e91f7bc73dc5be05acb89ac88f1427141ed5a3c28d5d76d08967960" namespace=k8s.io protocol=ttrpc version=3 May 27 18:23:50.251962 systemd[1]: Started cri-containerd-dd41d98f15f637588afcb0b7348149f786cc56c9ac7291a637b985a644e50d5e.scope - libcontainer container dd41d98f15f637588afcb0b7348149f786cc56c9ac7291a637b985a644e50d5e. May 27 18:23:50.393887 containerd[1585]: time="2025-05-27T18:23:50.393742646Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5f7dd7cb5-258cr,Uid:107d1d0f-9dd5-43de-81cb-0f8c43731395,Namespace:calico-system,Attempt:0,}" May 27 18:23:50.407573 containerd[1585]: time="2025-05-27T18:23:50.407504835Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-rg64c,Uid:f313ef21-ca0c-4dfd-bf19-451add114318,Namespace:kube-system,Attempt:0,} returns sandbox id \"aa843dfc111c484115eb80b16aac8a01a1f7bf177b036ef80416575be18a1e33\"" May 27 18:23:50.420483 containerd[1585]: time="2025-05-27T18:23:50.418779219Z" level=info msg="CreateContainer within sandbox \"aa843dfc111c484115eb80b16aac8a01a1f7bf177b036ef80416575be18a1e33\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 27 18:23:50.491722 containerd[1585]: time="2025-05-27T18:23:50.491230340Z" level=info msg="Container 06bb8fc11aede6dc364995f62a29f8f72bd635eaa5b5ae0c3083d7e13ca8d683: CDI devices from CRI Config.CDIDevices: []" May 27 18:23:50.498518 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount468774970.mount: Deactivated successfully. 
May 27 18:23:50.511225 containerd[1585]: time="2025-05-27T18:23:50.511167705Z" level=info msg="CreateContainer within sandbox \"aa843dfc111c484115eb80b16aac8a01a1f7bf177b036ef80416575be18a1e33\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"06bb8fc11aede6dc364995f62a29f8f72bd635eaa5b5ae0c3083d7e13ca8d683\"" May 27 18:23:50.513053 containerd[1585]: time="2025-05-27T18:23:50.513020417Z" level=info msg="StartContainer for \"06bb8fc11aede6dc364995f62a29f8f72bd635eaa5b5ae0c3083d7e13ca8d683\"" May 27 18:23:50.516216 containerd[1585]: time="2025-05-27T18:23:50.515998024Z" level=info msg="connecting to shim 06bb8fc11aede6dc364995f62a29f8f72bd635eaa5b5ae0c3083d7e13ca8d683" address="unix:///run/containerd/s/7201a1266d2f65a44edc682eabaf51b718bf34f5099e67113b5b8462b8140532" protocol=ttrpc version=3 May 27 18:23:50.622011 systemd[1]: Started cri-containerd-06bb8fc11aede6dc364995f62a29f8f72bd635eaa5b5ae0c3083d7e13ca8d683.scope - libcontainer container 06bb8fc11aede6dc364995f62a29f8f72bd635eaa5b5ae0c3083d7e13ca8d683. May 27 18:23:50.690823 containerd[1585]: time="2025-05-27T18:23:50.690096867Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-59576795c9-hd85v,Uid:3e2bb785-ea28-432f-9ef8-e65bb1551704,Namespace:calico-system,Attempt:0,} returns sandbox id \"dd41d98f15f637588afcb0b7348149f786cc56c9ac7291a637b985a644e50d5e\"" May 27 18:23:50.698301 containerd[1585]: time="2025-05-27T18:23:50.698256318Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\"" May 27 18:23:50.764933 containerd[1585]: time="2025-05-27T18:23:50.764844731Z" level=info msg="StartContainer for \"06bb8fc11aede6dc364995f62a29f8f72bd635eaa5b5ae0c3083d7e13ca8d683\" returns successfully" May 27 18:23:50.816865 systemd-networkd[1457]: cali8f1bd367375: Link UP May 27 18:23:50.819641 systemd-networkd[1457]: cali8f1bd367375: Gained carrier May 27 18:23:50.859931 containerd[1585]: 2025-05-27 18:23:50.556 [INFO][3988] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 27 18:23:50.859931 containerd[1585]: 2025-05-27 18:23:50.596 [INFO][3988] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344--0--0--3--6dd1c807ec.novalocal-k8s-whisker--5f7dd7cb5--258cr-eth0 whisker-5f7dd7cb5- calico-system 107d1d0f-9dd5-43de-81cb-0f8c43731395 930 0 2025-05-27 18:23:50 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5f7dd7cb5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4344-0-0-3-6dd1c807ec.novalocal whisker-5f7dd7cb5-258cr eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali8f1bd367375 [] [] }} ContainerID="7a139618ba685d17ce1fb937f1d3d4454467f2b6902ed0c60ef05aed00ff03b4" Namespace="calico-system" Pod="whisker-5f7dd7cb5-258cr" WorkloadEndpoint="ci--4344--0--0--3--6dd1c807ec.novalocal-k8s-whisker--5f7dd7cb5--258cr-" May 27 18:23:50.859931 containerd[1585]: 2025-05-27 18:23:50.596 [INFO][3988] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7a139618ba685d17ce1fb937f1d3d4454467f2b6902ed0c60ef05aed00ff03b4" Namespace="calico-system" Pod="whisker-5f7dd7cb5-258cr" WorkloadEndpoint="ci--4344--0--0--3--6dd1c807ec.novalocal-k8s-whisker--5f7dd7cb5--258cr-eth0" May 27 18:23:50.859931 containerd[1585]: 2025-05-27 18:23:50.716 [INFO][4035] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="7a139618ba685d17ce1fb937f1d3d4454467f2b6902ed0c60ef05aed00ff03b4" HandleID="k8s-pod-network.7a139618ba685d17ce1fb937f1d3d4454467f2b6902ed0c60ef05aed00ff03b4" Workload="ci--4344--0--0--3--6dd1c807ec.novalocal-k8s-whisker--5f7dd7cb5--258cr-eth0" May 27 18:23:50.859931 containerd[1585]: 2025-05-27 18:23:50.716 [INFO][4035] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7a139618ba685d17ce1fb937f1d3d4454467f2b6902ed0c60ef05aed00ff03b4" HandleID="k8s-pod-network.7a139618ba685d17ce1fb937f1d3d4454467f2b6902ed0c60ef05aed00ff03b4" Workload="ci--4344--0--0--3--6dd1c807ec.novalocal-k8s-whisker--5f7dd7cb5--258cr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d3640), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4344-0-0-3-6dd1c807ec.novalocal", "pod":"whisker-5f7dd7cb5-258cr", "timestamp":"2025-05-27 18:23:50.716363936 +0000 UTC"}, Hostname:"ci-4344-0-0-3-6dd1c807ec.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 18:23:50.859931 containerd[1585]: 2025-05-27 18:23:50.716 [INFO][4035] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 18:23:50.859931 containerd[1585]: 2025-05-27 18:23:50.716 [INFO][4035] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 27 18:23:50.859931 containerd[1585]: 2025-05-27 18:23:50.720 [INFO][4035] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344-0-0-3-6dd1c807ec.novalocal' May 27 18:23:50.859931 containerd[1585]: 2025-05-27 18:23:50.746 [INFO][4035] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7a139618ba685d17ce1fb937f1d3d4454467f2b6902ed0c60ef05aed00ff03b4" host="ci-4344-0-0-3-6dd1c807ec.novalocal" May 27 18:23:50.859931 containerd[1585]: 2025-05-27 18:23:50.760 [INFO][4035] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344-0-0-3-6dd1c807ec.novalocal" May 27 18:23:50.859931 containerd[1585]: 2025-05-27 18:23:50.773 [INFO][4035] ipam/ipam.go 511: Trying affinity for 192.168.123.64/26 host="ci-4344-0-0-3-6dd1c807ec.novalocal" May 27 18:23:50.859931 containerd[1585]: 2025-05-27 18:23:50.777 [INFO][4035] ipam/ipam.go 158: Attempting to load block cidr=192.168.123.64/26 host="ci-4344-0-0-3-6dd1c807ec.novalocal" May 27 18:23:50.859931 containerd[1585]: 2025-05-27 18:23:50.782 [INFO][4035] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.123.64/26 host="ci-4344-0-0-3-6dd1c807ec.novalocal" May 27 18:23:50.859931 containerd[1585]: 2025-05-27 18:23:50.782 [INFO][4035] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.123.64/26 handle="k8s-pod-network.7a139618ba685d17ce1fb937f1d3d4454467f2b6902ed0c60ef05aed00ff03b4" host="ci-4344-0-0-3-6dd1c807ec.novalocal" May 27 18:23:50.859931 containerd[1585]: 2025-05-27 18:23:50.786 [INFO][4035] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7a139618ba685d17ce1fb937f1d3d4454467f2b6902ed0c60ef05aed00ff03b4 May 27 18:23:50.859931 containerd[1585]: 2025-05-27 18:23:50.797 [INFO][4035] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.123.64/26 handle="k8s-pod-network.7a139618ba685d17ce1fb937f1d3d4454467f2b6902ed0c60ef05aed00ff03b4" host="ci-4344-0-0-3-6dd1c807ec.novalocal" May 27 18:23:50.859931 containerd[1585]: 2025-05-27 18:23:50.806 [INFO][4035] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.123.67/26] 
block=192.168.123.64/26 handle="k8s-pod-network.7a139618ba685d17ce1fb937f1d3d4454467f2b6902ed0c60ef05aed00ff03b4" host="ci-4344-0-0-3-6dd1c807ec.novalocal" May 27 18:23:50.859931 containerd[1585]: 2025-05-27 18:23:50.806 [INFO][4035] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.123.67/26] handle="k8s-pod-network.7a139618ba685d17ce1fb937f1d3d4454467f2b6902ed0c60ef05aed00ff03b4" host="ci-4344-0-0-3-6dd1c807ec.novalocal" May 27 18:23:50.859931 containerd[1585]: 2025-05-27 18:23:50.806 [INFO][4035] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 27 18:23:50.859931 containerd[1585]: 2025-05-27 18:23:50.806 [INFO][4035] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.123.67/26] IPv6=[] ContainerID="7a139618ba685d17ce1fb937f1d3d4454467f2b6902ed0c60ef05aed00ff03b4" HandleID="k8s-pod-network.7a139618ba685d17ce1fb937f1d3d4454467f2b6902ed0c60ef05aed00ff03b4" Workload="ci--4344--0--0--3--6dd1c807ec.novalocal-k8s-whisker--5f7dd7cb5--258cr-eth0" May 27 18:23:50.861889 containerd[1585]: 2025-05-27 18:23:50.809 [INFO][3988] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7a139618ba685d17ce1fb937f1d3d4454467f2b6902ed0c60ef05aed00ff03b4" Namespace="calico-system" Pod="whisker-5f7dd7cb5-258cr" WorkloadEndpoint="ci--4344--0--0--3--6dd1c807ec.novalocal-k8s-whisker--5f7dd7cb5--258cr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344--0--0--3--6dd1c807ec.novalocal-k8s-whisker--5f7dd7cb5--258cr-eth0", GenerateName:"whisker-5f7dd7cb5-", Namespace:"calico-system", SelfLink:"", UID:"107d1d0f-9dd5-43de-81cb-0f8c43731395", ResourceVersion:"930", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 18, 23, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5f7dd7cb5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344-0-0-3-6dd1c807ec.novalocal", ContainerID:"", Pod:"whisker-5f7dd7cb5-258cr", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.123.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali8f1bd367375", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 18:23:50.861889 containerd[1585]: 2025-05-27 18:23:50.809 [INFO][3988] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.123.67/32] ContainerID="7a139618ba685d17ce1fb937f1d3d4454467f2b6902ed0c60ef05aed00ff03b4" Namespace="calico-system" Pod="whisker-5f7dd7cb5-258cr" WorkloadEndpoint="ci--4344--0--0--3--6dd1c807ec.novalocal-k8s-whisker--5f7dd7cb5--258cr-eth0" May 27 18:23:50.861889 containerd[1585]: 2025-05-27 18:23:50.809 [INFO][3988] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8f1bd367375 ContainerID="7a139618ba685d17ce1fb937f1d3d4454467f2b6902ed0c60ef05aed00ff03b4" Namespace="calico-system" Pod="whisker-5f7dd7cb5-258cr" WorkloadEndpoint="ci--4344--0--0--3--6dd1c807ec.novalocal-k8s-whisker--5f7dd7cb5--258cr-eth0" May 27 
18:23:50.861889 containerd[1585]: 2025-05-27 18:23:50.822 [INFO][3988] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7a139618ba685d17ce1fb937f1d3d4454467f2b6902ed0c60ef05aed00ff03b4" Namespace="calico-system" Pod="whisker-5f7dd7cb5-258cr" WorkloadEndpoint="ci--4344--0--0--3--6dd1c807ec.novalocal-k8s-whisker--5f7dd7cb5--258cr-eth0" May 27 18:23:50.861889 containerd[1585]: 2025-05-27 18:23:50.823 [INFO][3988] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7a139618ba685d17ce1fb937f1d3d4454467f2b6902ed0c60ef05aed00ff03b4" Namespace="calico-system" Pod="whisker-5f7dd7cb5-258cr" WorkloadEndpoint="ci--4344--0--0--3--6dd1c807ec.novalocal-k8s-whisker--5f7dd7cb5--258cr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344--0--0--3--6dd1c807ec.novalocal-k8s-whisker--5f7dd7cb5--258cr-eth0", GenerateName:"whisker-5f7dd7cb5-", Namespace:"calico-system", SelfLink:"", UID:"107d1d0f-9dd5-43de-81cb-0f8c43731395", ResourceVersion:"930", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 18, 23, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5f7dd7cb5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344-0-0-3-6dd1c807ec.novalocal", ContainerID:"7a139618ba685d17ce1fb937f1d3d4454467f2b6902ed0c60ef05aed00ff03b4", Pod:"whisker-5f7dd7cb5-258cr", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.123.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali8f1bd367375", MAC:"b6:64:22:1e:5d:7a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 18:23:50.861889 containerd[1585]: 2025-05-27 18:23:50.852 [INFO][3988] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7a139618ba685d17ce1fb937f1d3d4454467f2b6902ed0c60ef05aed00ff03b4" Namespace="calico-system" Pod="whisker-5f7dd7cb5-258cr" WorkloadEndpoint="ci--4344--0--0--3--6dd1c807ec.novalocal-k8s-whisker--5f7dd7cb5--258cr-eth0" May 27 18:23:50.962887 kubelet[2794]: I0527 18:23:50.961610 2794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-rg64c" podStartSLOduration=43.961530749 podStartE2EDuration="43.961530749s" podCreationTimestamp="2025-05-27 18:23:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 18:23:50.961113513 +0000 UTC m=+49.588866568" watchObservedRunningTime="2025-05-27 18:23:50.961530749 +0000 UTC m=+49.589283804" May 27 18:23:51.004921 containerd[1585]: time="2025-05-27T18:23:51.004766683Z" level=info msg="connecting to shim 7a139618ba685d17ce1fb937f1d3d4454467f2b6902ed0c60ef05aed00ff03b4" address="unix:///run/containerd/s/9bacdbff4d5f36e9c32373c377261f341bb9568207e2f1894c78926d16d596d4" namespace=k8s.io protocol=ttrpc version=3 May 27 18:23:51.079173 
systemd[1]: Started cri-containerd-7a139618ba685d17ce1fb937f1d3d4454467f2b6902ed0c60ef05aed00ff03b4.scope - libcontainer container 7a139618ba685d17ce1fb937f1d3d4454467f2b6902ed0c60ef05aed00ff03b4. May 27 18:23:51.316044 containerd[1585]: time="2025-05-27T18:23:51.315969307Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5f7dd7cb5-258cr,Uid:107d1d0f-9dd5-43de-81cb-0f8c43731395,Namespace:calico-system,Attempt:0,} returns sandbox id \"7a139618ba685d17ce1fb937f1d3d4454467f2b6902ed0c60ef05aed00ff03b4\"" May 27 18:23:51.420052 containerd[1585]: time="2025-05-27T18:23:51.419810011Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fa387d247c6751a9fd3e4bb2659b7f075b4326082ca83aebfaca596f8e54d0a6\" id:\"88aae763a6074d41307be150f1e1609badeab1f1f38246e9428414827ff98333\" pid:3888 exit_status:1 exited_at:{seconds:1748370231 nanos:416877941}" May 27 18:23:51.541664 containerd[1585]: time="2025-05-27T18:23:51.541610746Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-vbq8l,Uid:9ce9100c-285e-419f-a320-34fbd743d450,Namespace:kube-system,Attempt:0,}" May 27 18:23:51.548742 kubelet[2794]: I0527 18:23:51.545393 2794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d213ce18-dfff-40dc-8a57-832e1856060d" path="/var/lib/kubelet/pods/d213ce18-dfff-40dc-8a57-832e1856060d/volumes" May 27 18:23:51.650918 systemd-networkd[1457]: cali464d9420987: Gained IPv6LL May 27 18:23:51.892285 systemd-networkd[1457]: cali016f3d732f1: Link UP May 27 18:23:51.894041 systemd-networkd[1457]: cali016f3d732f1: Gained carrier May 27 18:23:51.934643 containerd[1585]: 2025-05-27 18:23:51.659 [INFO][4204] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 27 18:23:51.934643 containerd[1585]: 2025-05-27 18:23:51.700 [INFO][4204] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344--0--0--3--6dd1c807ec.novalocal-k8s-coredns--7c65d6cfc9--vbq8l-eth0 coredns-7c65d6cfc9- kube-system 9ce9100c-285e-419f-a320-34fbd743d450 839 0 2025-05-27 18:23:07 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4344-0-0-3-6dd1c807ec.novalocal coredns-7c65d6cfc9-vbq8l eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali016f3d732f1 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="96e4262da3ab3f29203f31f75e081e6b7b76f7a9ceb1bf8848a42eaac4d63409" Namespace="kube-system" Pod="coredns-7c65d6cfc9-vbq8l" WorkloadEndpoint="ci--4344--0--0--3--6dd1c807ec.novalocal-k8s-coredns--7c65d6cfc9--vbq8l-" May 27 18:23:51.934643 containerd[1585]: 2025-05-27 18:23:51.700 [INFO][4204] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="96e4262da3ab3f29203f31f75e081e6b7b76f7a9ceb1bf8848a42eaac4d63409" Namespace="kube-system" Pod="coredns-7c65d6cfc9-vbq8l" WorkloadEndpoint="ci--4344--0--0--3--6dd1c807ec.novalocal-k8s-coredns--7c65d6cfc9--vbq8l-eth0" May 27 18:23:51.934643 containerd[1585]: 2025-05-27 18:23:51.799 [INFO][4221] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="96e4262da3ab3f29203f31f75e081e6b7b76f7a9ceb1bf8848a42eaac4d63409" HandleID="k8s-pod-network.96e4262da3ab3f29203f31f75e081e6b7b76f7a9ceb1bf8848a42eaac4d63409" Workload="ci--4344--0--0--3--6dd1c807ec.novalocal-k8s-coredns--7c65d6cfc9--vbq8l-eth0" May 27 18:23:51.934643 containerd[1585]: 2025-05-27 18:23:51.799 
[INFO][4221] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="96e4262da3ab3f29203f31f75e081e6b7b76f7a9ceb1bf8848a42eaac4d63409" HandleID="k8s-pod-network.96e4262da3ab3f29203f31f75e081e6b7b76f7a9ceb1bf8848a42eaac4d63409" Workload="ci--4344--0--0--3--6dd1c807ec.novalocal-k8s-coredns--7c65d6cfc9--vbq8l-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000352dc0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4344-0-0-3-6dd1c807ec.novalocal", "pod":"coredns-7c65d6cfc9-vbq8l", "timestamp":"2025-05-27 18:23:51.799491763 +0000 UTC"}, Hostname:"ci-4344-0-0-3-6dd1c807ec.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 18:23:51.934643 containerd[1585]: 2025-05-27 18:23:51.800 [INFO][4221] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 18:23:51.934643 containerd[1585]: 2025-05-27 18:23:51.800 [INFO][4221] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 27 18:23:51.934643 containerd[1585]: 2025-05-27 18:23:51.800 [INFO][4221] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344-0-0-3-6dd1c807ec.novalocal' May 27 18:23:51.934643 containerd[1585]: 2025-05-27 18:23:51.828 [INFO][4221] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.96e4262da3ab3f29203f31f75e081e6b7b76f7a9ceb1bf8848a42eaac4d63409" host="ci-4344-0-0-3-6dd1c807ec.novalocal" May 27 18:23:51.934643 containerd[1585]: 2025-05-27 18:23:51.838 [INFO][4221] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344-0-0-3-6dd1c807ec.novalocal" May 27 18:23:51.934643 containerd[1585]: 2025-05-27 18:23:51.852 [INFO][4221] ipam/ipam.go 511: Trying affinity for 192.168.123.64/26 host="ci-4344-0-0-3-6dd1c807ec.novalocal" May 27 18:23:51.934643 containerd[1585]: 2025-05-27 18:23:51.855 [INFO][4221] ipam/ipam.go 158: Attempting to load block cidr=192.168.123.64/26 host="ci-4344-0-0-3-6dd1c807ec.novalocal" May 27 18:23:51.934643 containerd[1585]: 2025-05-27 18:23:51.859 [INFO][4221] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.123.64/26 host="ci-4344-0-0-3-6dd1c807ec.novalocal" May 27 18:23:51.934643 containerd[1585]: 2025-05-27 18:23:51.859 [INFO][4221] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.123.64/26 handle="k8s-pod-network.96e4262da3ab3f29203f31f75e081e6b7b76f7a9ceb1bf8848a42eaac4d63409" host="ci-4344-0-0-3-6dd1c807ec.novalocal" May 27 18:23:51.934643 containerd[1585]: 2025-05-27 18:23:51.865 [INFO][4221] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.96e4262da3ab3f29203f31f75e081e6b7b76f7a9ceb1bf8848a42eaac4d63409 May 27 18:23:51.934643 containerd[1585]: 2025-05-27 18:23:51.871 [INFO][4221] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.123.64/26 handle="k8s-pod-network.96e4262da3ab3f29203f31f75e081e6b7b76f7a9ceb1bf8848a42eaac4d63409" host="ci-4344-0-0-3-6dd1c807ec.novalocal" May 27 18:23:51.934643 containerd[1585]: 2025-05-27 18:23:51.881 [INFO][4221] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.123.68/26] block=192.168.123.64/26 handle="k8s-pod-network.96e4262da3ab3f29203f31f75e081e6b7b76f7a9ceb1bf8848a42eaac4d63409" host="ci-4344-0-0-3-6dd1c807ec.novalocal" May 27 18:23:51.934643 containerd[1585]: 2025-05-27 18:23:51.881 [INFO][4221] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.123.68/26] 
handle="k8s-pod-network.96e4262da3ab3f29203f31f75e081e6b7b76f7a9ceb1bf8848a42eaac4d63409" host="ci-4344-0-0-3-6dd1c807ec.novalocal" May 27 18:23:51.934643 containerd[1585]: 2025-05-27 18:23:51.883 [INFO][4221] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 27 18:23:51.934643 containerd[1585]: 2025-05-27 18:23:51.883 [INFO][4221] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.123.68/26] IPv6=[] ContainerID="96e4262da3ab3f29203f31f75e081e6b7b76f7a9ceb1bf8848a42eaac4d63409" HandleID="k8s-pod-network.96e4262da3ab3f29203f31f75e081e6b7b76f7a9ceb1bf8848a42eaac4d63409" Workload="ci--4344--0--0--3--6dd1c807ec.novalocal-k8s-coredns--7c65d6cfc9--vbq8l-eth0" May 27 18:23:51.939612 containerd[1585]: 2025-05-27 18:23:51.888 [INFO][4204] cni-plugin/k8s.go 418: Populated endpoint ContainerID="96e4262da3ab3f29203f31f75e081e6b7b76f7a9ceb1bf8848a42eaac4d63409" Namespace="kube-system" Pod="coredns-7c65d6cfc9-vbq8l" WorkloadEndpoint="ci--4344--0--0--3--6dd1c807ec.novalocal-k8s-coredns--7c65d6cfc9--vbq8l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344--0--0--3--6dd1c807ec.novalocal-k8s-coredns--7c65d6cfc9--vbq8l-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"9ce9100c-285e-419f-a320-34fbd743d450", ResourceVersion:"839", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 18, 23, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344-0-0-3-6dd1c807ec.novalocal", ContainerID:"", Pod:"coredns-7c65d6cfc9-vbq8l", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.123.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali016f3d732f1", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 18:23:51.939612 containerd[1585]: 2025-05-27 18:23:51.889 [INFO][4204] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.123.68/32] ContainerID="96e4262da3ab3f29203f31f75e081e6b7b76f7a9ceb1bf8848a42eaac4d63409" Namespace="kube-system" Pod="coredns-7c65d6cfc9-vbq8l" WorkloadEndpoint="ci--4344--0--0--3--6dd1c807ec.novalocal-k8s-coredns--7c65d6cfc9--vbq8l-eth0" May 27 18:23:51.939612 containerd[1585]: 2025-05-27 18:23:51.889 [INFO][4204] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali016f3d732f1 ContainerID="96e4262da3ab3f29203f31f75e081e6b7b76f7a9ceb1bf8848a42eaac4d63409" Namespace="kube-system" Pod="coredns-7c65d6cfc9-vbq8l" 
WorkloadEndpoint="ci--4344--0--0--3--6dd1c807ec.novalocal-k8s-coredns--7c65d6cfc9--vbq8l-eth0" May 27 18:23:51.939612 containerd[1585]: 2025-05-27 18:23:51.894 [INFO][4204] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="96e4262da3ab3f29203f31f75e081e6b7b76f7a9ceb1bf8848a42eaac4d63409" Namespace="kube-system" Pod="coredns-7c65d6cfc9-vbq8l" WorkloadEndpoint="ci--4344--0--0--3--6dd1c807ec.novalocal-k8s-coredns--7c65d6cfc9--vbq8l-eth0" May 27 18:23:51.939612 containerd[1585]: 2025-05-27 18:23:51.896 [INFO][4204] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="96e4262da3ab3f29203f31f75e081e6b7b76f7a9ceb1bf8848a42eaac4d63409" Namespace="kube-system" Pod="coredns-7c65d6cfc9-vbq8l" WorkloadEndpoint="ci--4344--0--0--3--6dd1c807ec.novalocal-k8s-coredns--7c65d6cfc9--vbq8l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344--0--0--3--6dd1c807ec.novalocal-k8s-coredns--7c65d6cfc9--vbq8l-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"9ce9100c-285e-419f-a320-34fbd743d450", ResourceVersion:"839", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 18, 23, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344-0-0-3-6dd1c807ec.novalocal", ContainerID:"96e4262da3ab3f29203f31f75e081e6b7b76f7a9ceb1bf8848a42eaac4d63409", Pod:"coredns-7c65d6cfc9-vbq8l", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.123.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali016f3d732f1", MAC:"e6:10:de:39:57:c7", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 18:23:51.939612 containerd[1585]: 2025-05-27 18:23:51.926 [INFO][4204] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="96e4262da3ab3f29203f31f75e081e6b7b76f7a9ceb1bf8848a42eaac4d63409" Namespace="kube-system" Pod="coredns-7c65d6cfc9-vbq8l" WorkloadEndpoint="ci--4344--0--0--3--6dd1c807ec.novalocal-k8s-coredns--7c65d6cfc9--vbq8l-eth0" May 27 18:23:52.033473 containerd[1585]: time="2025-05-27T18:23:52.033347016Z" level=info msg="connecting to shim 96e4262da3ab3f29203f31f75e081e6b7b76f7a9ceb1bf8848a42eaac4d63409" address="unix:///run/containerd/s/7e0b29ea7bf834378420c0c8c15330bcae8f3ea0a84e8185eea43d51e6b5d52b" namespace=k8s.io protocol=ttrpc version=3 May 27 18:23:52.035060 systemd-networkd[1457]: calia453a0a031a: Gained IPv6LL May 27 18:23:52.035844 
systemd-networkd[1457]: cali8f1bd367375: Gained IPv6LL May 27 18:23:52.094612 systemd[1]: Started cri-containerd-96e4262da3ab3f29203f31f75e081e6b7b76f7a9ceb1bf8848a42eaac4d63409.scope - libcontainer container 96e4262da3ab3f29203f31f75e081e6b7b76f7a9ceb1bf8848a42eaac4d63409. May 27 18:23:52.206913 containerd[1585]: time="2025-05-27T18:23:52.206643180Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-vbq8l,Uid:9ce9100c-285e-419f-a320-34fbd743d450,Namespace:kube-system,Attempt:0,} returns sandbox id \"96e4262da3ab3f29203f31f75e081e6b7b76f7a9ceb1bf8848a42eaac4d63409\"" May 27 18:23:52.216894 containerd[1585]: time="2025-05-27T18:23:52.216583247Z" level=info msg="CreateContainer within sandbox \"96e4262da3ab3f29203f31f75e081e6b7b76f7a9ceb1bf8848a42eaac4d63409\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 27 18:23:52.247950 containerd[1585]: time="2025-05-27T18:23:52.246866223Z" level=info msg="Container 74f8b08bddbe8a2da009819735cd033c18cc082e6366f5a0c3105dbbb6ca2772: CDI devices from CRI Config.CDIDevices: []" May 27 18:23:52.264477 containerd[1585]: time="2025-05-27T18:23:52.264429593Z" level=info msg="CreateContainer within sandbox \"96e4262da3ab3f29203f31f75e081e6b7b76f7a9ceb1bf8848a42eaac4d63409\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"74f8b08bddbe8a2da009819735cd033c18cc082e6366f5a0c3105dbbb6ca2772\"" May 27 18:23:52.265654 containerd[1585]: time="2025-05-27T18:23:52.265606297Z" level=info msg="StartContainer for \"74f8b08bddbe8a2da009819735cd033c18cc082e6366f5a0c3105dbbb6ca2772\"" May 27 18:23:52.266719 containerd[1585]: time="2025-05-27T18:23:52.266559172Z" level=info msg="connecting to shim 74f8b08bddbe8a2da009819735cd033c18cc082e6366f5a0c3105dbbb6ca2772" address="unix:///run/containerd/s/7e0b29ea7bf834378420c0c8c15330bcae8f3ea0a84e8185eea43d51e6b5d52b" protocol=ttrpc version=3 May 27 18:23:52.299890 systemd[1]: Started cri-containerd-74f8b08bddbe8a2da009819735cd033c18cc082e6366f5a0c3105dbbb6ca2772.scope - libcontainer container 74f8b08bddbe8a2da009819735cd033c18cc082e6366f5a0c3105dbbb6ca2772. May 27 18:23:52.348497 containerd[1585]: time="2025-05-27T18:23:52.348450076Z" level=info msg="StartContainer for \"74f8b08bddbe8a2da009819735cd033c18cc082e6366f5a0c3105dbbb6ca2772\" returns successfully" May 27 18:23:52.375308 containerd[1585]: time="2025-05-27T18:23:52.375061080Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fa387d247c6751a9fd3e4bb2659b7f075b4326082ca83aebfaca596f8e54d0a6\" id:\"362e6390b547decb6d76ed1f4e299d422d4f38d26d23891c69dae3318f47d836\" pid:4199 exit_status:1 exited_at:{seconds:1748370232 nanos:373782894}" May 27 18:23:52.455211 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2514637038.mount: Deactivated successfully. 
May 27 18:23:52.867198 systemd-networkd[1457]: vxlan.calico: Link UP May 27 18:23:52.867209 systemd-networkd[1457]: vxlan.calico: Gained carrier May 27 18:23:52.996034 kubelet[2794]: I0527 18:23:52.995917 2794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-vbq8l" podStartSLOduration=45.995892976 podStartE2EDuration="45.995892976s" podCreationTimestamp="2025-05-27 18:23:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 18:23:52.992196234 +0000 UTC m=+51.619949289" watchObservedRunningTime="2025-05-27 18:23:52.995892976 +0000 UTC m=+51.623646031" May 27 18:23:53.378846 systemd-networkd[1457]: cali016f3d732f1: Gained IPv6LL May 27 18:23:53.540415 containerd[1585]: time="2025-05-27T18:23:53.539524941Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dfc469868-vsvgq,Uid:bc6463ca-f58d-434c-b3ea-ef33c2ab1e13,Namespace:calico-apiserver,Attempt:0,}" May 27 18:23:53.558050 containerd[1585]: time="2025-05-27T18:23:53.557982112Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dfc469868-jbh4w,Uid:816977dc-b340-4030-b8c8-4987b44f32c4,Namespace:calico-apiserver,Attempt:0,}" May 27 18:23:53.559385 containerd[1585]: time="2025-05-27T18:23:53.559207671Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rmtkq,Uid:5f8d96ad-baa3-44d5-afb2-3ebebf965cf1,Namespace:calico-system,Attempt:0,}" May 27 18:23:53.561892 containerd[1585]: time="2025-05-27T18:23:53.561774919Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-8f77d7b6c-pdb49,Uid:bbd4fd77-1837-421d-a4cd-17332246cc0a,Namespace:calico-system,Attempt:0,}" May 27 18:23:53.928935 systemd-networkd[1457]: cali46379b7951e: Link UP May 27 18:23:53.938029 systemd-networkd[1457]: cali46379b7951e: Gained carrier May 27 18:23:53.975367 containerd[1585]: 2025-05-27 18:23:53.714 [INFO][4441] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344--0--0--3--6dd1c807ec.novalocal-k8s-calico--apiserver--6dfc469868--jbh4w-eth0 calico-apiserver-6dfc469868- calico-apiserver 816977dc-b340-4030-b8c8-4987b44f32c4 848 0 2025-05-27 18:23:19 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6dfc469868 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4344-0-0-3-6dd1c807ec.novalocal calico-apiserver-6dfc469868-jbh4w eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali46379b7951e [] [] }} ContainerID="884ef37ac50e561d1e73bd661b9ceea5ef70eae8feeacab07d0e29e36fe5ad73" Namespace="calico-apiserver" Pod="calico-apiserver-6dfc469868-jbh4w" WorkloadEndpoint="ci--4344--0--0--3--6dd1c807ec.novalocal-k8s-calico--apiserver--6dfc469868--jbh4w-" May 27 18:23:53.975367 containerd[1585]: 2025-05-27 18:23:53.714 [INFO][4441] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="884ef37ac50e561d1e73bd661b9ceea5ef70eae8feeacab07d0e29e36fe5ad73" Namespace="calico-apiserver" Pod="calico-apiserver-6dfc469868-jbh4w" WorkloadEndpoint="ci--4344--0--0--3--6dd1c807ec.novalocal-k8s-calico--apiserver--6dfc469868--jbh4w-eth0" May 27 18:23:53.975367 containerd[1585]: 2025-05-27 18:23:53.820 [INFO][4479] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="884ef37ac50e561d1e73bd661b9ceea5ef70eae8feeacab07d0e29e36fe5ad73" HandleID="k8s-pod-network.884ef37ac50e561d1e73bd661b9ceea5ef70eae8feeacab07d0e29e36fe5ad73" Workload="ci--4344--0--0--3--6dd1c807ec.novalocal-k8s-calico--apiserver--6dfc469868--jbh4w-eth0" May 27 18:23:53.975367 containerd[1585]: 2025-05-27 18:23:53.821 [INFO][4479] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="884ef37ac50e561d1e73bd661b9ceea5ef70eae8feeacab07d0e29e36fe5ad73" HandleID="k8s-pod-network.884ef37ac50e561d1e73bd661b9ceea5ef70eae8feeacab07d0e29e36fe5ad73" Workload="ci--4344--0--0--3--6dd1c807ec.novalocal-k8s-calico--apiserver--6dfc469868--jbh4w-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d9bc0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4344-0-0-3-6dd1c807ec.novalocal", "pod":"calico-apiserver-6dfc469868-jbh4w", "timestamp":"2025-05-27 18:23:53.819965177 +0000 UTC"}, Hostname:"ci-4344-0-0-3-6dd1c807ec.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 18:23:53.975367 containerd[1585]: 2025-05-27 18:23:53.822 [INFO][4479] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 18:23:53.975367 containerd[1585]: 2025-05-27 18:23:53.822 [INFO][4479] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 27 18:23:53.975367 containerd[1585]: 2025-05-27 18:23:53.822 [INFO][4479] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344-0-0-3-6dd1c807ec.novalocal' May 27 18:23:53.975367 containerd[1585]: 2025-05-27 18:23:53.839 [INFO][4479] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.884ef37ac50e561d1e73bd661b9ceea5ef70eae8feeacab07d0e29e36fe5ad73" host="ci-4344-0-0-3-6dd1c807ec.novalocal" May 27 18:23:53.975367 containerd[1585]: 2025-05-27 18:23:53.857 [INFO][4479] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344-0-0-3-6dd1c807ec.novalocal" May 27 18:23:53.975367 containerd[1585]: 2025-05-27 18:23:53.869 [INFO][4479] ipam/ipam.go 511: Trying affinity for 192.168.123.64/26 host="ci-4344-0-0-3-6dd1c807ec.novalocal" May 27 18:23:53.975367 containerd[1585]: 2025-05-27 18:23:53.874 [INFO][4479] ipam/ipam.go 158: Attempting to load block cidr=192.168.123.64/26 host="ci-4344-0-0-3-6dd1c807ec.novalocal" May 27 18:23:53.975367 containerd[1585]: 2025-05-27 18:23:53.878 [INFO][4479] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.123.64/26 host="ci-4344-0-0-3-6dd1c807ec.novalocal" May 27 18:23:53.975367 containerd[1585]: 2025-05-27 18:23:53.878 [INFO][4479] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.123.64/26 handle="k8s-pod-network.884ef37ac50e561d1e73bd661b9ceea5ef70eae8feeacab07d0e29e36fe5ad73" host="ci-4344-0-0-3-6dd1c807ec.novalocal" May 27 18:23:53.975367 containerd[1585]: 2025-05-27 18:23:53.882 [INFO][4479] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.884ef37ac50e561d1e73bd661b9ceea5ef70eae8feeacab07d0e29e36fe5ad73 May 27 18:23:53.975367 containerd[1585]: 2025-05-27 18:23:53.892 [INFO][4479] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.123.64/26 handle="k8s-pod-network.884ef37ac50e561d1e73bd661b9ceea5ef70eae8feeacab07d0e29e36fe5ad73" host="ci-4344-0-0-3-6dd1c807ec.novalocal" May 27 18:23:53.975367 containerd[1585]: 2025-05-27 18:23:53.907 [INFO][4479] ipam/ipam.go 1256: Successfully claimed 
IPs: [192.168.123.69/26] block=192.168.123.64/26 handle="k8s-pod-network.884ef37ac50e561d1e73bd661b9ceea5ef70eae8feeacab07d0e29e36fe5ad73" host="ci-4344-0-0-3-6dd1c807ec.novalocal" May 27 18:23:53.975367 containerd[1585]: 2025-05-27 18:23:53.907 [INFO][4479] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.123.69/26] handle="k8s-pod-network.884ef37ac50e561d1e73bd661b9ceea5ef70eae8feeacab07d0e29e36fe5ad73" host="ci-4344-0-0-3-6dd1c807ec.novalocal" May 27 18:23:53.975367 containerd[1585]: 2025-05-27 18:23:53.907 [INFO][4479] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 27 18:23:53.975367 containerd[1585]: 2025-05-27 18:23:53.907 [INFO][4479] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.123.69/26] IPv6=[] ContainerID="884ef37ac50e561d1e73bd661b9ceea5ef70eae8feeacab07d0e29e36fe5ad73" HandleID="k8s-pod-network.884ef37ac50e561d1e73bd661b9ceea5ef70eae8feeacab07d0e29e36fe5ad73" Workload="ci--4344--0--0--3--6dd1c807ec.novalocal-k8s-calico--apiserver--6dfc469868--jbh4w-eth0" May 27 18:23:53.978716 containerd[1585]: 2025-05-27 18:23:53.916 [INFO][4441] cni-plugin/k8s.go 418: Populated endpoint ContainerID="884ef37ac50e561d1e73bd661b9ceea5ef70eae8feeacab07d0e29e36fe5ad73" Namespace="calico-apiserver" Pod="calico-apiserver-6dfc469868-jbh4w" WorkloadEndpoint="ci--4344--0--0--3--6dd1c807ec.novalocal-k8s-calico--apiserver--6dfc469868--jbh4w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344--0--0--3--6dd1c807ec.novalocal-k8s-calico--apiserver--6dfc469868--jbh4w-eth0", GenerateName:"calico-apiserver-6dfc469868-", Namespace:"calico-apiserver", SelfLink:"", UID:"816977dc-b340-4030-b8c8-4987b44f32c4", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 18, 23, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6dfc469868", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344-0-0-3-6dd1c807ec.novalocal", ContainerID:"", Pod:"calico-apiserver-6dfc469868-jbh4w", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.123.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali46379b7951e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 18:23:53.978716 containerd[1585]: 2025-05-27 18:23:53.917 [INFO][4441] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.123.69/32] ContainerID="884ef37ac50e561d1e73bd661b9ceea5ef70eae8feeacab07d0e29e36fe5ad73" Namespace="calico-apiserver" Pod="calico-apiserver-6dfc469868-jbh4w" WorkloadEndpoint="ci--4344--0--0--3--6dd1c807ec.novalocal-k8s-calico--apiserver--6dfc469868--jbh4w-eth0" May 27 18:23:53.978716 containerd[1585]: 2025-05-27 18:23:53.917 [INFO][4441] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali46379b7951e 
ContainerID="884ef37ac50e561d1e73bd661b9ceea5ef70eae8feeacab07d0e29e36fe5ad73" Namespace="calico-apiserver" Pod="calico-apiserver-6dfc469868-jbh4w" WorkloadEndpoint="ci--4344--0--0--3--6dd1c807ec.novalocal-k8s-calico--apiserver--6dfc469868--jbh4w-eth0" May 27 18:23:53.978716 containerd[1585]: 2025-05-27 18:23:53.936 [INFO][4441] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="884ef37ac50e561d1e73bd661b9ceea5ef70eae8feeacab07d0e29e36fe5ad73" Namespace="calico-apiserver" Pod="calico-apiserver-6dfc469868-jbh4w" WorkloadEndpoint="ci--4344--0--0--3--6dd1c807ec.novalocal-k8s-calico--apiserver--6dfc469868--jbh4w-eth0" May 27 18:23:53.978716 containerd[1585]: 2025-05-27 18:23:53.937 [INFO][4441] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="884ef37ac50e561d1e73bd661b9ceea5ef70eae8feeacab07d0e29e36fe5ad73" Namespace="calico-apiserver" Pod="calico-apiserver-6dfc469868-jbh4w" WorkloadEndpoint="ci--4344--0--0--3--6dd1c807ec.novalocal-k8s-calico--apiserver--6dfc469868--jbh4w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344--0--0--3--6dd1c807ec.novalocal-k8s-calico--apiserver--6dfc469868--jbh4w-eth0", GenerateName:"calico-apiserver-6dfc469868-", Namespace:"calico-apiserver", SelfLink:"", UID:"816977dc-b340-4030-b8c8-4987b44f32c4", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 18, 23, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6dfc469868", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344-0-0-3-6dd1c807ec.novalocal", ContainerID:"884ef37ac50e561d1e73bd661b9ceea5ef70eae8feeacab07d0e29e36fe5ad73", Pod:"calico-apiserver-6dfc469868-jbh4w", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.123.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali46379b7951e", MAC:"da:e7:f1:27:06:4c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 18:23:53.978716 containerd[1585]: 2025-05-27 18:23:53.965 [INFO][4441] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="884ef37ac50e561d1e73bd661b9ceea5ef70eae8feeacab07d0e29e36fe5ad73" Namespace="calico-apiserver" Pod="calico-apiserver-6dfc469868-jbh4w" WorkloadEndpoint="ci--4344--0--0--3--6dd1c807ec.novalocal-k8s-calico--apiserver--6dfc469868--jbh4w-eth0" May 27 18:23:54.081143 containerd[1585]: time="2025-05-27T18:23:54.081087844Z" level=info msg="connecting to shim 884ef37ac50e561d1e73bd661b9ceea5ef70eae8feeacab07d0e29e36fe5ad73" address="unix:///run/containerd/s/3f6b5f893fd31e729a2f782396c8dd5da04b82e38357a6bc39df90f91a6c24cc" namespace=k8s.io protocol=ttrpc version=3 May 27 18:23:54.092212 systemd-networkd[1457]: caliabc9c2bf6ed: Link UP May 27 18:23:54.093772 systemd-networkd[1457]: caliabc9c2bf6ed: Gained carrier May 
27 18:23:54.149110 containerd[1585]: 2025-05-27 18:23:53.790 [INFO][4452] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344--0--0--3--6dd1c807ec.novalocal-k8s-goldmane--8f77d7b6c--pdb49-eth0 goldmane-8f77d7b6c- calico-system bbd4fd77-1837-421d-a4cd-17332246cc0a 850 0 2025-05-27 18:23:22 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:8f77d7b6c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4344-0-0-3-6dd1c807ec.novalocal goldmane-8f77d7b6c-pdb49 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] caliabc9c2bf6ed [] [] }} ContainerID="70981d974aa26e3cad21bfd5a74db159def57f9e9d1f0204bc9464dc8e07a9ac" Namespace="calico-system" Pod="goldmane-8f77d7b6c-pdb49" WorkloadEndpoint="ci--4344--0--0--3--6dd1c807ec.novalocal-k8s-goldmane--8f77d7b6c--pdb49-" May 27 18:23:54.149110 containerd[1585]: 2025-05-27 18:23:53.791 [INFO][4452] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="70981d974aa26e3cad21bfd5a74db159def57f9e9d1f0204bc9464dc8e07a9ac" Namespace="calico-system" Pod="goldmane-8f77d7b6c-pdb49" WorkloadEndpoint="ci--4344--0--0--3--6dd1c807ec.novalocal-k8s-goldmane--8f77d7b6c--pdb49-eth0" May 27 18:23:54.149110 containerd[1585]: 2025-05-27 18:23:53.892 [INFO][4491] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="70981d974aa26e3cad21bfd5a74db159def57f9e9d1f0204bc9464dc8e07a9ac" HandleID="k8s-pod-network.70981d974aa26e3cad21bfd5a74db159def57f9e9d1f0204bc9464dc8e07a9ac" Workload="ci--4344--0--0--3--6dd1c807ec.novalocal-k8s-goldmane--8f77d7b6c--pdb49-eth0" May 27 18:23:54.149110 containerd[1585]: 2025-05-27 18:23:53.894 [INFO][4491] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="70981d974aa26e3cad21bfd5a74db159def57f9e9d1f0204bc9464dc8e07a9ac" HandleID="k8s-pod-network.70981d974aa26e3cad21bfd5a74db159def57f9e9d1f0204bc9464dc8e07a9ac" Workload="ci--4344--0--0--3--6dd1c807ec.novalocal-k8s-goldmane--8f77d7b6c--pdb49-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00032cd20), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4344-0-0-3-6dd1c807ec.novalocal", "pod":"goldmane-8f77d7b6c-pdb49", "timestamp":"2025-05-27 18:23:53.892932634 +0000 UTC"}, Hostname:"ci-4344-0-0-3-6dd1c807ec.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 18:23:54.149110 containerd[1585]: 2025-05-27 18:23:53.894 [INFO][4491] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 18:23:54.149110 containerd[1585]: 2025-05-27 18:23:53.907 [INFO][4491] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 27 18:23:54.149110 containerd[1585]: 2025-05-27 18:23:53.907 [INFO][4491] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344-0-0-3-6dd1c807ec.novalocal' May 27 18:23:54.149110 containerd[1585]: 2025-05-27 18:23:53.943 [INFO][4491] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.70981d974aa26e3cad21bfd5a74db159def57f9e9d1f0204bc9464dc8e07a9ac" host="ci-4344-0-0-3-6dd1c807ec.novalocal" May 27 18:23:54.149110 containerd[1585]: 2025-05-27 18:23:53.959 [INFO][4491] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344-0-0-3-6dd1c807ec.novalocal" May 27 18:23:54.149110 containerd[1585]: 2025-05-27 18:23:53.982 [INFO][4491] ipam/ipam.go 511: Trying affinity for 192.168.123.64/26 host="ci-4344-0-0-3-6dd1c807ec.novalocal" May 27 18:23:54.149110 containerd[1585]: 2025-05-27 18:23:53.995 [INFO][4491] ipam/ipam.go 158: Attempting to load block cidr=192.168.123.64/26 host="ci-4344-0-0-3-6dd1c807ec.novalocal" May 27 18:23:54.149110 containerd[1585]: 2025-05-27 18:23:54.009 [INFO][4491] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.123.64/26 host="ci-4344-0-0-3-6dd1c807ec.novalocal" May 27 18:23:54.149110 containerd[1585]: 2025-05-27 18:23:54.011 [INFO][4491] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.123.64/26 handle="k8s-pod-network.70981d974aa26e3cad21bfd5a74db159def57f9e9d1f0204bc9464dc8e07a9ac" host="ci-4344-0-0-3-6dd1c807ec.novalocal" May 27 18:23:54.149110 containerd[1585]: 2025-05-27 18:23:54.029 [INFO][4491] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.70981d974aa26e3cad21bfd5a74db159def57f9e9d1f0204bc9464dc8e07a9ac May 27 18:23:54.149110 containerd[1585]: 2025-05-27 18:23:54.055 [INFO][4491] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.123.64/26 handle="k8s-pod-network.70981d974aa26e3cad21bfd5a74db159def57f9e9d1f0204bc9464dc8e07a9ac" host="ci-4344-0-0-3-6dd1c807ec.novalocal" May 27 18:23:54.149110 containerd[1585]: 2025-05-27 18:23:54.073 [INFO][4491] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.123.70/26] block=192.168.123.64/26 handle="k8s-pod-network.70981d974aa26e3cad21bfd5a74db159def57f9e9d1f0204bc9464dc8e07a9ac" host="ci-4344-0-0-3-6dd1c807ec.novalocal" May 27 18:23:54.149110 containerd[1585]: 2025-05-27 18:23:54.075 [INFO][4491] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.123.70/26] handle="k8s-pod-network.70981d974aa26e3cad21bfd5a74db159def57f9e9d1f0204bc9464dc8e07a9ac" host="ci-4344-0-0-3-6dd1c807ec.novalocal" May 27 18:23:54.149110 containerd[1585]: 2025-05-27 18:23:54.075 [INFO][4491] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 27 18:23:54.149110 containerd[1585]: 2025-05-27 18:23:54.076 [INFO][4491] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.123.70/26] IPv6=[] ContainerID="70981d974aa26e3cad21bfd5a74db159def57f9e9d1f0204bc9464dc8e07a9ac" HandleID="k8s-pod-network.70981d974aa26e3cad21bfd5a74db159def57f9e9d1f0204bc9464dc8e07a9ac" Workload="ci--4344--0--0--3--6dd1c807ec.novalocal-k8s-goldmane--8f77d7b6c--pdb49-eth0" May 27 18:23:54.150988 containerd[1585]: 2025-05-27 18:23:54.082 [INFO][4452] cni-plugin/k8s.go 418: Populated endpoint ContainerID="70981d974aa26e3cad21bfd5a74db159def57f9e9d1f0204bc9464dc8e07a9ac" Namespace="calico-system" Pod="goldmane-8f77d7b6c-pdb49" WorkloadEndpoint="ci--4344--0--0--3--6dd1c807ec.novalocal-k8s-goldmane--8f77d7b6c--pdb49-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344--0--0--3--6dd1c807ec.novalocal-k8s-goldmane--8f77d7b6c--pdb49-eth0", GenerateName:"goldmane-8f77d7b6c-", Namespace:"calico-system", SelfLink:"", UID:"bbd4fd77-1837-421d-a4cd-17332246cc0a", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 18, 23, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"8f77d7b6c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344-0-0-3-6dd1c807ec.novalocal", ContainerID:"", Pod:"goldmane-8f77d7b6c-pdb49", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.123.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"caliabc9c2bf6ed", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 18:23:54.150988 containerd[1585]: 2025-05-27 18:23:54.082 [INFO][4452] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.123.70/32] ContainerID="70981d974aa26e3cad21bfd5a74db159def57f9e9d1f0204bc9464dc8e07a9ac" Namespace="calico-system" Pod="goldmane-8f77d7b6c-pdb49" WorkloadEndpoint="ci--4344--0--0--3--6dd1c807ec.novalocal-k8s-goldmane--8f77d7b6c--pdb49-eth0" May 27 18:23:54.150988 containerd[1585]: 2025-05-27 18:23:54.082 [INFO][4452] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliabc9c2bf6ed ContainerID="70981d974aa26e3cad21bfd5a74db159def57f9e9d1f0204bc9464dc8e07a9ac" Namespace="calico-system" Pod="goldmane-8f77d7b6c-pdb49" WorkloadEndpoint="ci--4344--0--0--3--6dd1c807ec.novalocal-k8s-goldmane--8f77d7b6c--pdb49-eth0" May 27 18:23:54.150988 containerd[1585]: 2025-05-27 18:23:54.098 [INFO][4452] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="70981d974aa26e3cad21bfd5a74db159def57f9e9d1f0204bc9464dc8e07a9ac" Namespace="calico-system" Pod="goldmane-8f77d7b6c-pdb49" WorkloadEndpoint="ci--4344--0--0--3--6dd1c807ec.novalocal-k8s-goldmane--8f77d7b6c--pdb49-eth0" May 27 18:23:54.150988 containerd[1585]: 2025-05-27 18:23:54.103 [INFO][4452] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="70981d974aa26e3cad21bfd5a74db159def57f9e9d1f0204bc9464dc8e07a9ac" Namespace="calico-system" Pod="goldmane-8f77d7b6c-pdb49" WorkloadEndpoint="ci--4344--0--0--3--6dd1c807ec.novalocal-k8s-goldmane--8f77d7b6c--pdb49-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344--0--0--3--6dd1c807ec.novalocal-k8s-goldmane--8f77d7b6c--pdb49-eth0", GenerateName:"goldmane-8f77d7b6c-", Namespace:"calico-system", SelfLink:"", UID:"bbd4fd77-1837-421d-a4cd-17332246cc0a", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 18, 23, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"8f77d7b6c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344-0-0-3-6dd1c807ec.novalocal", ContainerID:"70981d974aa26e3cad21bfd5a74db159def57f9e9d1f0204bc9464dc8e07a9ac", Pod:"goldmane-8f77d7b6c-pdb49", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.123.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"caliabc9c2bf6ed", MAC:"9a:1e:d9:68:44:c6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 18:23:54.150988 containerd[1585]: 2025-05-27 18:23:54.133 [INFO][4452] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="70981d974aa26e3cad21bfd5a74db159def57f9e9d1f0204bc9464dc8e07a9ac" Namespace="calico-system" Pod="goldmane-8f77d7b6c-pdb49" WorkloadEndpoint="ci--4344--0--0--3--6dd1c807ec.novalocal-k8s-goldmane--8f77d7b6c--pdb49-eth0" May 27 18:23:54.170970 systemd[1]: Started cri-containerd-884ef37ac50e561d1e73bd661b9ceea5ef70eae8feeacab07d0e29e36fe5ad73.scope - libcontainer container 884ef37ac50e561d1e73bd661b9ceea5ef70eae8feeacab07d0e29e36fe5ad73. 
May 27 18:23:54.232828 systemd-networkd[1457]: cali30c2a3d57fe: Link UP May 27 18:23:54.234097 systemd-networkd[1457]: cali30c2a3d57fe: Gained carrier May 27 18:23:54.268500 containerd[1585]: 2025-05-27 18:23:53.780 [INFO][4422] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344--0--0--3--6dd1c807ec.novalocal-k8s-calico--apiserver--6dfc469868--vsvgq-eth0 calico-apiserver-6dfc469868- calico-apiserver bc6463ca-f58d-434c-b3ea-ef33c2ab1e13 849 0 2025-05-27 18:23:19 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6dfc469868 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4344-0-0-3-6dd1c807ec.novalocal calico-apiserver-6dfc469868-vsvgq eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali30c2a3d57fe [] [] }} ContainerID="b94a5eb29cebda6baffb8480ef0e4dcc75265ece97f67ffe5599bc294e57f666" Namespace="calico-apiserver" Pod="calico-apiserver-6dfc469868-vsvgq" WorkloadEndpoint="ci--4344--0--0--3--6dd1c807ec.novalocal-k8s-calico--apiserver--6dfc469868--vsvgq-" May 27 18:23:54.268500 containerd[1585]: 2025-05-27 18:23:53.780 [INFO][4422] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b94a5eb29cebda6baffb8480ef0e4dcc75265ece97f67ffe5599bc294e57f666" Namespace="calico-apiserver" Pod="calico-apiserver-6dfc469868-vsvgq" WorkloadEndpoint="ci--4344--0--0--3--6dd1c807ec.novalocal-k8s-calico--apiserver--6dfc469868--vsvgq-eth0" May 27 18:23:54.268500 containerd[1585]: 2025-05-27 18:23:54.013 [INFO][4489] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b94a5eb29cebda6baffb8480ef0e4dcc75265ece97f67ffe5599bc294e57f666" HandleID="k8s-pod-network.b94a5eb29cebda6baffb8480ef0e4dcc75265ece97f67ffe5599bc294e57f666" Workload="ci--4344--0--0--3--6dd1c807ec.novalocal-k8s-calico--apiserver--6dfc469868--vsvgq-eth0" May 27 18:23:54.268500 containerd[1585]: 2025-05-27 18:23:54.013 [INFO][4489] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b94a5eb29cebda6baffb8480ef0e4dcc75265ece97f67ffe5599bc294e57f666" HandleID="k8s-pod-network.b94a5eb29cebda6baffb8480ef0e4dcc75265ece97f67ffe5599bc294e57f666" Workload="ci--4344--0--0--3--6dd1c807ec.novalocal-k8s-calico--apiserver--6dfc469868--vsvgq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000305690), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4344-0-0-3-6dd1c807ec.novalocal", "pod":"calico-apiserver-6dfc469868-vsvgq", "timestamp":"2025-05-27 18:23:54.013715658 +0000 UTC"}, Hostname:"ci-4344-0-0-3-6dd1c807ec.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 18:23:54.268500 containerd[1585]: 2025-05-27 18:23:54.014 [INFO][4489] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 18:23:54.268500 containerd[1585]: 2025-05-27 18:23:54.076 [INFO][4489] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 27 18:23:54.268500 containerd[1585]: 2025-05-27 18:23:54.076 [INFO][4489] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344-0-0-3-6dd1c807ec.novalocal' May 27 18:23:54.268500 containerd[1585]: 2025-05-27 18:23:54.112 [INFO][4489] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b94a5eb29cebda6baffb8480ef0e4dcc75265ece97f67ffe5599bc294e57f666" host="ci-4344-0-0-3-6dd1c807ec.novalocal" May 27 18:23:54.268500 containerd[1585]: 2025-05-27 18:23:54.139 [INFO][4489] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344-0-0-3-6dd1c807ec.novalocal" May 27 18:23:54.268500 containerd[1585]: 2025-05-27 18:23:54.155 [INFO][4489] ipam/ipam.go 511: Trying affinity for 192.168.123.64/26 host="ci-4344-0-0-3-6dd1c807ec.novalocal" May 27 18:23:54.268500 containerd[1585]: 2025-05-27 18:23:54.161 [INFO][4489] ipam/ipam.go 158: Attempting to load block cidr=192.168.123.64/26 host="ci-4344-0-0-3-6dd1c807ec.novalocal" May 27 18:23:54.268500 containerd[1585]: 2025-05-27 18:23:54.177 [INFO][4489] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.123.64/26 host="ci-4344-0-0-3-6dd1c807ec.novalocal" May 27 18:23:54.268500 containerd[1585]: 2025-05-27 18:23:54.178 [INFO][4489] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.123.64/26 handle="k8s-pod-network.b94a5eb29cebda6baffb8480ef0e4dcc75265ece97f67ffe5599bc294e57f666" host="ci-4344-0-0-3-6dd1c807ec.novalocal" May 27 18:23:54.268500 containerd[1585]: 2025-05-27 18:23:54.185 [INFO][4489] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b94a5eb29cebda6baffb8480ef0e4dcc75265ece97f67ffe5599bc294e57f666 May 27 18:23:54.268500 containerd[1585]: 2025-05-27 18:23:54.203 [INFO][4489] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.123.64/26 handle="k8s-pod-network.b94a5eb29cebda6baffb8480ef0e4dcc75265ece97f67ffe5599bc294e57f666" host="ci-4344-0-0-3-6dd1c807ec.novalocal" May 27 18:23:54.268500 containerd[1585]: 2025-05-27 18:23:54.213 [INFO][4489] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.123.71/26] block=192.168.123.64/26 handle="k8s-pod-network.b94a5eb29cebda6baffb8480ef0e4dcc75265ece97f67ffe5599bc294e57f666" host="ci-4344-0-0-3-6dd1c807ec.novalocal" May 27 18:23:54.268500 containerd[1585]: 2025-05-27 18:23:54.214 [INFO][4489] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.123.71/26] handle="k8s-pod-network.b94a5eb29cebda6baffb8480ef0e4dcc75265ece97f67ffe5599bc294e57f666" host="ci-4344-0-0-3-6dd1c807ec.novalocal" May 27 18:23:54.268500 containerd[1585]: 2025-05-27 18:23:54.214 [INFO][4489] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 27 18:23:54.268500 containerd[1585]: 2025-05-27 18:23:54.214 [INFO][4489] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.123.71/26] IPv6=[] ContainerID="b94a5eb29cebda6baffb8480ef0e4dcc75265ece97f67ffe5599bc294e57f666" HandleID="k8s-pod-network.b94a5eb29cebda6baffb8480ef0e4dcc75265ece97f67ffe5599bc294e57f666" Workload="ci--4344--0--0--3--6dd1c807ec.novalocal-k8s-calico--apiserver--6dfc469868--vsvgq-eth0" May 27 18:23:54.270275 containerd[1585]: 2025-05-27 18:23:54.218 [INFO][4422] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b94a5eb29cebda6baffb8480ef0e4dcc75265ece97f67ffe5599bc294e57f666" Namespace="calico-apiserver" Pod="calico-apiserver-6dfc469868-vsvgq" WorkloadEndpoint="ci--4344--0--0--3--6dd1c807ec.novalocal-k8s-calico--apiserver--6dfc469868--vsvgq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344--0--0--3--6dd1c807ec.novalocal-k8s-calico--apiserver--6dfc469868--vsvgq-eth0", GenerateName:"calico-apiserver-6dfc469868-", Namespace:"calico-apiserver", SelfLink:"", UID:"bc6463ca-f58d-434c-b3ea-ef33c2ab1e13", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 18, 23, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6dfc469868", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344-0-0-3-6dd1c807ec.novalocal", ContainerID:"", Pod:"calico-apiserver-6dfc469868-vsvgq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.123.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali30c2a3d57fe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 18:23:54.270275 containerd[1585]: 2025-05-27 18:23:54.218 [INFO][4422] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.123.71/32] ContainerID="b94a5eb29cebda6baffb8480ef0e4dcc75265ece97f67ffe5599bc294e57f666" Namespace="calico-apiserver" Pod="calico-apiserver-6dfc469868-vsvgq" WorkloadEndpoint="ci--4344--0--0--3--6dd1c807ec.novalocal-k8s-calico--apiserver--6dfc469868--vsvgq-eth0" May 27 18:23:54.270275 containerd[1585]: 2025-05-27 18:23:54.218 [INFO][4422] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali30c2a3d57fe ContainerID="b94a5eb29cebda6baffb8480ef0e4dcc75265ece97f67ffe5599bc294e57f666" Namespace="calico-apiserver" Pod="calico-apiserver-6dfc469868-vsvgq" WorkloadEndpoint="ci--4344--0--0--3--6dd1c807ec.novalocal-k8s-calico--apiserver--6dfc469868--vsvgq-eth0" May 27 18:23:54.270275 containerd[1585]: 2025-05-27 18:23:54.235 [INFO][4422] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b94a5eb29cebda6baffb8480ef0e4dcc75265ece97f67ffe5599bc294e57f666" Namespace="calico-apiserver" Pod="calico-apiserver-6dfc469868-vsvgq" 
WorkloadEndpoint="ci--4344--0--0--3--6dd1c807ec.novalocal-k8s-calico--apiserver--6dfc469868--vsvgq-eth0" May 27 18:23:54.270275 containerd[1585]: 2025-05-27 18:23:54.235 [INFO][4422] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b94a5eb29cebda6baffb8480ef0e4dcc75265ece97f67ffe5599bc294e57f666" Namespace="calico-apiserver" Pod="calico-apiserver-6dfc469868-vsvgq" WorkloadEndpoint="ci--4344--0--0--3--6dd1c807ec.novalocal-k8s-calico--apiserver--6dfc469868--vsvgq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344--0--0--3--6dd1c807ec.novalocal-k8s-calico--apiserver--6dfc469868--vsvgq-eth0", GenerateName:"calico-apiserver-6dfc469868-", Namespace:"calico-apiserver", SelfLink:"", UID:"bc6463ca-f58d-434c-b3ea-ef33c2ab1e13", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 18, 23, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6dfc469868", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344-0-0-3-6dd1c807ec.novalocal", ContainerID:"b94a5eb29cebda6baffb8480ef0e4dcc75265ece97f67ffe5599bc294e57f666", Pod:"calico-apiserver-6dfc469868-vsvgq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.123.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali30c2a3d57fe", MAC:"9e:31:66:6d:e4:53", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 18:23:54.270275 containerd[1585]: 2025-05-27 18:23:54.257 [INFO][4422] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b94a5eb29cebda6baffb8480ef0e4dcc75265ece97f67ffe5599bc294e57f666" Namespace="calico-apiserver" Pod="calico-apiserver-6dfc469868-vsvgq" WorkloadEndpoint="ci--4344--0--0--3--6dd1c807ec.novalocal-k8s-calico--apiserver--6dfc469868--vsvgq-eth0" May 27 18:23:54.295591 containerd[1585]: time="2025-05-27T18:23:54.295399280Z" level=info msg="connecting to shim 70981d974aa26e3cad21bfd5a74db159def57f9e9d1f0204bc9464dc8e07a9ac" address="unix:///run/containerd/s/ac6a38e10b75478e67bc704d9ffd6a1a56447b876810b120f63ef5a38e20399a" namespace=k8s.io protocol=ttrpc version=3 May 27 18:23:54.340653 systemd-networkd[1457]: vxlan.calico: Gained IPv6LL May 27 18:23:54.340918 systemd-networkd[1457]: calif4cf088a20b: Link UP May 27 18:23:54.348399 systemd-networkd[1457]: calif4cf088a20b: Gained carrier May 27 18:23:54.366880 containerd[1585]: time="2025-05-27T18:23:54.366820097Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dfc469868-jbh4w,Uid:816977dc-b340-4030-b8c8-4987b44f32c4,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"884ef37ac50e561d1e73bd661b9ceea5ef70eae8feeacab07d0e29e36fe5ad73\"" May 27 18:23:54.382963 containerd[1585]: 2025-05-27 18:23:53.949 [INFO][4446] cni-plugin/plugin.go 340: Calico CNI found 
existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344--0--0--3--6dd1c807ec.novalocal-k8s-csi--node--driver--rmtkq-eth0 csi-node-driver- calico-system 5f8d96ad-baa3-44d5-afb2-3ebebf965cf1 724 0 2025-05-27 18:23:23 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:68bf44dd5 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4344-0-0-3-6dd1c807ec.novalocal csi-node-driver-rmtkq eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calif4cf088a20b [] [] }} ContainerID="3f6effd24fcbca4c661c5a1f4e124175e7e772199be3a0eb51d9176cfaf64717" Namespace="calico-system" Pod="csi-node-driver-rmtkq" WorkloadEndpoint="ci--4344--0--0--3--6dd1c807ec.novalocal-k8s-csi--node--driver--rmtkq-" May 27 18:23:54.382963 containerd[1585]: 2025-05-27 18:23:53.949 [INFO][4446] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3f6effd24fcbca4c661c5a1f4e124175e7e772199be3a0eb51d9176cfaf64717" Namespace="calico-system" Pod="csi-node-driver-rmtkq" WorkloadEndpoint="ci--4344--0--0--3--6dd1c807ec.novalocal-k8s-csi--node--driver--rmtkq-eth0" May 27 18:23:54.382963 containerd[1585]: 2025-05-27 18:23:54.186 [INFO][4511] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3f6effd24fcbca4c661c5a1f4e124175e7e772199be3a0eb51d9176cfaf64717" HandleID="k8s-pod-network.3f6effd24fcbca4c661c5a1f4e124175e7e772199be3a0eb51d9176cfaf64717" Workload="ci--4344--0--0--3--6dd1c807ec.novalocal-k8s-csi--node--driver--rmtkq-eth0" May 27 18:23:54.382963 containerd[1585]: 2025-05-27 18:23:54.186 [INFO][4511] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3f6effd24fcbca4c661c5a1f4e124175e7e772199be3a0eb51d9176cfaf64717" HandleID="k8s-pod-network.3f6effd24fcbca4c661c5a1f4e124175e7e772199be3a0eb51d9176cfaf64717" Workload="ci--4344--0--0--3--6dd1c807ec.novalocal-k8s-csi--node--driver--rmtkq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003991f0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4344-0-0-3-6dd1c807ec.novalocal", "pod":"csi-node-driver-rmtkq", "timestamp":"2025-05-27 18:23:54.18511584 +0000 UTC"}, Hostname:"ci-4344-0-0-3-6dd1c807ec.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 18:23:54.382963 containerd[1585]: 2025-05-27 18:23:54.186 [INFO][4511] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 18:23:54.382963 containerd[1585]: 2025-05-27 18:23:54.214 [INFO][4511] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 27 18:23:54.382963 containerd[1585]: 2025-05-27 18:23:54.214 [INFO][4511] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344-0-0-3-6dd1c807ec.novalocal' May 27 18:23:54.382963 containerd[1585]: 2025-05-27 18:23:54.253 [INFO][4511] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3f6effd24fcbca4c661c5a1f4e124175e7e772199be3a0eb51d9176cfaf64717" host="ci-4344-0-0-3-6dd1c807ec.novalocal" May 27 18:23:54.382963 containerd[1585]: 2025-05-27 18:23:54.265 [INFO][4511] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344-0-0-3-6dd1c807ec.novalocal" May 27 18:23:54.382963 containerd[1585]: 2025-05-27 18:23:54.279 [INFO][4511] ipam/ipam.go 511: Trying affinity for 192.168.123.64/26 host="ci-4344-0-0-3-6dd1c807ec.novalocal" May 27 18:23:54.382963 containerd[1585]: 2025-05-27 18:23:54.283 [INFO][4511] ipam/ipam.go 158: Attempting to load block cidr=192.168.123.64/26 host="ci-4344-0-0-3-6dd1c807ec.novalocal" May 27 18:23:54.382963 containerd[1585]: 2025-05-27 18:23:54.292 [INFO][4511] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.123.64/26 host="ci-4344-0-0-3-6dd1c807ec.novalocal" May 27 18:23:54.382963 containerd[1585]: 2025-05-27 18:23:54.293 [INFO][4511] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.123.64/26 handle="k8s-pod-network.3f6effd24fcbca4c661c5a1f4e124175e7e772199be3a0eb51d9176cfaf64717" host="ci-4344-0-0-3-6dd1c807ec.novalocal" May 27 18:23:54.382963 containerd[1585]: 2025-05-27 18:23:54.297 [INFO][4511] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3f6effd24fcbca4c661c5a1f4e124175e7e772199be3a0eb51d9176cfaf64717 May 27 18:23:54.382963 containerd[1585]: 2025-05-27 18:23:54.307 [INFO][4511] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.123.64/26 handle="k8s-pod-network.3f6effd24fcbca4c661c5a1f4e124175e7e772199be3a0eb51d9176cfaf64717" host="ci-4344-0-0-3-6dd1c807ec.novalocal" May 27 18:23:54.382963 containerd[1585]: 2025-05-27 18:23:54.330 [INFO][4511] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.123.72/26] block=192.168.123.64/26 handle="k8s-pod-network.3f6effd24fcbca4c661c5a1f4e124175e7e772199be3a0eb51d9176cfaf64717" host="ci-4344-0-0-3-6dd1c807ec.novalocal" May 27 18:23:54.382963 containerd[1585]: 2025-05-27 18:23:54.332 [INFO][4511] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.123.72/26] handle="k8s-pod-network.3f6effd24fcbca4c661c5a1f4e124175e7e772199be3a0eb51d9176cfaf64717" host="ci-4344-0-0-3-6dd1c807ec.novalocal" May 27 18:23:54.382963 containerd[1585]: 2025-05-27 18:23:54.332 [INFO][4511] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 27 18:23:54.382963 containerd[1585]: 2025-05-27 18:23:54.332 [INFO][4511] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.123.72/26] IPv6=[] ContainerID="3f6effd24fcbca4c661c5a1f4e124175e7e772199be3a0eb51d9176cfaf64717" HandleID="k8s-pod-network.3f6effd24fcbca4c661c5a1f4e124175e7e772199be3a0eb51d9176cfaf64717" Workload="ci--4344--0--0--3--6dd1c807ec.novalocal-k8s-csi--node--driver--rmtkq-eth0" May 27 18:23:54.384124 containerd[1585]: 2025-05-27 18:23:54.336 [INFO][4446] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3f6effd24fcbca4c661c5a1f4e124175e7e772199be3a0eb51d9176cfaf64717" Namespace="calico-system" Pod="csi-node-driver-rmtkq" WorkloadEndpoint="ci--4344--0--0--3--6dd1c807ec.novalocal-k8s-csi--node--driver--rmtkq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344--0--0--3--6dd1c807ec.novalocal-k8s-csi--node--driver--rmtkq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5f8d96ad-baa3-44d5-afb2-3ebebf965cf1", ResourceVersion:"724", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 18, 23, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"68bf44dd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344-0-0-3-6dd1c807ec.novalocal", ContainerID:"", Pod:"csi-node-driver-rmtkq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.123.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif4cf088a20b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 18:23:54.384124 containerd[1585]: 2025-05-27 18:23:54.336 [INFO][4446] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.123.72/32] ContainerID="3f6effd24fcbca4c661c5a1f4e124175e7e772199be3a0eb51d9176cfaf64717" Namespace="calico-system" Pod="csi-node-driver-rmtkq" WorkloadEndpoint="ci--4344--0--0--3--6dd1c807ec.novalocal-k8s-csi--node--driver--rmtkq-eth0" May 27 18:23:54.384124 containerd[1585]: 2025-05-27 18:23:54.336 [INFO][4446] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif4cf088a20b ContainerID="3f6effd24fcbca4c661c5a1f4e124175e7e772199be3a0eb51d9176cfaf64717" Namespace="calico-system" Pod="csi-node-driver-rmtkq" WorkloadEndpoint="ci--4344--0--0--3--6dd1c807ec.novalocal-k8s-csi--node--driver--rmtkq-eth0" May 27 18:23:54.384124 containerd[1585]: 2025-05-27 18:23:54.351 [INFO][4446] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3f6effd24fcbca4c661c5a1f4e124175e7e772199be3a0eb51d9176cfaf64717" Namespace="calico-system" Pod="csi-node-driver-rmtkq" WorkloadEndpoint="ci--4344--0--0--3--6dd1c807ec.novalocal-k8s-csi--node--driver--rmtkq-eth0" May 27 18:23:54.384124 containerd[1585]: 2025-05-27 18:23:54.351 [INFO][4446] cni-plugin/k8s.go 446: Added Mac, interface 
name, and active container ID to endpoint ContainerID="3f6effd24fcbca4c661c5a1f4e124175e7e772199be3a0eb51d9176cfaf64717" Namespace="calico-system" Pod="csi-node-driver-rmtkq" WorkloadEndpoint="ci--4344--0--0--3--6dd1c807ec.novalocal-k8s-csi--node--driver--rmtkq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344--0--0--3--6dd1c807ec.novalocal-k8s-csi--node--driver--rmtkq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5f8d96ad-baa3-44d5-afb2-3ebebf965cf1", ResourceVersion:"724", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 18, 23, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"68bf44dd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344-0-0-3-6dd1c807ec.novalocal", ContainerID:"3f6effd24fcbca4c661c5a1f4e124175e7e772199be3a0eb51d9176cfaf64717", Pod:"csi-node-driver-rmtkq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.123.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif4cf088a20b", MAC:"82:37:75:40:e0:47", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 18:23:54.384124 containerd[1585]: 2025-05-27 18:23:54.370 [INFO][4446] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3f6effd24fcbca4c661c5a1f4e124175e7e772199be3a0eb51d9176cfaf64717" Namespace="calico-system" Pod="csi-node-driver-rmtkq" WorkloadEndpoint="ci--4344--0--0--3--6dd1c807ec.novalocal-k8s-csi--node--driver--rmtkq-eth0" May 27 18:23:54.388945 containerd[1585]: time="2025-05-27T18:23:54.388421370Z" level=info msg="connecting to shim b94a5eb29cebda6baffb8480ef0e4dcc75265ece97f67ffe5599bc294e57f666" address="unix:///run/containerd/s/db1e99a132c1b2b71e43174359c038acd2679b973544cae760fce2ed4ed30352" namespace=k8s.io protocol=ttrpc version=3 May 27 18:23:54.421140 systemd[1]: Started cri-containerd-70981d974aa26e3cad21bfd5a74db159def57f9e9d1f0204bc9464dc8e07a9ac.scope - libcontainer container 70981d974aa26e3cad21bfd5a74db159def57f9e9d1f0204bc9464dc8e07a9ac. May 27 18:23:54.435922 systemd[1]: Started cri-containerd-b94a5eb29cebda6baffb8480ef0e4dcc75265ece97f67ffe5599bc294e57f666.scope - libcontainer container b94a5eb29cebda6baffb8480ef0e4dcc75265ece97f67ffe5599bc294e57f666. May 27 18:23:54.466929 containerd[1585]: time="2025-05-27T18:23:54.466525034Z" level=info msg="connecting to shim 3f6effd24fcbca4c661c5a1f4e124175e7e772199be3a0eb51d9176cfaf64717" address="unix:///run/containerd/s/317f76848b7fb9780e69a48121da68f27210a5fc1c5e02f6f5c28929f2746db8" namespace=k8s.io protocol=ttrpc version=3 May 27 18:23:54.522044 systemd[1]: Started cri-containerd-3f6effd24fcbca4c661c5a1f4e124175e7e772199be3a0eb51d9176cfaf64717.scope - libcontainer container 3f6effd24fcbca4c661c5a1f4e124175e7e772199be3a0eb51d9176cfaf64717. 
May 27 18:23:54.571992 containerd[1585]: time="2025-05-27T18:23:54.567997690Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-8f77d7b6c-pdb49,Uid:bbd4fd77-1837-421d-a4cd-17332246cc0a,Namespace:calico-system,Attempt:0,} returns sandbox id \"70981d974aa26e3cad21bfd5a74db159def57f9e9d1f0204bc9464dc8e07a9ac\"" May 27 18:23:54.591881 containerd[1585]: time="2025-05-27T18:23:54.591551029Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dfc469868-vsvgq,Uid:bc6463ca-f58d-434c-b3ea-ef33c2ab1e13,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"b94a5eb29cebda6baffb8480ef0e4dcc75265ece97f67ffe5599bc294e57f666\"" May 27 18:23:54.611086 containerd[1585]: time="2025-05-27T18:23:54.611026649Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rmtkq,Uid:5f8d96ad-baa3-44d5-afb2-3ebebf965cf1,Namespace:calico-system,Attempt:0,} returns sandbox id \"3f6effd24fcbca4c661c5a1f4e124175e7e772199be3a0eb51d9176cfaf64717\"" May 27 18:23:55.235539 systemd-networkd[1457]: cali46379b7951e: Gained IPv6LL May 27 18:23:55.938974 systemd-networkd[1457]: caliabc9c2bf6ed: Gained IPv6LL May 27 18:23:56.002834 systemd-networkd[1457]: cali30c2a3d57fe: Gained IPv6LL May 27 18:23:56.323351 systemd-networkd[1457]: calif4cf088a20b: Gained IPv6LL May 27 18:23:56.782961 containerd[1585]: time="2025-05-27T18:23:56.782799472Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 18:23:56.784496 containerd[1585]: time="2025-05-27T18:23:56.784470091Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.0: active requests=0, bytes read=51178512" May 27 18:23:56.785444 containerd[1585]: time="2025-05-27T18:23:56.785403782Z" level=info msg="ImageCreate event name:\"sha256:094053209304a3d20e6561c18d37ac2dc4c7fbb68c1579d9864c303edebffa50\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 18:23:56.791473 containerd[1585]: time="2025-05-27T18:23:56.791439739Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:eb5bc5c9e7a71f1d8ea69bbcc8e54b84fb7ec1e32d919c8b148f80b770f20182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 18:23:56.794198 containerd[1585]: time="2025-05-27T18:23:56.792376896Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" with image id \"sha256:094053209304a3d20e6561c18d37ac2dc4c7fbb68c1579d9864c303edebffa50\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:eb5bc5c9e7a71f1d8ea69bbcc8e54b84fb7ec1e32d919c8b148f80b770f20182\", size \"52671183\" in 6.094068935s" May 27 18:23:56.795808 containerd[1585]: time="2025-05-27T18:23:56.795700138Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" returns image reference \"sha256:094053209304a3d20e6561c18d37ac2dc4c7fbb68c1579d9864c303edebffa50\"" May 27 18:23:56.800637 containerd[1585]: time="2025-05-27T18:23:56.799829966Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 27 18:23:56.818762 containerd[1585]: time="2025-05-27T18:23:56.818729173Z" level=info msg="CreateContainer within sandbox \"dd41d98f15f637588afcb0b7348149f786cc56c9ac7291a637b985a644e50d5e\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 27 18:23:56.835834 containerd[1585]: time="2025-05-27T18:23:56.835796791Z" level=info msg="Container 
5aef368c4a95271ce37b9a677a96840bc90add35964beb96e0d6ced031d9d66f: CDI devices from CRI Config.CDIDevices: []" May 27 18:23:56.839958 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3104196870.mount: Deactivated successfully. May 27 18:23:56.852125 containerd[1585]: time="2025-05-27T18:23:56.852068824Z" level=info msg="CreateContainer within sandbox \"dd41d98f15f637588afcb0b7348149f786cc56c9ac7291a637b985a644e50d5e\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"5aef368c4a95271ce37b9a677a96840bc90add35964beb96e0d6ced031d9d66f\"" May 27 18:23:56.855513 containerd[1585]: time="2025-05-27T18:23:56.853850663Z" level=info msg="StartContainer for \"5aef368c4a95271ce37b9a677a96840bc90add35964beb96e0d6ced031d9d66f\"" May 27 18:23:56.856812 containerd[1585]: time="2025-05-27T18:23:56.856783417Z" level=info msg="connecting to shim 5aef368c4a95271ce37b9a677a96840bc90add35964beb96e0d6ced031d9d66f" address="unix:///run/containerd/s/c25625de3e91f7bc73dc5be05acb89ac88f1427141ed5a3c28d5d76d08967960" protocol=ttrpc version=3 May 27 18:23:56.897002 systemd[1]: Started cri-containerd-5aef368c4a95271ce37b9a677a96840bc90add35964beb96e0d6ced031d9d66f.scope - libcontainer container 5aef368c4a95271ce37b9a677a96840bc90add35964beb96e0d6ced031d9d66f. May 27 18:23:56.976795 containerd[1585]: time="2025-05-27T18:23:56.976747538Z" level=info msg="StartContainer for \"5aef368c4a95271ce37b9a677a96840bc90add35964beb96e0d6ced031d9d66f\" returns successfully" May 27 18:23:57.051372 kubelet[2794]: I0527 18:23:57.049889 2794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-59576795c9-hd85v" podStartSLOduration=27.949044402 podStartE2EDuration="34.04984332s" podCreationTimestamp="2025-05-27 18:23:23 +0000 UTC" firstStartedPulling="2025-05-27 18:23:50.69699434 +0000 UTC m=+49.324747395" lastFinishedPulling="2025-05-27 18:23:56.797793258 +0000 UTC m=+55.425546313" observedRunningTime="2025-05-27 18:23:57.045276998 +0000 UTC m=+55.673030053" watchObservedRunningTime="2025-05-27 18:23:57.04984332 +0000 UTC m=+55.677596375" May 27 18:23:57.106058 containerd[1585]: time="2025-05-27T18:23:57.105851933Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5aef368c4a95271ce37b9a677a96840bc90add35964beb96e0d6ced031d9d66f\" id:\"db5c7cfb7949f7f8abf2a630d3219f9aa9f9d4ae9601b31dc2cdc50a2e74ada3\" pid:4794 exit_status:1 exited_at:{seconds:1748370237 nanos:103222468}" May 27 18:23:57.167143 containerd[1585]: time="2025-05-27T18:23:57.167088917Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 18:23:57.200301 containerd[1585]: time="2025-05-27T18:23:57.200240161Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 27 18:23:57.200581 containerd[1585]: time="2025-05-27T18:23:57.200275090Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 27 
18:23:57.201349 kubelet[2794]: E0527 18:23:57.200875 2794 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 27 18:23:57.201349 kubelet[2794]: E0527 18:23:57.200986 2794 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 27 18:23:57.202090 containerd[1585]: time="2025-05-27T18:23:57.201878332Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\"" May 27 18:23:57.206669 kubelet[2794]: E0527 18:23:57.206589 2794 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:7721ee7d67474ccca7a05148d0f91fc5,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wqlkp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5f7dd7cb5-258cr_calico-system(107d1d0f-9dd5-43de-81cb-0f8c43731395): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 18:23:58.106483 containerd[1585]: time="2025-05-27T18:23:58.106318107Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5aef368c4a95271ce37b9a677a96840bc90add35964beb96e0d6ced031d9d66f\" id:\"e769be23d74601456ba103b9e02841d17b1d2d2dc1487354c3a7920f956fac9f\" 
pid:4818 exited_at:{seconds:1748370238 nanos:104617423}" May 27 18:24:01.990877 containerd[1585]: time="2025-05-27T18:24:01.990662893Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 18:24:01.992991 containerd[1585]: time="2025-05-27T18:24:01.992791730Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.0: active requests=0, bytes read=47252431" May 27 18:24:01.993908 containerd[1585]: time="2025-05-27T18:24:01.993879269Z" level=info msg="ImageCreate event name:\"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 18:24:01.997828 containerd[1585]: time="2025-05-27T18:24:01.997786158Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 18:24:01.998711 containerd[1585]: time="2025-05-27T18:24:01.998232194Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" with image id \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\", size \"48745150\" in 4.796316067s" May 27 18:24:01.998711 containerd[1585]: time="2025-05-27T18:24:01.998266522Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" returns image reference \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\"" May 27 18:24:01.999984 containerd[1585]: time="2025-05-27T18:24:01.999956418Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 27 18:24:02.002382 containerd[1585]: time="2025-05-27T18:24:02.002351512Z" level=info msg="CreateContainer within sandbox \"884ef37ac50e561d1e73bd661b9ceea5ef70eae8feeacab07d0e29e36fe5ad73\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 27 18:24:02.025111 containerd[1585]: time="2025-05-27T18:24:02.024954894Z" level=info msg="Container bf32561476c7f87042215cfc517b3814e4ecbbf9239899609479ae43922a0153: CDI devices from CRI Config.CDIDevices: []" May 27 18:24:02.033105 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2198535883.mount: Deactivated successfully. May 27 18:24:02.058417 containerd[1585]: time="2025-05-27T18:24:02.058366687Z" level=info msg="CreateContainer within sandbox \"884ef37ac50e561d1e73bd661b9ceea5ef70eae8feeacab07d0e29e36fe5ad73\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"bf32561476c7f87042215cfc517b3814e4ecbbf9239899609479ae43922a0153\"" May 27 18:24:02.060878 containerd[1585]: time="2025-05-27T18:24:02.060802252Z" level=info msg="StartContainer for \"bf32561476c7f87042215cfc517b3814e4ecbbf9239899609479ae43922a0153\"" May 27 18:24:02.063970 containerd[1585]: time="2025-05-27T18:24:02.063936282Z" level=info msg="connecting to shim bf32561476c7f87042215cfc517b3814e4ecbbf9239899609479ae43922a0153" address="unix:///run/containerd/s/3f6b5f893fd31e729a2f782396c8dd5da04b82e38357a6bc39df90f91a6c24cc" protocol=ttrpc version=3 May 27 18:24:02.114872 systemd[1]: Started cri-containerd-bf32561476c7f87042215cfc517b3814e4ecbbf9239899609479ae43922a0153.scope - libcontainer container bf32561476c7f87042215cfc517b3814e4ecbbf9239899609479ae43922a0153. 
May 27 18:24:02.210046 containerd[1585]: time="2025-05-27T18:24:02.209950491Z" level=info msg="StartContainer for \"bf32561476c7f87042215cfc517b3814e4ecbbf9239899609479ae43922a0153\" returns successfully" May 27 18:24:02.344757 containerd[1585]: time="2025-05-27T18:24:02.344600906Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 18:24:02.347368 containerd[1585]: time="2025-05-27T18:24:02.347327236Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 27 18:24:02.347733 containerd[1585]: time="2025-05-27T18:24:02.347619006Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 27 18:24:02.348312 kubelet[2794]: E0527 18:24:02.348201 2794 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 27 18:24:02.350282 kubelet[2794]: E0527 18:24:02.348741 2794 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 27 18:24:02.350348 kubelet[2794]: E0527 18:24:02.349134 2794 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k62cm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-8f77d7b6c-pdb49_calico-system(bbd4fd77-1837-421d-a4cd-17332246cc0a): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 18:24:02.350900 kubelet[2794]: E0527 18:24:02.350811 2794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-8f77d7b6c-pdb49" podUID="bbd4fd77-1837-421d-a4cd-17332246cc0a" May 27 18:24:02.351552 containerd[1585]: time="2025-05-27T18:24:02.351315923Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\"" May 27 18:24:02.876278 containerd[1585]: time="2025-05-27T18:24:02.876180679Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 18:24:02.877806 containerd[1585]: time="2025-05-27T18:24:02.877718762Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.0: active requests=0, bytes read=77" May 27 18:24:02.881692 containerd[1585]: time="2025-05-27T18:24:02.881301803Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" with image id \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\", size \"48745150\" in 529.355158ms" May 27 18:24:02.882028 containerd[1585]: time="2025-05-27T18:24:02.881914549Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" returns image reference \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\"" May 27 18:24:02.884209 containerd[1585]: time="2025-05-27T18:24:02.884155817Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.0\"" May 27 18:24:02.888399 containerd[1585]: time="2025-05-27T18:24:02.886834955Z" level=info msg="CreateContainer within sandbox \"b94a5eb29cebda6baffb8480ef0e4dcc75265ece97f67ffe5599bc294e57f666\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 27 18:24:02.907981 containerd[1585]: time="2025-05-27T18:24:02.907938390Z" level=info msg="Container c3ec8f8342890eeb43a92676eb933c67798162728bf80164ea3b1b7546ea7378: CDI devices from CRI Config.CDIDevices: []" May 27 18:24:02.937297 containerd[1585]: time="2025-05-27T18:24:02.937229285Z" level=info msg="CreateContainer within sandbox \"b94a5eb29cebda6baffb8480ef0e4dcc75265ece97f67ffe5599bc294e57f666\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"c3ec8f8342890eeb43a92676eb933c67798162728bf80164ea3b1b7546ea7378\"" May 27 18:24:02.939589 containerd[1585]: time="2025-05-27T18:24:02.939542716Z" level=info msg="StartContainer for \"c3ec8f8342890eeb43a92676eb933c67798162728bf80164ea3b1b7546ea7378\"" May 27 18:24:02.943228 containerd[1585]: time="2025-05-27T18:24:02.943175966Z" level=info msg="connecting to shim c3ec8f8342890eeb43a92676eb933c67798162728bf80164ea3b1b7546ea7378" address="unix:///run/containerd/s/db1e99a132c1b2b71e43174359c038acd2679b973544cae760fce2ed4ed30352" protocol=ttrpc version=3 May 27 18:24:03.001183 systemd[1]: Started cri-containerd-c3ec8f8342890eeb43a92676eb933c67798162728bf80164ea3b1b7546ea7378.scope - libcontainer container c3ec8f8342890eeb43a92676eb933c67798162728bf80164ea3b1b7546ea7378. 
May 27 18:24:03.042285 kubelet[2794]: E0527 18:24:03.042182 2794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-pdb49" podUID="bbd4fd77-1837-421d-a4cd-17332246cc0a" May 27 18:24:03.100066 kubelet[2794]: I0527 18:24:03.099360 2794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6dfc469868-jbh4w" podStartSLOduration=36.478272593 podStartE2EDuration="44.099323917s" podCreationTimestamp="2025-05-27 18:23:19 +0000 UTC" firstStartedPulling="2025-05-27 18:23:54.378588916 +0000 UTC m=+53.006341971" lastFinishedPulling="2025-05-27 18:24:01.99964023 +0000 UTC m=+60.627393295" observedRunningTime="2025-05-27 18:24:03.074574021 +0000 UTC m=+61.702327086" watchObservedRunningTime="2025-05-27 18:24:03.099323917 +0000 UTC m=+61.727076972" May 27 18:24:03.185779 containerd[1585]: time="2025-05-27T18:24:03.185518411Z" level=info msg="StartContainer for \"c3ec8f8342890eeb43a92676eb933c67798162728bf80164ea3b1b7546ea7378\" returns successfully" May 27 18:24:04.048831 kubelet[2794]: I0527 18:24:04.048791 2794 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 27 18:24:04.742986 kubelet[2794]: I0527 18:24:04.740476 2794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6dfc469868-vsvgq" podStartSLOduration=37.450792355 podStartE2EDuration="45.740452853s" podCreationTimestamp="2025-05-27 18:23:19 +0000 UTC" firstStartedPulling="2025-05-27 18:23:54.593982562 +0000 UTC m=+53.221735617" lastFinishedPulling="2025-05-27 18:24:02.88364306 +0000 UTC m=+61.511396115" observedRunningTime="2025-05-27 18:24:04.075574504 +0000 UTC m=+62.703327559" watchObservedRunningTime="2025-05-27 18:24:04.740452853 +0000 UTC m=+63.368205919" May 27 18:24:05.780498 containerd[1585]: time="2025-05-27T18:24:05.780437286Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 18:24:05.782785 containerd[1585]: time="2025-05-27T18:24:05.782413022Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.0: active requests=0, bytes read=8758390" May 27 18:24:05.784769 containerd[1585]: time="2025-05-27T18:24:05.784657620Z" level=info msg="ImageCreate event name:\"sha256:d5b08093b7928c0ac1122e59edf69b2e58c6d10ecc8b9e5cffeb809a956dc48e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 18:24:05.789424 containerd[1585]: time="2025-05-27T18:24:05.789363105Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:27883a4104876fe239311dd93ce6efd0c4a87de7163d57a4c8d96bd65a287ffd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 18:24:05.790351 containerd[1585]: time="2025-05-27T18:24:05.790311034Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.0\" with image id \"sha256:d5b08093b7928c0ac1122e59edf69b2e58c6d10ecc8b9e5cffeb809a956dc48e\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:27883a4104876fe239311dd93ce6efd0c4a87de7163d57a4c8d96bd65a287ffd\", size \"10251093\" in 2.906117162s" May 27 18:24:05.790459 containerd[1585]: time="2025-05-27T18:24:05.790439649Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.0\" returns image reference 
\"sha256:d5b08093b7928c0ac1122e59edf69b2e58c6d10ecc8b9e5cffeb809a956dc48e\"" May 27 18:24:05.793179 containerd[1585]: time="2025-05-27T18:24:05.793149791Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 27 18:24:05.798947 containerd[1585]: time="2025-05-27T18:24:05.798909445Z" level=info msg="CreateContainer within sandbox \"3f6effd24fcbca4c661c5a1f4e124175e7e772199be3a0eb51d9176cfaf64717\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 27 18:24:05.821543 containerd[1585]: time="2025-05-27T18:24:05.819874655Z" level=info msg="Container 9c66925581d668555330d65a60e48a1bea14dc14f96806d0c13660d295fcfb96: CDI devices from CRI Config.CDIDevices: []" May 27 18:24:05.850721 containerd[1585]: time="2025-05-27T18:24:05.850654926Z" level=info msg="CreateContainer within sandbox \"3f6effd24fcbca4c661c5a1f4e124175e7e772199be3a0eb51d9176cfaf64717\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"9c66925581d668555330d65a60e48a1bea14dc14f96806d0c13660d295fcfb96\"" May 27 18:24:05.852945 containerd[1585]: time="2025-05-27T18:24:05.852919253Z" level=info msg="StartContainer for \"9c66925581d668555330d65a60e48a1bea14dc14f96806d0c13660d295fcfb96\"" May 27 18:24:05.857508 containerd[1585]: time="2025-05-27T18:24:05.857467808Z" level=info msg="connecting to shim 9c66925581d668555330d65a60e48a1bea14dc14f96806d0c13660d295fcfb96" address="unix:///run/containerd/s/317f76848b7fb9780e69a48121da68f27210a5fc1c5e02f6f5c28929f2746db8" protocol=ttrpc version=3 May 27 18:24:05.936894 systemd[1]: Started cri-containerd-9c66925581d668555330d65a60e48a1bea14dc14f96806d0c13660d295fcfb96.scope - libcontainer container 9c66925581d668555330d65a60e48a1bea14dc14f96806d0c13660d295fcfb96. May 27 18:24:06.128621 containerd[1585]: time="2025-05-27T18:24:06.127803210Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 18:24:06.130636 containerd[1585]: time="2025-05-27T18:24:06.130594820Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 27 18:24:06.132077 containerd[1585]: time="2025-05-27T18:24:06.130813584Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 27 18:24:06.132147 kubelet[2794]: E0527 18:24:06.131241 2794 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 27 18:24:06.132147 kubelet[2794]: E0527 18:24:06.131302 2794 kuberuntime_image.go:55] "Failed to pull image" 
err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 27 18:24:06.132147 kubelet[2794]: E0527 18:24:06.131464 2794 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wqlkp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5f7dd7cb5-258cr_calico-system(107d1d0f-9dd5-43de-81cb-0f8c43731395): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 18:24:06.132926 kubelet[2794]: E0527 18:24:06.132854 2794 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-5f7dd7cb5-258cr" podUID="107d1d0f-9dd5-43de-81cb-0f8c43731395" May 27 18:24:06.198450 containerd[1585]: time="2025-05-27T18:24:06.198323680Z" level=info msg="StartContainer for \"9c66925581d668555330d65a60e48a1bea14dc14f96806d0c13660d295fcfb96\" returns successfully" May 27 18:24:06.200477 containerd[1585]: time="2025-05-27T18:24:06.200432869Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\"" May 27 18:24:07.093359 kubelet[2794]: E0527 18:24:07.093281 2794 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-5f7dd7cb5-258cr" podUID="107d1d0f-9dd5-43de-81cb-0f8c43731395" May 27 18:24:07.350288 kubelet[2794]: I0527 18:24:07.349228 2794 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 27 18:24:08.252750 containerd[1585]: time="2025-05-27T18:24:08.252628894Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5aef368c4a95271ce37b9a677a96840bc90add35964beb96e0d6ced031d9d66f\" id:\"590994cc22b7a5107af6cd53e9ce0a7d3894c83b921c9ff3e86957b7055eb1ca\" pid:4987 exited_at:{seconds:1748370248 nanos:252091381}" May 27 18:24:08.943750 containerd[1585]: time="2025-05-27T18:24:08.943701919Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 18:24:08.944908 containerd[1585]: time="2025-05-27T18:24:08.944858599Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0: active requests=0, bytes read=14705639" May 27 18:24:08.946571 containerd[1585]: time="2025-05-27T18:24:08.946511508Z" level=info msg="ImageCreate event name:\"sha256:45c8692ffc029387ee93ba83da8ad26da9749cf2ba6ed03981f8f9933ed5a5b0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 18:24:08.949469 containerd[1585]: time="2025-05-27T18:24:08.949423409Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:dca5c16181edde2e860463615523ce457cd9dcfca85b7cfdcd6f3ea7de6f2ac8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 18:24:08.950225 containerd[1585]: time="2025-05-27T18:24:08.950184601Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" with image id \"sha256:45c8692ffc029387ee93ba83da8ad26da9749cf2ba6ed03981f8f9933ed5a5b0\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:dca5c16181edde2e860463615523ce457cd9dcfca85b7cfdcd6f3ea7de6f2ac8\", size \"16198294\" in 2.749722607s" May 27 18:24:08.950290 containerd[1585]: time="2025-05-27T18:24:08.950225855Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" returns image reference 
\"sha256:45c8692ffc029387ee93ba83da8ad26da9749cf2ba6ed03981f8f9933ed5a5b0\"" May 27 18:24:08.953996 containerd[1585]: time="2025-05-27T18:24:08.953948859Z" level=info msg="CreateContainer within sandbox \"3f6effd24fcbca4c661c5a1f4e124175e7e772199be3a0eb51d9176cfaf64717\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" May 27 18:24:08.970120 containerd[1585]: time="2025-05-27T18:24:08.968818361Z" level=info msg="Container bd36727705ded3886de863822826b236735a1b07f25bc65d1c92961a7a6b0094: CDI devices from CRI Config.CDIDevices: []" May 27 18:24:08.982402 containerd[1585]: time="2025-05-27T18:24:08.982293319Z" level=info msg="CreateContainer within sandbox \"3f6effd24fcbca4c661c5a1f4e124175e7e772199be3a0eb51d9176cfaf64717\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"bd36727705ded3886de863822826b236735a1b07f25bc65d1c92961a7a6b0094\"" May 27 18:24:08.983223 containerd[1585]: time="2025-05-27T18:24:08.983169497Z" level=info msg="StartContainer for \"bd36727705ded3886de863822826b236735a1b07f25bc65d1c92961a7a6b0094\"" May 27 18:24:08.986979 containerd[1585]: time="2025-05-27T18:24:08.986933683Z" level=info msg="connecting to shim bd36727705ded3886de863822826b236735a1b07f25bc65d1c92961a7a6b0094" address="unix:///run/containerd/s/317f76848b7fb9780e69a48121da68f27210a5fc1c5e02f6f5c28929f2746db8" protocol=ttrpc version=3 May 27 18:24:09.017835 systemd[1]: Started cri-containerd-bd36727705ded3886de863822826b236735a1b07f25bc65d1c92961a7a6b0094.scope - libcontainer container bd36727705ded3886de863822826b236735a1b07f25bc65d1c92961a7a6b0094. May 27 18:24:09.088622 containerd[1585]: time="2025-05-27T18:24:09.088582542Z" level=info msg="StartContainer for \"bd36727705ded3886de863822826b236735a1b07f25bc65d1c92961a7a6b0094\" returns successfully" May 27 18:24:09.114732 kubelet[2794]: I0527 18:24:09.114444 2794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-rmtkq" podStartSLOduration=31.775530892 podStartE2EDuration="46.114406008s" podCreationTimestamp="2025-05-27 18:23:23 +0000 UTC" firstStartedPulling="2025-05-27 18:23:54.61287085 +0000 UTC m=+53.240623915" lastFinishedPulling="2025-05-27 18:24:08.951745976 +0000 UTC m=+67.579499031" observedRunningTime="2025-05-27 18:24:09.113459639 +0000 UTC m=+67.741212704" watchObservedRunningTime="2025-05-27 18:24:09.114406008 +0000 UTC m=+67.742159063" May 27 18:24:09.762783 kubelet[2794]: I0527 18:24:09.762015 2794 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 May 27 18:24:09.762783 kubelet[2794]: I0527 18:24:09.762294 2794 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock May 27 18:24:16.540990 containerd[1585]: time="2025-05-27T18:24:16.540057948Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 27 18:24:16.880814 containerd[1585]: time="2025-05-27T18:24:16.880510453Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 18:24:16.883499 containerd[1585]: time="2025-05-27T18:24:16.883322547Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="rpc error: code = 
Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 27 18:24:16.883834 containerd[1585]: time="2025-05-27T18:24:16.883365736Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 27 18:24:16.884641 kubelet[2794]: E0527 18:24:16.884447 2794 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 27 18:24:16.885799 kubelet[2794]: E0527 18:24:16.884661 2794 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 27 18:24:16.885799 kubelet[2794]: E0527 18:24:16.885435 2794 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k62cm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-8f77d7b6c-pdb49_calico-system(bbd4fd77-1837-421d-a4cd-17332246cc0a): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 18:24:16.891505 kubelet[2794]: E0527 18:24:16.891358 2794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-8f77d7b6c-pdb49" podUID="bbd4fd77-1837-421d-a4cd-17332246cc0a" May 27 18:24:17.117503 containerd[1585]: time="2025-05-27T18:24:17.117435225Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fa387d247c6751a9fd3e4bb2659b7f075b4326082ca83aebfaca596f8e54d0a6\" id:\"daab3cdab432f66281a64b60f269637688c91f75f7ff691063596d1945299d8e\" pid:5054 exited_at:{seconds:1748370257 nanos:116845426}" May 27 18:24:19.537218 containerd[1585]: time="2025-05-27T18:24:19.537054953Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 27 18:24:19.904719 containerd[1585]: time="2025-05-27T18:24:19.904546986Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 18:24:19.906441 containerd[1585]: time="2025-05-27T18:24:19.906401703Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 27 18:24:19.906594 containerd[1585]: time="2025-05-27T18:24:19.906490916Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 27 18:24:19.906800 kubelet[2794]: E0527 
18:24:19.906695 2794 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 27 18:24:19.906800 kubelet[2794]: E0527 18:24:19.906756 2794 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 27 18:24:19.907205 kubelet[2794]: E0527 18:24:19.906870 2794 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:7721ee7d67474ccca7a05148d0f91fc5,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wqlkp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5f7dd7cb5-258cr_calico-system(107d1d0f-9dd5-43de-81cb-0f8c43731395): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 18:24:19.909172 containerd[1585]: time="2025-05-27T18:24:19.909144741Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 27 18:24:20.281750 containerd[1585]: time="2025-05-27T18:24:20.281538197Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 
Forbidden" host=ghcr.io May 27 18:24:20.284743 containerd[1585]: time="2025-05-27T18:24:20.284514884Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 27 18:24:20.285611 containerd[1585]: time="2025-05-27T18:24:20.284560729Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 27 18:24:20.286859 kubelet[2794]: E0527 18:24:20.286771 2794 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 27 18:24:20.287877 kubelet[2794]: E0527 18:24:20.286954 2794 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 27 18:24:20.287877 kubelet[2794]: E0527 18:24:20.287441 2794 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wqlkp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5f7dd7cb5-258cr_calico-system(107d1d0f-9dd5-43de-81cb-0f8c43731395): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 18:24:20.288807 kubelet[2794]: E0527 18:24:20.288753 2794 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-5f7dd7cb5-258cr" podUID="107d1d0f-9dd5-43de-81cb-0f8c43731395" May 27 18:24:29.545784 kubelet[2794]: E0527 18:24:29.544916 2794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-pdb49" podUID="bbd4fd77-1837-421d-a4cd-17332246cc0a" May 27 18:24:31.891659 update_engine[1501]: I20250527 18:24:31.891462 1501 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs May 27 18:24:31.892274 update_engine[1501]: I20250527 18:24:31.891618 1501 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs May 27 18:24:31.892661 update_engine[1501]: I20250527 18:24:31.892625 1501 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs May 27 18:24:31.895207 update_engine[1501]: I20250527 18:24:31.895159 1501 omaha_request_params.cc:62] Current group set to alpha May 27 18:24:31.895652 update_engine[1501]: I20250527 18:24:31.895612 1501 update_attempter.cc:499] Already updated boot flags. Skipping. May 27 18:24:31.895652 update_engine[1501]: I20250527 18:24:31.895638 1501 update_attempter.cc:643] Scheduling an action processor start. May 27 18:24:31.898358 update_engine[1501]: I20250527 18:24:31.896786 1501 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction May 27 18:24:31.898358 update_engine[1501]: I20250527 18:24:31.896939 1501 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs May 27 18:24:31.898358 update_engine[1501]: I20250527 18:24:31.897056 1501 omaha_request_action.cc:271] Posting an Omaha request to disabled May 27 18:24:31.898358 update_engine[1501]: I20250527 18:24:31.897069 1501 omaha_request_action.cc:272] Request: May 27 18:24:31.898358 update_engine[1501]: May 27 18:24:31.898358 update_engine[1501]: May 27 18:24:31.898358 update_engine[1501]: May 27 18:24:31.898358 update_engine[1501]: May 27 18:24:31.898358 update_engine[1501]: May 27 18:24:31.898358 update_engine[1501]: May 27 18:24:31.898358 update_engine[1501]: May 27 18:24:31.898358 update_engine[1501]: May 27 18:24:31.898358 update_engine[1501]: I20250527 18:24:31.897087 1501 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 27 18:24:31.907152 update_engine[1501]: I20250527 18:24:31.907114 1501 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 27 18:24:31.907998 update_engine[1501]: I20250527 18:24:31.907958 1501 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
May 27 18:24:31.913281 update_engine[1501]: E20250527 18:24:31.913249 1501 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 27 18:24:31.913492 update_engine[1501]: I20250527 18:24:31.913470 1501 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 May 27 18:24:31.918360 locksmithd[1524]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 May 27 18:24:34.538511 kubelet[2794]: E0527 18:24:34.537491 2794 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-5f7dd7cb5-258cr" podUID="107d1d0f-9dd5-43de-81cb-0f8c43731395" May 27 18:24:35.631726 containerd[1585]: time="2025-05-27T18:24:35.631574127Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5aef368c4a95271ce37b9a677a96840bc90add35964beb96e0d6ced031d9d66f\" id:\"2ae9af57fac142fdfa0dc2ccb9448aadd3484440c19fc649b9ca14d4db454e0d\" pid:5088 exited_at:{seconds:1748370275 nanos:630567234}" May 27 18:24:38.223395 containerd[1585]: time="2025-05-27T18:24:38.223210259Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5aef368c4a95271ce37b9a677a96840bc90add35964beb96e0d6ced031d9d66f\" id:\"97ac8527930d886ed47648ca418e4bbaba44052b885c217e4a11ba6e582ff40c\" pid:5110 exited_at:{seconds:1748370278 nanos:222458380}" May 27 18:24:40.543751 containerd[1585]: time="2025-05-27T18:24:40.543175819Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 27 18:24:40.921478 containerd[1585]: time="2025-05-27T18:24:40.921305257Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 18:24:40.924760 containerd[1585]: time="2025-05-27T18:24:40.924674740Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 27 18:24:40.925013 containerd[1585]: time="2025-05-27T18:24:40.924845812Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 27 18:24:40.926166 kubelet[2794]: E0527 18:24:40.926045 2794 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 27 18:24:40.927164 kubelet[2794]: E0527 18:24:40.926188 2794 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and 
unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 27 18:24:40.927164 kubelet[2794]: E0527 18:24:40.926480 2794 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k62cm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-8f77d7b6c-pdb49_calico-system(bbd4fd77-1837-421d-a4cd-17332246cc0a): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 
403 Forbidden" logger="UnhandledError" May 27 18:24:40.928798 kubelet[2794]: E0527 18:24:40.928741 2794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-8f77d7b6c-pdb49" podUID="bbd4fd77-1837-421d-a4cd-17332246cc0a" May 27 18:24:41.877119 update_engine[1501]: I20250527 18:24:41.876936 1501 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 27 18:24:41.877775 update_engine[1501]: I20250527 18:24:41.877569 1501 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 27 18:24:41.878178 update_engine[1501]: I20250527 18:24:41.878147 1501 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. May 27 18:24:41.883358 update_engine[1501]: E20250527 18:24:41.883320 1501 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 27 18:24:41.883472 update_engine[1501]: I20250527 18:24:41.883410 1501 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 May 27 18:24:47.070876 containerd[1585]: time="2025-05-27T18:24:47.070530742Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fa387d247c6751a9fd3e4bb2659b7f075b4326082ca83aebfaca596f8e54d0a6\" id:\"e95837ab2a74f5a41315d8f8e0fa06aefb7d72f0ce86517b98405e76afded688\" pid:5134 exited_at:{seconds:1748370287 nanos:68797895}" May 27 18:24:49.542023 containerd[1585]: time="2025-05-27T18:24:49.540932351Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 27 18:24:49.894368 containerd[1585]: time="2025-05-27T18:24:49.893963490Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 18:24:49.896989 containerd[1585]: time="2025-05-27T18:24:49.896934029Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 27 18:24:49.897244 containerd[1585]: time="2025-05-27T18:24:49.896987692Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 27 18:24:49.897677 kubelet[2794]: E0527 18:24:49.897597 2794 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 27 18:24:49.900307 kubelet[2794]: E0527 18:24:49.898339 2794 
kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 27 18:24:49.900307 kubelet[2794]: E0527 18:24:49.899356 2794 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:7721ee7d67474ccca7a05148d0f91fc5,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wqlkp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5f7dd7cb5-258cr_calico-system(107d1d0f-9dd5-43de-81cb-0f8c43731395): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 18:24:49.902285 containerd[1585]: time="2025-05-27T18:24:49.902214190Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 27 18:24:50.266263 containerd[1585]: time="2025-05-27T18:24:50.266140936Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 18:24:50.268183 containerd[1585]: time="2025-05-27T18:24:50.268088167Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 27 18:24:50.268466 containerd[1585]: time="2025-05-27T18:24:50.268225700Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 27 18:24:50.269390 kubelet[2794]: E0527 18:24:50.269278 2794 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 27 18:24:50.269974 kubelet[2794]: E0527 18:24:50.269446 2794 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 27 18:24:50.269974 kubelet[2794]: E0527 18:24:50.269774 2794 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wqlkp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5f7dd7cb5-258cr_calico-system(107d1d0f-9dd5-43de-81cb-0f8c43731395): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 18:24:50.271707 kubelet[2794]: E0527 18:24:50.271033 2794 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-5f7dd7cb5-258cr" podUID="107d1d0f-9dd5-43de-81cb-0f8c43731395" May 27 18:24:51.878285 update_engine[1501]: I20250527 18:24:51.878067 1501 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 27 18:24:51.878934 update_engine[1501]: I20250527 18:24:51.878806 1501 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 27 18:24:51.879434 update_engine[1501]: I20250527 18:24:51.879396 1501 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. May 27 18:24:51.884879 update_engine[1501]: E20250527 18:24:51.884816 1501 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 27 18:24:51.884955 update_engine[1501]: I20250527 18:24:51.884902 1501 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 May 27 18:24:56.537625 kubelet[2794]: E0527 18:24:56.537407 2794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-pdb49" podUID="bbd4fd77-1837-421d-a4cd-17332246cc0a" May 27 18:25:01.875571 update_engine[1501]: I20250527 18:25:01.875418 1501 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 27 18:25:01.876636 update_engine[1501]: I20250527 18:25:01.876143 1501 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 27 18:25:01.876826 update_engine[1501]: I20250527 18:25:01.876783 1501 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. May 27 18:25:01.882431 update_engine[1501]: E20250527 18:25:01.882337 1501 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 27 18:25:01.882431 update_engine[1501]: I20250527 18:25:01.882438 1501 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded May 27 18:25:01.882898 update_engine[1501]: I20250527 18:25:01.882491 1501 omaha_request_action.cc:617] Omaha request response: May 27 18:25:01.883113 update_engine[1501]: E20250527 18:25:01.883029 1501 omaha_request_action.cc:636] Omaha request network transfer failed. 
May 27 18:25:01.883629 update_engine[1501]: I20250527 18:25:01.883558 1501 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. May 27 18:25:01.883629 update_engine[1501]: I20250527 18:25:01.883589 1501 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction May 27 18:25:01.883629 update_engine[1501]: I20250527 18:25:01.883610 1501 update_attempter.cc:306] Processing Done. May 27 18:25:01.883930 update_engine[1501]: E20250527 18:25:01.883869 1501 update_attempter.cc:619] Update failed. May 27 18:25:01.883930 update_engine[1501]: I20250527 18:25:01.883919 1501 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse May 27 18:25:01.884241 update_engine[1501]: I20250527 18:25:01.883931 1501 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) May 27 18:25:01.884241 update_engine[1501]: I20250527 18:25:01.883944 1501 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. May 27 18:25:01.884423 update_engine[1501]: I20250527 18:25:01.884388 1501 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction May 27 18:25:01.885216 update_engine[1501]: I20250527 18:25:01.884560 1501 omaha_request_action.cc:271] Posting an Omaha request to disabled May 27 18:25:01.885216 update_engine[1501]: I20250527 18:25:01.884593 1501 omaha_request_action.cc:272] Request: May 27 18:25:01.885216 update_engine[1501]: May 27 18:25:01.885216 update_engine[1501]: May 27 18:25:01.885216 update_engine[1501]: May 27 18:25:01.885216 update_engine[1501]: May 27 18:25:01.885216 update_engine[1501]: May 27 18:25:01.885216 update_engine[1501]: May 27 18:25:01.885216 update_engine[1501]: I20250527 18:25:01.884608 1501 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 27 18:25:01.885216 update_engine[1501]: I20250527 18:25:01.885095 1501 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 27 18:25:01.886357 update_engine[1501]: I20250527 18:25:01.885581 1501 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. May 27 18:25:01.888267 locksmithd[1524]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 May 27 18:25:01.891389 update_engine[1501]: E20250527 18:25:01.891305 1501 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 27 18:25:01.891543 update_engine[1501]: I20250527 18:25:01.891400 1501 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded May 27 18:25:01.891543 update_engine[1501]: I20250527 18:25:01.891419 1501 omaha_request_action.cc:617] Omaha request response: May 27 18:25:01.891543 update_engine[1501]: I20250527 18:25:01.891433 1501 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction May 27 18:25:01.891543 update_engine[1501]: I20250527 18:25:01.891443 1501 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction May 27 18:25:01.891543 update_engine[1501]: I20250527 18:25:01.891454 1501 update_attempter.cc:306] Processing Done. May 27 18:25:01.891543 update_engine[1501]: I20250527 18:25:01.891466 1501 update_attempter.cc:310] Error event sent. 
May 27 18:25:01.892074 update_engine[1501]: I20250527 18:25:01.891507 1501 update_check_scheduler.cc:74] Next update check in 44m38s May 27 18:25:01.893175 locksmithd[1524]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 May 27 18:25:02.539566 kubelet[2794]: E0527 18:25:02.539402 2794 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-5f7dd7cb5-258cr" podUID="107d1d0f-9dd5-43de-81cb-0f8c43731395" May 27 18:25:08.270268 containerd[1585]: time="2025-05-27T18:25:08.270181056Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5aef368c4a95271ce37b9a677a96840bc90add35964beb96e0d6ced031d9d66f\" id:\"6cb158e057fc5af137edc123cad5361674668bb1024f2ae0a2efcb7465e7ff4f\" pid:5160 exited_at:{seconds:1748370308 nanos:269386036}" May 27 18:25:08.537062 kubelet[2794]: E0527 18:25:08.536860 2794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-pdb49" podUID="bbd4fd77-1837-421d-a4cd-17332246cc0a" May 27 18:25:14.544525 kubelet[2794]: E0527 18:25:14.544226 2794 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-5f7dd7cb5-258cr" podUID="107d1d0f-9dd5-43de-81cb-0f8c43731395" May 27 18:25:17.081663 containerd[1585]: time="2025-05-27T18:25:17.081148742Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fa387d247c6751a9fd3e4bb2659b7f075b4326082ca83aebfaca596f8e54d0a6\" id:\"9a978808e17951e1e94691a6e0de65628efcf172853d5be12d93e347ad3cdd4a\" pid:5191 exited_at:{seconds:1748370317 nanos:80372864}" May 27 18:25:19.536834 kubelet[2794]: E0527 18:25:19.536445 2794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-pdb49" podUID="bbd4fd77-1837-421d-a4cd-17332246cc0a" May 27 18:25:29.540952 kubelet[2794]: E0527 18:25:29.540277 2794 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-5f7dd7cb5-258cr" podUID="107d1d0f-9dd5-43de-81cb-0f8c43731395" May 27 18:25:30.538607 containerd[1585]: time="2025-05-27T18:25:30.538414627Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 27 18:25:30.927973 containerd[1585]: time="2025-05-27T18:25:30.927349983Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from 
GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 18:25:30.929846 containerd[1585]: time="2025-05-27T18:25:30.929764780Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 27 18:25:30.930123 containerd[1585]: time="2025-05-27T18:25:30.929979566Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 27 18:25:30.931965 kubelet[2794]: E0527 18:25:30.930970 2794 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 27 18:25:30.931965 kubelet[2794]: E0527 18:25:30.931163 2794 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 27 18:25:30.935527 kubelet[2794]: E0527 18:25:30.935255 2794 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k62cm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-8f77d7b6c-pdb49_calico-system(bbd4fd77-1837-421d-a4cd-17332246cc0a): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 18:25:30.937723 kubelet[2794]: E0527 18:25:30.937269 2794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-8f77d7b6c-pdb49" podUID="bbd4fd77-1837-421d-a4cd-17332246cc0a" May 27 18:25:35.592654 containerd[1585]: time="2025-05-27T18:25:35.592568920Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5aef368c4a95271ce37b9a677a96840bc90add35964beb96e0d6ced031d9d66f\" id:\"228c629974629250baf96c91fb0724854512e06b9a5ad939d22ead7383251240\" pid:5235 exited_at:{seconds:1748370335 nanos:591880857}" May 27 18:25:38.272143 containerd[1585]: time="2025-05-27T18:25:38.272083987Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5aef368c4a95271ce37b9a677a96840bc90add35964beb96e0d6ced031d9d66f\" id:\"ca5d9028adfd6ec11ee949b6443a1c0ec217880adf47de9de8bbe9c54e695f21\" pid:5255 exited_at:{seconds:1748370338 nanos:270233609}" May 27 18:25:40.539182 containerd[1585]: time="2025-05-27T18:25:40.539062253Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 27 18:25:41.292766 containerd[1585]: time="2025-05-27T18:25:41.292577482Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 18:25:41.294892 containerd[1585]: time="2025-05-27T18:25:41.294795164Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 27 18:25:41.295061 containerd[1585]: time="2025-05-27T18:25:41.294970845Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 27 18:25:41.295701 kubelet[2794]: E0527 18:25:41.295540 2794 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 27 18:25:41.296610 kubelet[2794]: E0527 18:25:41.295752 2794 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 27 18:25:41.296610 kubelet[2794]: E0527 18:25:41.296157 2794 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:7721ee7d67474ccca7a05148d0f91fc5,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wqlkp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5f7dd7cb5-258cr_calico-system(107d1d0f-9dd5-43de-81cb-0f8c43731395): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 18:25:41.300735 containerd[1585]: time="2025-05-27T18:25:41.300534459Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 27 18:25:41.645259 containerd[1585]: time="2025-05-27T18:25:41.644951760Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 18:25:41.649421 containerd[1585]: time="2025-05-27T18:25:41.649227024Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 27 18:25:41.649869 containerd[1585]: time="2025-05-27T18:25:41.649272002Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 27 18:25:41.650722 kubelet[2794]: E0527 18:25:41.650546 2794 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch 
anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 27 18:25:41.651117 kubelet[2794]: E0527 18:25:41.651036 2794 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 27 18:25:41.652029 kubelet[2794]: E0527 18:25:41.651764 2794 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wqlkp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5f7dd7cb5-258cr_calico-system(107d1d0f-9dd5-43de-81cb-0f8c43731395): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 18:25:41.653609 kubelet[2794]: E0527 18:25:41.653414 2794 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed 
to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-5f7dd7cb5-258cr" podUID="107d1d0f-9dd5-43de-81cb-0f8c43731395" May 27 18:25:42.539560 kubelet[2794]: E0527 18:25:42.539474 2794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-pdb49" podUID="bbd4fd77-1837-421d-a4cd-17332246cc0a" May 27 18:25:47.080291 containerd[1585]: time="2025-05-27T18:25:47.080111765Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fa387d247c6751a9fd3e4bb2659b7f075b4326082ca83aebfaca596f8e54d0a6\" id:\"8c4678b714b4e4d6f21568f5402e6cb5142d77c3eb8749f41960751d1ebe5194\" pid:5280 exited_at:{seconds:1748370347 nanos:79255434}" May 27 18:25:52.541927 kubelet[2794]: E0527 18:25:52.541645 2794 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-5f7dd7cb5-258cr" podUID="107d1d0f-9dd5-43de-81cb-0f8c43731395" May 27 18:25:54.539876 kubelet[2794]: E0527 18:25:54.539506 2794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-pdb49" podUID="bbd4fd77-1837-421d-a4cd-17332246cc0a" May 27 18:26:06.538994 kubelet[2794]: E0527 18:26:06.538588 2794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-pdb49" podUID="bbd4fd77-1837-421d-a4cd-17332246cc0a" May 27 18:26:06.543611 kubelet[2794]: E0527 18:26:06.542401 2794 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-5f7dd7cb5-258cr" podUID="107d1d0f-9dd5-43de-81cb-0f8c43731395" May 27 18:26:08.285521 containerd[1585]: time="2025-05-27T18:26:08.285437186Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5aef368c4a95271ce37b9a677a96840bc90add35964beb96e0d6ced031d9d66f\" id:\"c88e15ab6a877e74671effa5822fee0cf94c85da0535ac366b4681ca488c3ade\" pid:5308 exited_at:{seconds:1748370368 nanos:285109819}" May 27 18:26:17.093104 
containerd[1585]: time="2025-05-27T18:26:17.092877840Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fa387d247c6751a9fd3e4bb2659b7f075b4326082ca83aebfaca596f8e54d0a6\" id:\"298fe20a75b3aeca28a3898f8791073469e3b67f5ad3fc35000f1513c5c1aade\" pid:5331 exited_at:{seconds:1748370377 nanos:92513714}" May 27 18:26:17.559924 kubelet[2794]: E0527 18:26:17.559645 2794 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-5f7dd7cb5-258cr" podUID="107d1d0f-9dd5-43de-81cb-0f8c43731395" May 27 18:26:21.538850 kubelet[2794]: E0527 18:26:21.538571 2794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-pdb49" podUID="bbd4fd77-1837-421d-a4cd-17332246cc0a" May 27 18:26:28.539078 kubelet[2794]: E0527 18:26:28.538751 2794 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-5f7dd7cb5-258cr" podUID="107d1d0f-9dd5-43de-81cb-0f8c43731395" May 27 18:26:34.537070 kubelet[2794]: E0527 18:26:34.536903 2794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-pdb49" podUID="bbd4fd77-1837-421d-a4cd-17332246cc0a" May 27 18:26:35.614137 containerd[1585]: time="2025-05-27T18:26:35.614059601Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5aef368c4a95271ce37b9a677a96840bc90add35964beb96e0d6ced031d9d66f\" id:\"3ae7ff3e6edd2e0d52b772458528a13c16b451b7352f5549b3165da9bd81f0ad\" pid:5363 exited_at:{seconds:1748370395 nanos:613430896}" May 27 18:26:38.258915 containerd[1585]: time="2025-05-27T18:26:38.258570109Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5aef368c4a95271ce37b9a677a96840bc90add35964beb96e0d6ced031d9d66f\" id:\"dc0502d291e9101a0ba15e66d85d59334363084212eb6d10aefb0530767e8648\" pid:5383 exited_at:{seconds:1748370398 nanos:257492373}" May 27 18:26:42.538807 kubelet[2794]: E0527 18:26:42.538519 2794 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-5f7dd7cb5-258cr" podUID="107d1d0f-9dd5-43de-81cb-0f8c43731395" May 27 18:26:46.537337 kubelet[2794]: E0527 18:26:46.536907 2794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" 
pod="calico-system/goldmane-8f77d7b6c-pdb49" podUID="bbd4fd77-1837-421d-a4cd-17332246cc0a" May 27 18:26:47.078613 containerd[1585]: time="2025-05-27T18:26:47.078521179Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fa387d247c6751a9fd3e4bb2659b7f075b4326082ca83aebfaca596f8e54d0a6\" id:\"d7e74884f18c5edc1e0c9d7e396a6c6255c408207dfdd2fc721170767446bfe4\" pid:5407 exited_at:{seconds:1748370407 nanos:77312598}" May 27 18:26:53.541616 kubelet[2794]: E0527 18:26:53.541551 2794 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-5f7dd7cb5-258cr" podUID="107d1d0f-9dd5-43de-81cb-0f8c43731395" May 27 18:27:01.539553 containerd[1585]: time="2025-05-27T18:27:01.538374843Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 27 18:27:01.968256 containerd[1585]: time="2025-05-27T18:27:01.967298496Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 18:27:01.969986 containerd[1585]: time="2025-05-27T18:27:01.969865077Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 27 18:27:01.970170 containerd[1585]: time="2025-05-27T18:27:01.969940686Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 27 18:27:01.971044 kubelet[2794]: E0527 18:27:01.970873 2794 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 27 18:27:01.972966 kubelet[2794]: E0527 18:27:01.971087 2794 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 27 18:27:01.972966 kubelet[2794]: E0527 18:27:01.971658 2794 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k62cm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-8f77d7b6c-pdb49_calico-system(bbd4fd77-1837-421d-a4cd-17332246cc0a): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 18:27:01.973576 kubelet[2794]: E0527 18:27:01.973279 2794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-8f77d7b6c-pdb49" podUID="bbd4fd77-1837-421d-a4cd-17332246cc0a" May 27 18:27:05.538198 containerd[1585]: time="2025-05-27T18:27:05.538053495Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 27 18:27:05.961210 containerd[1585]: time="2025-05-27T18:27:05.960509060Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 18:27:05.963035 containerd[1585]: time="2025-05-27T18:27:05.962954482Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 27 18:27:05.963229 containerd[1585]: time="2025-05-27T18:27:05.963104408Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 27 18:27:05.963489 kubelet[2794]: E0527 18:27:05.963417 2794 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 27 18:27:05.964206 kubelet[2794]: E0527 18:27:05.963506 2794 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 27 18:27:05.964513 kubelet[2794]: E0527 18:27:05.963995 2794 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:7721ee7d67474ccca7a05148d0f91fc5,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wqlkp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5f7dd7cb5-258cr_calico-system(107d1d0f-9dd5-43de-81cb-0f8c43731395): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 18:27:05.967668 containerd[1585]: time="2025-05-27T18:27:05.967583354Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 27 18:27:06.304597 containerd[1585]: time="2025-05-27T18:27:06.304495534Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 18:27:06.307011 containerd[1585]: time="2025-05-27T18:27:06.306818543Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 27 18:27:06.307011 containerd[1585]: time="2025-05-27T18:27:06.306926262Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 27 18:27:06.307456 kubelet[2794]: E0527 18:27:06.307182 2794 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch 
anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 27 18:27:06.307456 kubelet[2794]: E0527 18:27:06.307271 2794 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 27 18:27:06.307797 kubelet[2794]: E0527 18:27:06.307502 2794 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wqlkp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5f7dd7cb5-258cr_calico-system(107d1d0f-9dd5-43de-81cb-0f8c43731395): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 18:27:06.309565 kubelet[2794]: E0527 18:27:06.309447 2794 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed 
to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-5f7dd7cb5-258cr" podUID="107d1d0f-9dd5-43de-81cb-0f8c43731395" May 27 18:27:08.270292 containerd[1585]: time="2025-05-27T18:27:08.268673790Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5aef368c4a95271ce37b9a677a96840bc90add35964beb96e0d6ced031d9d66f\" id:\"51749593bf09ac6a04f496f4514d8030ccb200e63f6899b55f90c90997c6e94c\" pid:5452 exited_at:{seconds:1748370428 nanos:267904932}" May 27 18:27:16.539352 kubelet[2794]: E0527 18:27:16.539058 2794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-pdb49" podUID="bbd4fd77-1837-421d-a4cd-17332246cc0a" May 27 18:27:17.128293 containerd[1585]: time="2025-05-27T18:27:17.128198184Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fa387d247c6751a9fd3e4bb2659b7f075b4326082ca83aebfaca596f8e54d0a6\" id:\"dc16f791cac39a2697ecea72ec21cb08ce048a3e6d18195bcb49e98a41df029b\" pid:5482 exited_at:{seconds:1748370437 nanos:127464091}" May 27 18:27:19.541338 kubelet[2794]: E0527 18:27:19.540736 2794 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-5f7dd7cb5-258cr" podUID="107d1d0f-9dd5-43de-81cb-0f8c43731395" May 27 18:27:27.542859 kubelet[2794]: E0527 18:27:27.542605 2794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-pdb49" podUID="bbd4fd77-1837-421d-a4cd-17332246cc0a" May 27 18:27:32.541563 kubelet[2794]: E0527 18:27:32.541326 2794 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-5f7dd7cb5-258cr" podUID="107d1d0f-9dd5-43de-81cb-0f8c43731395" May 27 18:27:35.608899 containerd[1585]: time="2025-05-27T18:27:35.608814232Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5aef368c4a95271ce37b9a677a96840bc90add35964beb96e0d6ced031d9d66f\" id:\"41c09748e49da6acb341b5785f3b6bc188ad506f35623cb2c6a2d1622060a328\" pid:5507 exited_at:{seconds:1748370455 nanos:608410115}" May 27 18:27:38.276042 containerd[1585]: 
time="2025-05-27T18:27:38.275971181Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5aef368c4a95271ce37b9a677a96840bc90add35964beb96e0d6ced031d9d66f\" id:\"3ddcf97b6670dcdc14373c7c2b6627fae9c82280a05ff76e94fb84fcc2bcb6ea\" pid:5529 exited_at:{seconds:1748370458 nanos:275247223}" May 27 18:27:40.538751 kubelet[2794]: E0527 18:27:40.538074 2794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-pdb49" podUID="bbd4fd77-1837-421d-a4cd-17332246cc0a" May 27 18:27:47.099277 containerd[1585]: time="2025-05-27T18:27:47.099217228Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fa387d247c6751a9fd3e4bb2659b7f075b4326082ca83aebfaca596f8e54d0a6\" id:\"f2ae5d79a6506e4ccd6bccc6a6f6608297032e8fae113ff81ee60ca6b26e9f6a\" pid:5553 exited_at:{seconds:1748370467 nanos:98780186}" May 27 18:27:47.541802 kubelet[2794]: E0527 18:27:47.541347 2794 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-5f7dd7cb5-258cr" podUID="107d1d0f-9dd5-43de-81cb-0f8c43731395" May 27 18:27:55.541743 kubelet[2794]: E0527 18:27:55.540568 2794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-pdb49" podUID="bbd4fd77-1837-421d-a4cd-17332246cc0a" May 27 18:27:55.964143 containerd[1585]: time="2025-05-27T18:27:55.962902751Z" level=warning msg="container event discarded" container=24ef6ca9a2764ee91f762a086bbdb668c59a38f4f0ad06736f0de75be90d51f3 type=CONTAINER_CREATED_EVENT May 27 18:27:55.964143 containerd[1585]: time="2025-05-27T18:27:55.963725573Z" level=warning msg="container event discarded" container=24ef6ca9a2764ee91f762a086bbdb668c59a38f4f0ad06736f0de75be90d51f3 type=CONTAINER_STARTED_EVENT May 27 18:27:55.976439 containerd[1585]: time="2025-05-27T18:27:55.976315141Z" level=warning msg="container event discarded" container=2e4a8bbefb92b10fcb94f5c9dd0c07aac88e77d585eeaad5ad3ebd5c3d6d93fe type=CONTAINER_CREATED_EVENT May 27 18:27:55.976439 containerd[1585]: time="2025-05-27T18:27:55.976387347Z" level=warning msg="container event discarded" container=2e4a8bbefb92b10fcb94f5c9dd0c07aac88e77d585eeaad5ad3ebd5c3d6d93fe type=CONTAINER_STARTED_EVENT May 27 18:27:55.996365 containerd[1585]: time="2025-05-27T18:27:55.996233147Z" level=warning msg="container event discarded" container=1b98f61f39d5c2d5e57f513ca1220d733ed1eb1f4cf4ed99311192a6b08a46e5 type=CONTAINER_CREATED_EVENT May 27 18:27:55.996365 containerd[1585]: time="2025-05-27T18:27:55.996320241Z" level=warning msg="container event discarded" container=1b98f61f39d5c2d5e57f513ca1220d733ed1eb1f4cf4ed99311192a6b08a46e5 type=CONTAINER_STARTED_EVENT May 27 18:27:56.025122 containerd[1585]: time="2025-05-27T18:27:56.025041543Z" level=warning msg="container event discarded" container=1a1b2d49c5973cd69fce25e3e4ba8ef7db6bcdb9991b398a2d5b8433f366265a type=CONTAINER_CREATED_EVENT May 27 18:27:56.041928 containerd[1585]: time="2025-05-27T18:27:56.041622733Z" 
level=warning msg="container event discarded" container=bd467ebb8fdcfd08c6eead50282dec48c1592bf2e730666a8c091b8e57cd7e30 type=CONTAINER_CREATED_EVENT May 27 18:27:56.041928 containerd[1585]: time="2025-05-27T18:27:56.041852036Z" level=warning msg="container event discarded" container=d8334afa42bcc75ee9142ee63811d0e83f2ba68ad2d092c46eb24065649abd3e type=CONTAINER_CREATED_EVENT May 27 18:27:56.162468 containerd[1585]: time="2025-05-27T18:27:56.162321298Z" level=warning msg="container event discarded" container=1a1b2d49c5973cd69fce25e3e4ba8ef7db6bcdb9991b398a2d5b8433f366265a type=CONTAINER_STARTED_EVENT May 27 18:27:56.162468 containerd[1585]: time="2025-05-27T18:27:56.162416599Z" level=warning msg="container event discarded" container=bd467ebb8fdcfd08c6eead50282dec48c1592bf2e730666a8c091b8e57cd7e30 type=CONTAINER_STARTED_EVENT May 27 18:27:56.205908 containerd[1585]: time="2025-05-27T18:27:56.205780694Z" level=warning msg="container event discarded" container=d8334afa42bcc75ee9142ee63811d0e83f2ba68ad2d092c46eb24065649abd3e type=CONTAINER_STARTED_EVENT May 27 18:28:01.543288 kubelet[2794]: E0527 18:28:01.542925 2794 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-5f7dd7cb5-258cr" podUID="107d1d0f-9dd5-43de-81cb-0f8c43731395" May 27 18:28:06.849364 systemd[1]: Started sshd@9-172.24.4.229:22-172.24.4.1:42174.service - OpenSSH per-connection server daemon (172.24.4.1:42174). May 27 18:28:07.539155 kubelet[2794]: E0527 18:28:07.539054 2794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-pdb49" podUID="bbd4fd77-1837-421d-a4cd-17332246cc0a" May 27 18:28:08.242557 containerd[1585]: time="2025-05-27T18:28:08.242342361Z" level=warning msg="container event discarded" container=f420eec0ae2d172129d486f4cbf8be6f2136fd0074871c5a10e5f4d79e59a9a0 type=CONTAINER_CREATED_EVENT May 27 18:28:08.242557 containerd[1585]: time="2025-05-27T18:28:08.242459293Z" level=warning msg="container event discarded" container=f420eec0ae2d172129d486f4cbf8be6f2136fd0074871c5a10e5f4d79e59a9a0 type=CONTAINER_STARTED_EVENT May 27 18:28:08.268790 containerd[1585]: time="2025-05-27T18:28:08.268714544Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5aef368c4a95271ce37b9a677a96840bc90add35964beb96e0d6ced031d9d66f\" id:\"5aecfb4d977fb48f1dc5f20aafd8df3a8134514297b08afd071a8bdb5b85e64f\" pid:5586 exited_at:{seconds:1748370488 nanos:267787328}" May 27 18:28:08.283074 sshd[5570]: Accepted publickey for core from 172.24.4.1 port 42174 ssh2: RSA SHA256:D6TRbEcVa7iPKsFjaVce7/kj7tFlICZCjA4HGLeEPTo May 27 18:28:08.303387 containerd[1585]: time="2025-05-27T18:28:08.303314262Z" level=warning msg="container event discarded" container=016aa3a3a62cf3fbc8e2f837e66ecf50356537399dec94240e6d369b9a2a607e type=CONTAINER_CREATED_EVENT May 27 18:28:08.309244 sshd-session[5570]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 18:28:08.323041 systemd-logind[1498]: New session 12 of user core. May 27 18:28:08.339010 systemd[1]: Started session-12.scope - Session 12 of User core. 
May 27 18:28:08.380308 containerd[1585]: time="2025-05-27T18:28:08.379773559Z" level=warning msg="container event discarded" container=1399ad74772ed4beda5f75ba1fa2a2701f330e4feef3c79f526ec0189c70eb61 type=CONTAINER_CREATED_EVENT May 27 18:28:08.380308 containerd[1585]: time="2025-05-27T18:28:08.380299535Z" level=warning msg="container event discarded" container=1399ad74772ed4beda5f75ba1fa2a2701f330e4feef3c79f526ec0189c70eb61 type=CONTAINER_STARTED_EVENT May 27 18:28:08.411652 containerd[1585]: time="2025-05-27T18:28:08.411576799Z" level=warning msg="container event discarded" container=016aa3a3a62cf3fbc8e2f837e66ecf50356537399dec94240e6d369b9a2a607e type=CONTAINER_STARTED_EVENT May 27 18:28:09.150714 sshd[5595]: Connection closed by 172.24.4.1 port 42174 May 27 18:28:09.149583 sshd-session[5570]: pam_unix(sshd:session): session closed for user core May 27 18:28:09.156910 systemd[1]: sshd@9-172.24.4.229:22-172.24.4.1:42174.service: Deactivated successfully. May 27 18:28:09.161940 systemd[1]: session-12.scope: Deactivated successfully. May 27 18:28:09.163769 systemd-logind[1498]: Session 12 logged out. Waiting for processes to exit. May 27 18:28:09.167641 systemd-logind[1498]: Removed session 12. May 27 18:28:11.017009 containerd[1585]: time="2025-05-27T18:28:11.016858553Z" level=warning msg="container event discarded" container=2d78d2df8c6771314108c38cd45ab4dbf3e6fca81eac3ac03c06ed5b0ffb7dc3 type=CONTAINER_CREATED_EVENT May 27 18:28:11.077545 containerd[1585]: time="2025-05-27T18:28:11.077415766Z" level=warning msg="container event discarded" container=2d78d2df8c6771314108c38cd45ab4dbf3e6fca81eac3ac03c06ed5b0ffb7dc3 type=CONTAINER_STARTED_EVENT May 27 18:28:14.175729 systemd[1]: Started sshd@10-172.24.4.229:22-172.24.4.1:60894.service - OpenSSH per-connection server daemon (172.24.4.1:60894). May 27 18:28:14.545193 kubelet[2794]: E0527 18:28:14.543918 2794 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-5f7dd7cb5-258cr" podUID="107d1d0f-9dd5-43de-81cb-0f8c43731395" May 27 18:28:15.646031 sshd[5628]: Accepted publickey for core from 172.24.4.1 port 60894 ssh2: RSA SHA256:D6TRbEcVa7iPKsFjaVce7/kj7tFlICZCjA4HGLeEPTo May 27 18:28:15.649785 sshd-session[5628]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 18:28:15.678031 systemd-logind[1498]: New session 13 of user core. May 27 18:28:15.685970 systemd[1]: Started session-13.scope - Session 13 of User core. May 27 18:28:16.737613 sshd[5630]: Connection closed by 172.24.4.1 port 60894 May 27 18:28:16.751203 sshd-session[5628]: pam_unix(sshd:session): session closed for user core May 27 18:28:16.757540 systemd[1]: sshd@10-172.24.4.229:22-172.24.4.1:60894.service: Deactivated successfully. May 27 18:28:16.761076 systemd[1]: session-13.scope: Deactivated successfully. May 27 18:28:16.762753 systemd-logind[1498]: Session 13 logged out. Waiting for processes to exit. May 27 18:28:16.768445 systemd-logind[1498]: Removed session 13. 
May 27 18:28:17.118567 containerd[1585]: time="2025-05-27T18:28:17.118312075Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fa387d247c6751a9fd3e4bb2659b7f075b4326082ca83aebfaca596f8e54d0a6\" id:\"2559968d9db9278308b3f5db408e6a31754f9988e70665a12ff59cec77e5e72a\" pid:5653 exited_at:{seconds:1748370497 nanos:116284086}" May 27 18:28:18.538609 kubelet[2794]: E0527 18:28:18.538509 2794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-pdb49" podUID="bbd4fd77-1837-421d-a4cd-17332246cc0a" May 27 18:28:21.767895 systemd[1]: Started sshd@11-172.24.4.229:22-172.24.4.1:60898.service - OpenSSH per-connection server daemon (172.24.4.1:60898). May 27 18:28:23.181745 sshd[5667]: Accepted publickey for core from 172.24.4.1 port 60898 ssh2: RSA SHA256:D6TRbEcVa7iPKsFjaVce7/kj7tFlICZCjA4HGLeEPTo May 27 18:28:23.185287 sshd-session[5667]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 18:28:23.204837 systemd-logind[1498]: New session 14 of user core. May 27 18:28:23.217067 systemd[1]: Started session-14.scope - Session 14 of User core. May 27 18:28:23.598187 containerd[1585]: time="2025-05-27T18:28:23.597913969Z" level=warning msg="container event discarded" container=87b4dbd1d9bc7324f49c6fb17ba520db9816fe166689b6f094da03afb934ce4b type=CONTAINER_CREATED_EVENT May 27 18:28:23.598187 containerd[1585]: time="2025-05-27T18:28:23.598124519Z" level=warning msg="container event discarded" container=87b4dbd1d9bc7324f49c6fb17ba520db9816fe166689b6f094da03afb934ce4b type=CONTAINER_STARTED_EVENT May 27 18:28:23.927212 sshd[5669]: Connection closed by 172.24.4.1 port 60898 May 27 18:28:23.928320 sshd-session[5667]: pam_unix(sshd:session): session closed for user core May 27 18:28:23.937376 systemd[1]: sshd@11-172.24.4.229:22-172.24.4.1:60898.service: Deactivated successfully. May 27 18:28:23.946164 systemd[1]: session-14.scope: Deactivated successfully. May 27 18:28:23.948950 systemd-logind[1498]: Session 14 logged out. Waiting for processes to exit. May 27 18:28:23.953102 systemd-logind[1498]: Removed session 14. 
May 27 18:28:24.916182 containerd[1585]: time="2025-05-27T18:28:24.915732525Z" level=warning msg="container event discarded" container=955024f3ddb17865c6d4d650867b81656b4c9d7e534fb77169656d6e73c1a85e type=CONTAINER_CREATED_EVENT May 27 18:28:24.916182 containerd[1585]: time="2025-05-27T18:28:24.915991057Z" level=warning msg="container event discarded" container=955024f3ddb17865c6d4d650867b81656b4c9d7e534fb77169656d6e73c1a85e type=CONTAINER_STARTED_EVENT May 27 18:28:25.543720 kubelet[2794]: E0527 18:28:25.543228 2794 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-5f7dd7cb5-258cr" podUID="107d1d0f-9dd5-43de-81cb-0f8c43731395" May 27 18:28:25.942983 containerd[1585]: time="2025-05-27T18:28:25.941810201Z" level=warning msg="container event discarded" container=f8656a772b1d841043c162ed265b40ef4d6d604c4e1d19b2ab5cc4caaf732075 type=CONTAINER_CREATED_EVENT May 27 18:28:26.040525 containerd[1585]: time="2025-05-27T18:28:26.040398345Z" level=warning msg="container event discarded" container=f8656a772b1d841043c162ed265b40ef4d6d604c4e1d19b2ab5cc4caaf732075 type=CONTAINER_STARTED_EVENT May 27 18:28:26.510346 containerd[1585]: time="2025-05-27T18:28:26.510237298Z" level=warning msg="container event discarded" container=f8656a772b1d841043c162ed265b40ef4d6d604c4e1d19b2ab5cc4caaf732075 type=CONTAINER_STOPPED_EVENT May 27 18:28:28.969438 systemd[1]: Started sshd@12-172.24.4.229:22-172.24.4.1:60694.service - OpenSSH per-connection server daemon (172.24.4.1:60694). May 27 18:28:29.784510 containerd[1585]: time="2025-05-27T18:28:29.784212808Z" level=warning msg="container event discarded" container=f8ff9000ea20ca0de1f15eecce70409e5b7bb8bb9ace63d3a67f4d8d3ab6b11d type=CONTAINER_CREATED_EVENT May 27 18:28:29.895630 containerd[1585]: time="2025-05-27T18:28:29.895509768Z" level=warning msg="container event discarded" container=f8ff9000ea20ca0de1f15eecce70409e5b7bb8bb9ace63d3a67f4d8d3ab6b11d type=CONTAINER_STARTED_EVENT May 27 18:28:30.035767 sshd[5683]: Accepted publickey for core from 172.24.4.1 port 60694 ssh2: RSA SHA256:D6TRbEcVa7iPKsFjaVce7/kj7tFlICZCjA4HGLeEPTo May 27 18:28:30.038192 sshd-session[5683]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 18:28:30.049425 systemd-logind[1498]: New session 15 of user core. May 27 18:28:30.057938 systemd[1]: Started session-15.scope - Session 15 of User core. May 27 18:28:30.538749 kubelet[2794]: E0527 18:28:30.537711 2794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-pdb49" podUID="bbd4fd77-1837-421d-a4cd-17332246cc0a" May 27 18:28:30.818359 sshd[5685]: Connection closed by 172.24.4.1 port 60694 May 27 18:28:30.819136 sshd-session[5683]: pam_unix(sshd:session): session closed for user core May 27 18:28:30.843419 systemd[1]: sshd@12-172.24.4.229:22-172.24.4.1:60694.service: Deactivated successfully. May 27 18:28:30.855634 systemd[1]: session-15.scope: Deactivated successfully. May 27 18:28:30.861161 systemd-logind[1498]: Session 15 logged out. Waiting for processes to exit. 
May 27 18:28:30.873371 systemd[1]: Started sshd@13-172.24.4.229:22-172.24.4.1:60702.service - OpenSSH per-connection server daemon (172.24.4.1:60702). May 27 18:28:30.876467 systemd-logind[1498]: Removed session 15. May 27 18:28:32.139737 sshd[5697]: Accepted publickey for core from 172.24.4.1 port 60702 ssh2: RSA SHA256:D6TRbEcVa7iPKsFjaVce7/kj7tFlICZCjA4HGLeEPTo May 27 18:28:32.143929 sshd-session[5697]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 18:28:32.158885 systemd-logind[1498]: New session 16 of user core. May 27 18:28:32.166995 systemd[1]: Started session-16.scope - Session 16 of User core. May 27 18:28:32.923803 sshd[5712]: Connection closed by 172.24.4.1 port 60702 May 27 18:28:32.921853 sshd-session[5697]: pam_unix(sshd:session): session closed for user core May 27 18:28:32.942984 systemd[1]: sshd@13-172.24.4.229:22-172.24.4.1:60702.service: Deactivated successfully. May 27 18:28:32.950938 systemd[1]: session-16.scope: Deactivated successfully. May 27 18:28:32.954324 systemd-logind[1498]: Session 16 logged out. Waiting for processes to exit. May 27 18:28:32.962272 systemd[1]: Started sshd@14-172.24.4.229:22-172.24.4.1:60714.service - OpenSSH per-connection server daemon (172.24.4.1:60714). May 27 18:28:32.965724 systemd-logind[1498]: Removed session 16. May 27 18:28:34.178126 sshd[5722]: Accepted publickey for core from 172.24.4.1 port 60714 ssh2: RSA SHA256:D6TRbEcVa7iPKsFjaVce7/kj7tFlICZCjA4HGLeEPTo May 27 18:28:34.180302 sshd-session[5722]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 18:28:34.195051 systemd-logind[1498]: New session 17 of user core. May 27 18:28:34.205281 systemd[1]: Started session-17.scope - Session 17 of User core. May 27 18:28:35.042811 containerd[1585]: time="2025-05-27T18:28:35.042383635Z" level=warning msg="container event discarded" container=2de708894997a9c926dfbe1773ed58663ad7420777914339b03be45e437f8d1c type=CONTAINER_CREATED_EVENT May 27 18:28:35.050336 sshd[5724]: Connection closed by 172.24.4.1 port 60714 May 27 18:28:35.051655 sshd-session[5722]: pam_unix(sshd:session): session closed for user core May 27 18:28:35.060104 systemd[1]: sshd@14-172.24.4.229:22-172.24.4.1:60714.service: Deactivated successfully. May 27 18:28:35.066847 systemd[1]: session-17.scope: Deactivated successfully. May 27 18:28:35.069487 systemd-logind[1498]: Session 17 logged out. Waiting for processes to exit. May 27 18:28:35.073637 systemd-logind[1498]: Removed session 17. 
May 27 18:28:35.135625 containerd[1585]: time="2025-05-27T18:28:35.135415845Z" level=warning msg="container event discarded" container=2de708894997a9c926dfbe1773ed58663ad7420777914339b03be45e437f8d1c type=CONTAINER_STARTED_EVENT May 27 18:28:35.600038 containerd[1585]: time="2025-05-27T18:28:35.599858977Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5aef368c4a95271ce37b9a677a96840bc90add35964beb96e0d6ced031d9d66f\" id:\"e1c1e234277d6a31f0ec90768fe577f263a1532638f82c7f5dad15a345a652fd\" pid:5747 exited_at:{seconds:1748370515 nanos:599047391}" May 27 18:28:37.932996 containerd[1585]: time="2025-05-27T18:28:37.932830048Z" level=warning msg="container event discarded" container=2de708894997a9c926dfbe1773ed58663ad7420777914339b03be45e437f8d1c type=CONTAINER_STOPPED_EVENT May 27 18:28:38.268166 containerd[1585]: time="2025-05-27T18:28:38.267992892Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5aef368c4a95271ce37b9a677a96840bc90add35964beb96e0d6ced031d9d66f\" id:\"830cb9a835edd89e4f013ba320136a3097a9003649e7de74d10b91dcc99fb1ba\" pid:5776 exited_at:{seconds:1748370518 nanos:267229137}" May 27 18:28:39.546181 kubelet[2794]: E0527 18:28:39.545662 2794 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-5f7dd7cb5-258cr" podUID="107d1d0f-9dd5-43de-81cb-0f8c43731395" May 27 18:28:40.085424 systemd[1]: Started sshd@15-172.24.4.229:22-172.24.4.1:35094.service - OpenSSH per-connection server daemon (172.24.4.1:35094). May 27 18:28:41.509615 sshd[5788]: Accepted publickey for core from 172.24.4.1 port 35094 ssh2: RSA SHA256:D6TRbEcVa7iPKsFjaVce7/kj7tFlICZCjA4HGLeEPTo May 27 18:28:41.512301 sshd-session[5788]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 18:28:41.535325 systemd-logind[1498]: New session 18 of user core. May 27 18:28:41.541661 systemd[1]: Started session-18.scope - Session 18 of User core. May 27 18:28:42.211169 sshd[5797]: Connection closed by 172.24.4.1 port 35094 May 27 18:28:42.210260 sshd-session[5788]: pam_unix(sshd:session): session closed for user core May 27 18:28:42.217547 systemd[1]: sshd@15-172.24.4.229:22-172.24.4.1:35094.service: Deactivated successfully. May 27 18:28:42.225280 systemd[1]: session-18.scope: Deactivated successfully. May 27 18:28:42.227075 systemd-logind[1498]: Session 18 logged out. Waiting for processes to exit. May 27 18:28:42.229733 systemd-logind[1498]: Removed session 18. May 27 18:28:43.539252 kubelet[2794]: E0527 18:28:43.539118 2794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-pdb49" podUID="bbd4fd77-1837-421d-a4cd-17332246cc0a" May 27 18:28:47.228629 systemd[1]: Started sshd@16-172.24.4.229:22-172.24.4.1:36492.service - OpenSSH per-connection server daemon (172.24.4.1:36492). 
May 27 18:28:47.346216 containerd[1585]: time="2025-05-27T18:28:47.346037160Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fa387d247c6751a9fd3e4bb2659b7f075b4326082ca83aebfaca596f8e54d0a6\" id:\"d5cd988c7ca39fe9c01f90b3dd577912d2938be088cef20078acf2050ee53bfc\" pid:5822 exited_at:{seconds:1748370527 nanos:344248036}" May 27 18:28:48.359605 sshd[5834]: Accepted publickey for core from 172.24.4.1 port 36492 ssh2: RSA SHA256:D6TRbEcVa7iPKsFjaVce7/kj7tFlICZCjA4HGLeEPTo May 27 18:28:48.365375 sshd-session[5834]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 18:28:48.392775 systemd-logind[1498]: New session 19 of user core. May 27 18:28:48.400293 systemd[1]: Started session-19.scope - Session 19 of User core. May 27 18:28:48.692929 containerd[1585]: time="2025-05-27T18:28:48.692453884Z" level=warning msg="container event discarded" container=fa387d247c6751a9fd3e4bb2659b7f075b4326082ca83aebfaca596f8e54d0a6 type=CONTAINER_CREATED_EVENT May 27 18:28:48.894066 containerd[1585]: time="2025-05-27T18:28:48.893971980Z" level=warning msg="container event discarded" container=fa387d247c6751a9fd3e4bb2659b7f075b4326082ca83aebfaca596f8e54d0a6 type=CONTAINER_STARTED_EVENT May 27 18:28:49.114917 sshd[5836]: Connection closed by 172.24.4.1 port 36492 May 27 18:28:49.116553 sshd-session[5834]: pam_unix(sshd:session): session closed for user core May 27 18:28:49.143770 systemd[1]: sshd@16-172.24.4.229:22-172.24.4.1:36492.service: Deactivated successfully. May 27 18:28:49.151729 systemd[1]: session-19.scope: Deactivated successfully. May 27 18:28:49.157795 systemd-logind[1498]: Session 19 logged out. Waiting for processes to exit. May 27 18:28:49.161220 systemd-logind[1498]: Removed session 19. May 27 18:28:50.417965 containerd[1585]: time="2025-05-27T18:28:50.417670577Z" level=warning msg="container event discarded" container=aa843dfc111c484115eb80b16aac8a01a1f7bf177b036ef80416575be18a1e33 type=CONTAINER_CREATED_EVENT May 27 18:28:50.417965 containerd[1585]: time="2025-05-27T18:28:50.417944529Z" level=warning msg="container event discarded" container=aa843dfc111c484115eb80b16aac8a01a1f7bf177b036ef80416575be18a1e33 type=CONTAINER_STARTED_EVENT May 27 18:28:50.520375 containerd[1585]: time="2025-05-27T18:28:50.520274644Z" level=warning msg="container event discarded" container=06bb8fc11aede6dc364995f62a29f8f72bd635eaa5b5ae0c3083d7e13ca8d683 type=CONTAINER_CREATED_EVENT May 27 18:28:50.700405 containerd[1585]: time="2025-05-27T18:28:50.699944698Z" level=warning msg="container event discarded" container=dd41d98f15f637588afcb0b7348149f786cc56c9ac7291a637b985a644e50d5e type=CONTAINER_CREATED_EVENT May 27 18:28:50.700405 containerd[1585]: time="2025-05-27T18:28:50.700085727Z" level=warning msg="container event discarded" container=dd41d98f15f637588afcb0b7348149f786cc56c9ac7291a637b985a644e50d5e type=CONTAINER_STARTED_EVENT May 27 18:28:50.773493 containerd[1585]: time="2025-05-27T18:28:50.773381184Z" level=warning msg="container event discarded" container=06bb8fc11aede6dc364995f62a29f8f72bd635eaa5b5ae0c3083d7e13ca8d683 type=CONTAINER_STARTED_EVENT May 27 18:28:51.326361 containerd[1585]: time="2025-05-27T18:28:51.326272331Z" level=warning msg="container event discarded" container=7a139618ba685d17ce1fb937f1d3d4454467f2b6902ed0c60ef05aed00ff03b4 type=CONTAINER_CREATED_EVENT May 27 18:28:51.326361 containerd[1585]: time="2025-05-27T18:28:51.326325352Z" level=warning msg="container event discarded" 
container=7a139618ba685d17ce1fb937f1d3d4454467f2b6902ed0c60ef05aed00ff03b4 type=CONTAINER_STARTED_EVENT May 27 18:28:52.216568 containerd[1585]: time="2025-05-27T18:28:52.216433482Z" level=warning msg="container event discarded" container=96e4262da3ab3f29203f31f75e081e6b7b76f7a9ceb1bf8848a42eaac4d63409 type=CONTAINER_CREATED_EVENT May 27 18:28:52.216568 containerd[1585]: time="2025-05-27T18:28:52.216519025Z" level=warning msg="container event discarded" container=96e4262da3ab3f29203f31f75e081e6b7b76f7a9ceb1bf8848a42eaac4d63409 type=CONTAINER_STARTED_EVENT May 27 18:28:52.272814 containerd[1585]: time="2025-05-27T18:28:52.272728287Z" level=warning msg="container event discarded" container=74f8b08bddbe8a2da009819735cd033c18cc082e6366f5a0c3105dbbb6ca2772 type=CONTAINER_CREATED_EVENT May 27 18:28:52.357451 containerd[1585]: time="2025-05-27T18:28:52.357269609Z" level=warning msg="container event discarded" container=74f8b08bddbe8a2da009819735cd033c18cc082e6366f5a0c3105dbbb6ca2772 type=CONTAINER_STARTED_EVENT May 27 18:28:53.545380 kubelet[2794]: E0527 18:28:53.545022 2794 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-5f7dd7cb5-258cr" podUID="107d1d0f-9dd5-43de-81cb-0f8c43731395" May 27 18:28:54.155216 systemd[1]: Started sshd@17-172.24.4.229:22-172.24.4.1:54428.service - OpenSSH per-connection server daemon (172.24.4.1:54428). May 27 18:28:54.377757 containerd[1585]: time="2025-05-27T18:28:54.377565220Z" level=warning msg="container event discarded" container=884ef37ac50e561d1e73bd661b9ceea5ef70eae8feeacab07d0e29e36fe5ad73 type=CONTAINER_CREATED_EVENT May 27 18:28:54.378854 containerd[1585]: time="2025-05-27T18:28:54.378730335Z" level=warning msg="container event discarded" container=884ef37ac50e561d1e73bd661b9ceea5ef70eae8feeacab07d0e29e36fe5ad73 type=CONTAINER_STARTED_EVENT May 27 18:28:54.578643 containerd[1585]: time="2025-05-27T18:28:54.578444558Z" level=warning msg="container event discarded" container=70981d974aa26e3cad21bfd5a74db159def57f9e9d1f0204bc9464dc8e07a9ac type=CONTAINER_CREATED_EVENT May 27 18:28:54.578643 containerd[1585]: time="2025-05-27T18:28:54.578573874Z" level=warning msg="container event discarded" container=70981d974aa26e3cad21bfd5a74db159def57f9e9d1f0204bc9464dc8e07a9ac type=CONTAINER_STARTED_EVENT May 27 18:28:54.602659 containerd[1585]: time="2025-05-27T18:28:54.602560266Z" level=warning msg="container event discarded" container=b94a5eb29cebda6baffb8480ef0e4dcc75265ece97f67ffe5599bc294e57f666 type=CONTAINER_CREATED_EVENT May 27 18:28:54.602659 containerd[1585]: time="2025-05-27T18:28:54.602632875Z" level=warning msg="container event discarded" container=b94a5eb29cebda6baffb8480ef0e4dcc75265ece97f67ffe5599bc294e57f666 type=CONTAINER_STARTED_EVENT May 27 18:28:54.622141 containerd[1585]: time="2025-05-27T18:28:54.621970236Z" level=warning msg="container event discarded" container=3f6effd24fcbca4c661c5a1f4e124175e7e772199be3a0eb51d9176cfaf64717 type=CONTAINER_CREATED_EVENT May 27 18:28:54.622141 containerd[1585]: time="2025-05-27T18:28:54.622087630Z" level=warning msg="container event discarded" container=3f6effd24fcbca4c661c5a1f4e124175e7e772199be3a0eb51d9176cfaf64717 type=CONTAINER_STARTED_EVENT May 27 18:28:55.288813 sshd[5847]: 
Accepted publickey for core from 172.24.4.1 port 54428 ssh2: RSA SHA256:D6TRbEcVa7iPKsFjaVce7/kj7tFlICZCjA4HGLeEPTo May 27 18:28:55.292138 sshd-session[5847]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 18:28:55.306812 systemd-logind[1498]: New session 20 of user core. May 27 18:28:55.319031 systemd[1]: Started session-20.scope - Session 20 of User core. May 27 18:28:56.203962 sshd[5849]: Connection closed by 172.24.4.1 port 54428 May 27 18:28:56.202962 sshd-session[5847]: pam_unix(sshd:session): session closed for user core May 27 18:28:56.226473 systemd[1]: sshd@17-172.24.4.229:22-172.24.4.1:54428.service: Deactivated successfully. May 27 18:28:56.234315 systemd[1]: session-20.scope: Deactivated successfully. May 27 18:28:56.237808 systemd-logind[1498]: Session 20 logged out. Waiting for processes to exit. May 27 18:28:56.252192 systemd[1]: Started sshd@18-172.24.4.229:22-172.24.4.1:54436.service - OpenSSH per-connection server daemon (172.24.4.1:54436). May 27 18:28:56.255841 systemd-logind[1498]: Removed session 20. May 27 18:28:56.861780 containerd[1585]: time="2025-05-27T18:28:56.861491511Z" level=warning msg="container event discarded" container=5aef368c4a95271ce37b9a677a96840bc90add35964beb96e0d6ced031d9d66f type=CONTAINER_CREATED_EVENT May 27 18:28:56.984929 containerd[1585]: time="2025-05-27T18:28:56.984813126Z" level=warning msg="container event discarded" container=5aef368c4a95271ce37b9a677a96840bc90add35964beb96e0d6ced031d9d66f type=CONTAINER_STARTED_EVENT May 27 18:28:57.458447 sshd[5861]: Accepted publickey for core from 172.24.4.1 port 54436 ssh2: RSA SHA256:D6TRbEcVa7iPKsFjaVce7/kj7tFlICZCjA4HGLeEPTo May 27 18:28:57.462662 sshd-session[5861]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 18:28:57.477856 systemd-logind[1498]: New session 21 of user core. May 27 18:28:57.491083 systemd[1]: Started session-21.scope - Session 21 of User core. May 27 18:28:58.541329 kubelet[2794]: E0527 18:28:58.540584 2794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-pdb49" podUID="bbd4fd77-1837-421d-a4cd-17332246cc0a" May 27 18:28:58.806915 sshd[5863]: Connection closed by 172.24.4.1 port 54436 May 27 18:28:58.807095 sshd-session[5861]: pam_unix(sshd:session): session closed for user core May 27 18:28:58.815907 systemd[1]: sshd@18-172.24.4.229:22-172.24.4.1:54436.service: Deactivated successfully. May 27 18:28:58.819176 systemd[1]: session-21.scope: Deactivated successfully. May 27 18:28:58.822561 systemd-logind[1498]: Session 21 logged out. Waiting for processes to exit. May 27 18:28:58.826961 systemd[1]: Started sshd@19-172.24.4.229:22-172.24.4.1:54446.service - OpenSSH per-connection server daemon (172.24.4.1:54446). May 27 18:28:58.830174 systemd-logind[1498]: Removed session 21. May 27 18:28:59.989043 sshd[5873]: Accepted publickey for core from 172.24.4.1 port 54446 ssh2: RSA SHA256:D6TRbEcVa7iPKsFjaVce7/kj7tFlICZCjA4HGLeEPTo May 27 18:28:59.991341 sshd-session[5873]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 18:29:00.004680 systemd-logind[1498]: New session 22 of user core. May 27 18:29:00.015042 systemd[1]: Started session-22.scope - Session 22 of User core. 
May 27 18:29:02.069367 containerd[1585]: time="2025-05-27T18:29:02.068196103Z" level=warning msg="container event discarded" container=bf32561476c7f87042215cfc517b3814e4ecbbf9239899609479ae43922a0153 type=CONTAINER_CREATED_EVENT May 27 18:29:02.218074 containerd[1585]: time="2025-05-27T18:29:02.218003276Z" level=warning msg="container event discarded" container=bf32561476c7f87042215cfc517b3814e4ecbbf9239899609479ae43922a0153 type=CONTAINER_STARTED_EVENT May 27 18:29:02.946729 containerd[1585]: time="2025-05-27T18:29:02.946283381Z" level=warning msg="container event discarded" container=c3ec8f8342890eeb43a92676eb933c67798162728bf80164ea3b1b7546ea7378 type=CONTAINER_CREATED_EVENT May 27 18:29:03.192898 containerd[1585]: time="2025-05-27T18:29:03.192751633Z" level=warning msg="container event discarded" container=c3ec8f8342890eeb43a92676eb933c67798162728bf80164ea3b1b7546ea7378 type=CONTAINER_STARTED_EVENT May 27 18:29:04.165729 sshd[5875]: Connection closed by 172.24.4.1 port 54446 May 27 18:29:04.169158 sshd-session[5873]: pam_unix(sshd:session): session closed for user core May 27 18:29:04.197423 systemd[1]: sshd@19-172.24.4.229:22-172.24.4.1:54446.service: Deactivated successfully. May 27 18:29:04.207374 systemd[1]: session-22.scope: Deactivated successfully. May 27 18:29:04.208159 systemd[1]: session-22.scope: Consumed 1.013s CPU time, 76.7M memory peak. May 27 18:29:04.210948 systemd-logind[1498]: Session 22 logged out. Waiting for processes to exit. May 27 18:29:04.221658 systemd[1]: Started sshd@20-172.24.4.229:22-172.24.4.1:36732.service - OpenSSH per-connection server daemon (172.24.4.1:36732). May 27 18:29:04.226514 systemd-logind[1498]: Removed session 22. May 27 18:29:04.549740 kubelet[2794]: E0527 18:29:04.547983 2794 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-5f7dd7cb5-258cr" podUID="107d1d0f-9dd5-43de-81cb-0f8c43731395" May 27 18:29:05.655239 sshd[5896]: Accepted publickey for core from 172.24.4.1 port 36732 ssh2: RSA SHA256:D6TRbEcVa7iPKsFjaVce7/kj7tFlICZCjA4HGLeEPTo May 27 18:29:05.659010 sshd-session[5896]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 18:29:05.683390 systemd-logind[1498]: New session 23 of user core. May 27 18:29:05.695074 systemd[1]: Started session-23.scope - Session 23 of User core. May 27 18:29:05.859845 containerd[1585]: time="2025-05-27T18:29:05.859065750Z" level=warning msg="container event discarded" container=9c66925581d668555330d65a60e48a1bea14dc14f96806d0c13660d295fcfb96 type=CONTAINER_CREATED_EVENT May 27 18:29:06.206924 containerd[1585]: time="2025-05-27T18:29:06.206770407Z" level=warning msg="container event discarded" container=9c66925581d668555330d65a60e48a1bea14dc14f96806d0c13660d295fcfb96 type=CONTAINER_STARTED_EVENT May 27 18:29:07.194580 sshd[5898]: Connection closed by 172.24.4.1 port 36732 May 27 18:29:07.195227 sshd-session[5896]: pam_unix(sshd:session): session closed for user core May 27 18:29:07.225014 systemd[1]: sshd@20-172.24.4.229:22-172.24.4.1:36732.service: Deactivated successfully. May 27 18:29:07.235008 systemd[1]: session-23.scope: Deactivated successfully. May 27 18:29:07.240334 systemd-logind[1498]: Session 23 logged out. 
Waiting for processes to exit. May 27 18:29:07.255454 systemd[1]: Started sshd@21-172.24.4.229:22-172.24.4.1:36744.service - OpenSSH per-connection server daemon (172.24.4.1:36744). May 27 18:29:07.261548 systemd-logind[1498]: Removed session 23. May 27 18:29:08.264867 containerd[1585]: time="2025-05-27T18:29:08.264774615Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5aef368c4a95271ce37b9a677a96840bc90add35964beb96e0d6ced031d9d66f\" id:\"d6d5d1d8cfcace8f2cacd2f7946355555fe630d32853d6b430759df741bfd361\" pid:5922 exited_at:{seconds:1748370548 nanos:263586132}" May 27 18:29:08.750768 sshd[5908]: Accepted publickey for core from 172.24.4.1 port 36744 ssh2: RSA SHA256:D6TRbEcVa7iPKsFjaVce7/kj7tFlICZCjA4HGLeEPTo May 27 18:29:08.753860 sshd-session[5908]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 18:29:08.776467 systemd-logind[1498]: New session 24 of user core. May 27 18:29:08.785272 systemd[1]: Started session-24.scope - Session 24 of User core. May 27 18:29:08.991840 containerd[1585]: time="2025-05-27T18:29:08.991607822Z" level=warning msg="container event discarded" container=bd36727705ded3886de863822826b236735a1b07f25bc65d1c92961a7a6b0094 type=CONTAINER_CREATED_EVENT May 27 18:29:09.096596 containerd[1585]: time="2025-05-27T18:29:09.096162410Z" level=warning msg="container event discarded" container=bd36727705ded3886de863822826b236735a1b07f25bc65d1c92961a7a6b0094 type=CONTAINER_STARTED_EVENT May 27 18:29:09.512891 sshd[5933]: Connection closed by 172.24.4.1 port 36744 May 27 18:29:09.513610 sshd-session[5908]: pam_unix(sshd:session): session closed for user core May 27 18:29:09.524240 systemd[1]: sshd@21-172.24.4.229:22-172.24.4.1:36744.service: Deactivated successfully. May 27 18:29:09.530058 systemd[1]: session-24.scope: Deactivated successfully. May 27 18:29:09.539582 systemd-logind[1498]: Session 24 logged out. Waiting for processes to exit. May 27 18:29:09.545027 systemd-logind[1498]: Removed session 24. May 27 18:29:13.537010 kubelet[2794]: E0527 18:29:13.536919 2794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-pdb49" podUID="bbd4fd77-1837-421d-a4cd-17332246cc0a" May 27 18:29:14.541301 systemd[1]: Started sshd@22-172.24.4.229:22-172.24.4.1:49550.service - OpenSSH per-connection server daemon (172.24.4.1:49550). May 27 18:29:15.836480 sshd[5956]: Accepted publickey for core from 172.24.4.1 port 49550 ssh2: RSA SHA256:D6TRbEcVa7iPKsFjaVce7/kj7tFlICZCjA4HGLeEPTo May 27 18:29:15.839044 sshd-session[5956]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 18:29:15.847762 systemd-logind[1498]: New session 25 of user core. May 27 18:29:15.851896 systemd[1]: Started session-25.scope - Session 25 of User core. May 27 18:29:16.576124 sshd[5958]: Connection closed by 172.24.4.1 port 49550 May 27 18:29:16.577158 sshd-session[5956]: pam_unix(sshd:session): session closed for user core May 27 18:29:16.583503 systemd[1]: sshd@22-172.24.4.229:22-172.24.4.1:49550.service: Deactivated successfully. May 27 18:29:16.588985 systemd[1]: session-25.scope: Deactivated successfully. May 27 18:29:16.590112 systemd-logind[1498]: Session 25 logged out. Waiting for processes to exit. May 27 18:29:16.592060 systemd-logind[1498]: Removed session 25. 
May 27 18:29:17.025167 containerd[1585]: time="2025-05-27T18:29:17.025063016Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fa387d247c6751a9fd3e4bb2659b7f075b4326082ca83aebfaca596f8e54d0a6\" id:\"d6238715aaeb79d7b51c4c84c301d52db191f3f3a66ae1d80fd0c66208c2bb34\" pid:5982 exited_at:{seconds:1748370557 nanos:24251190}" May 27 18:29:17.540118 kubelet[2794]: E0527 18:29:17.539793 2794 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-5f7dd7cb5-258cr" podUID="107d1d0f-9dd5-43de-81cb-0f8c43731395" May 27 18:29:21.614007 systemd[1]: Started sshd@23-172.24.4.229:22-172.24.4.1:49552.service - OpenSSH per-connection server daemon (172.24.4.1:49552). May 27 18:29:23.034777 sshd[5994]: Accepted publickey for core from 172.24.4.1 port 49552 ssh2: RSA SHA256:D6TRbEcVa7iPKsFjaVce7/kj7tFlICZCjA4HGLeEPTo May 27 18:29:23.038475 sshd-session[5994]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 18:29:23.053945 systemd-logind[1498]: New session 26 of user core. May 27 18:29:23.058928 systemd[1]: Started session-26.scope - Session 26 of User core. May 27 18:29:23.700833 sshd[5996]: Connection closed by 172.24.4.1 port 49552 May 27 18:29:23.703914 sshd-session[5994]: pam_unix(sshd:session): session closed for user core May 27 18:29:23.708973 systemd[1]: sshd@23-172.24.4.229:22-172.24.4.1:49552.service: Deactivated successfully. May 27 18:29:23.712640 systemd[1]: session-26.scope: Deactivated successfully. May 27 18:29:23.714380 systemd-logind[1498]: Session 26 logged out. Waiting for processes to exit. May 27 18:29:23.716888 systemd-logind[1498]: Removed session 26. May 27 18:29:24.537742 kubelet[2794]: E0527 18:29:24.537497 2794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-pdb49" podUID="bbd4fd77-1837-421d-a4cd-17332246cc0a" May 27 18:29:28.538110 kubelet[2794]: E0527 18:29:28.537923 2794 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-5f7dd7cb5-258cr" podUID="107d1d0f-9dd5-43de-81cb-0f8c43731395" May 27 18:29:28.727088 systemd[1]: Started sshd@24-172.24.4.229:22-172.24.4.1:34076.service - OpenSSH per-connection server daemon (172.24.4.1:34076). May 27 18:29:29.947619 sshd[6008]: Accepted publickey for core from 172.24.4.1 port 34076 ssh2: RSA SHA256:D6TRbEcVa7iPKsFjaVce7/kj7tFlICZCjA4HGLeEPTo May 27 18:29:29.953122 sshd-session[6008]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 18:29:29.974093 systemd-logind[1498]: New session 27 of user core. May 27 18:29:29.980407 systemd[1]: Started session-27.scope - Session 27 of User core. 
May 27 18:29:30.783212 sshd[6010]: Connection closed by 172.24.4.1 port 34076 May 27 18:29:30.784171 sshd-session[6008]: pam_unix(sshd:session): session closed for user core May 27 18:29:30.794541 systemd-logind[1498]: Session 27 logged out. Waiting for processes to exit. May 27 18:29:30.796996 systemd[1]: sshd@24-172.24.4.229:22-172.24.4.1:34076.service: Deactivated successfully. May 27 18:29:30.806030 systemd[1]: session-27.scope: Deactivated successfully. May 27 18:29:30.810859 systemd-logind[1498]: Removed session 27. May 27 18:29:35.608367 containerd[1585]: time="2025-05-27T18:29:35.608234804Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5aef368c4a95271ce37b9a677a96840bc90add35964beb96e0d6ced031d9d66f\" id:\"f60df5c9548e889665f16f006c2ee94d9f1924e16b0a2c0b72d2537bca92cf4e\" pid:6035 exited_at:{seconds:1748370575 nanos:607301203}" May 27 18:29:35.814830 systemd[1]: Started sshd@25-172.24.4.229:22-172.24.4.1:36176.service - OpenSSH per-connection server daemon (172.24.4.1:36176). May 27 18:29:37.410741 sshd[6046]: Accepted publickey for core from 172.24.4.1 port 36176 ssh2: RSA SHA256:D6TRbEcVa7iPKsFjaVce7/kj7tFlICZCjA4HGLeEPTo May 27 18:29:37.414331 sshd-session[6046]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 18:29:37.427435 systemd-logind[1498]: New session 28 of user core. May 27 18:29:37.439179 systemd[1]: Started session-28.scope - Session 28 of User core. May 27 18:29:38.245855 containerd[1585]: time="2025-05-27T18:29:38.245795987Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5aef368c4a95271ce37b9a677a96840bc90add35964beb96e0d6ced031d9d66f\" id:\"ddfb0cf2c84e3f8d877b51d85de69a712ce3e182a7e75c96a45c62ee11bc959f\" pid:6068 exited_at:{seconds:1748370578 nanos:245246052}" May 27 18:29:38.278147 sshd[6048]: Connection closed by 172.24.4.1 port 36176 May 27 18:29:38.278588 sshd-session[6046]: pam_unix(sshd:session): session closed for user core May 27 18:29:38.284214 systemd-logind[1498]: Session 28 logged out. Waiting for processes to exit. May 27 18:29:38.286029 systemd[1]: sshd@25-172.24.4.229:22-172.24.4.1:36176.service: Deactivated successfully. May 27 18:29:38.288672 systemd[1]: session-28.scope: Deactivated successfully. May 27 18:29:38.290886 systemd-logind[1498]: Removed session 28. May 27 18:29:39.539163 kubelet[2794]: E0527 18:29:39.538954 2794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-pdb49" podUID="bbd4fd77-1837-421d-a4cd-17332246cc0a" May 27 18:29:40.545319 kubelet[2794]: E0527 18:29:40.545103 2794 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-5f7dd7cb5-258cr" podUID="107d1d0f-9dd5-43de-81cb-0f8c43731395" May 27 18:29:43.327193 systemd[1]: Started sshd@26-172.24.4.229:22-172.24.4.1:36178.service - OpenSSH per-connection server daemon (172.24.4.1:36178). 
May 27 18:29:44.596796 sshd[6084]: Accepted publickey for core from 172.24.4.1 port 36178 ssh2: RSA SHA256:D6TRbEcVa7iPKsFjaVce7/kj7tFlICZCjA4HGLeEPTo May 27 18:29:44.600292 sshd-session[6084]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 18:29:44.621876 systemd-logind[1498]: New session 29 of user core. May 27 18:29:44.632054 systemd[1]: Started session-29.scope - Session 29 of User core. May 27 18:29:45.444213 sshd[6086]: Connection closed by 172.24.4.1 port 36178 May 27 18:29:45.446055 sshd-session[6084]: pam_unix(sshd:session): session closed for user core May 27 18:29:45.454589 systemd[1]: sshd@26-172.24.4.229:22-172.24.4.1:36178.service: Deactivated successfully. May 27 18:29:45.465174 systemd[1]: session-29.scope: Deactivated successfully. May 27 18:29:45.469180 systemd-logind[1498]: Session 29 logged out. Waiting for processes to exit. May 27 18:29:45.473584 systemd-logind[1498]: Removed session 29. May 27 18:29:47.079116 containerd[1585]: time="2025-05-27T18:29:47.078587177Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fa387d247c6751a9fd3e4bb2659b7f075b4326082ca83aebfaca596f8e54d0a6\" id:\"152b9f75773ce537b012418745323dfa477f1967f2d5bbc62d3f0c8edb811878\" pid:6108 exited_at:{seconds:1748370587 nanos:74821300}" May 27 18:29:52.540354 containerd[1585]: time="2025-05-27T18:29:52.540180024Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 27 18:29:52.890598 containerd[1585]: time="2025-05-27T18:29:52.890059258Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 18:29:52.892511 containerd[1585]: time="2025-05-27T18:29:52.892227210Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 27 18:29:52.893742 containerd[1585]: time="2025-05-27T18:29:52.892412716Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 27 18:29:52.893947 kubelet[2794]: E0527 18:29:52.893314 2794 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 27 18:29:52.893947 kubelet[2794]: E0527 18:29:52.893548 2794 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" 
image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 27 18:29:52.895984 kubelet[2794]: E0527 18:29:52.895293 2794 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:7721ee7d67474ccca7a05148d0f91fc5,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wqlkp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5f7dd7cb5-258cr_calico-system(107d1d0f-9dd5-43de-81cb-0f8c43731395): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 18:29:52.900121 containerd[1585]: time="2025-05-27T18:29:52.899471575Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 27 18:29:53.263896 containerd[1585]: time="2025-05-27T18:29:53.263744523Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 18:29:53.266239 containerd[1585]: time="2025-05-27T18:29:53.265986827Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 27 18:29:53.266527 containerd[1585]: time="2025-05-27T18:29:53.266042934Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 27 18:29:53.266961 kubelet[2794]: E0527 18:29:53.266631 2794 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack 
image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 27 18:29:53.266961 kubelet[2794]: E0527 18:29:53.266825 2794 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 27 18:29:53.267665 kubelet[2794]: E0527 18:29:53.267177 2794 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wqlkp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5f7dd7cb5-258cr_calico-system(107d1d0f-9dd5-43de-81cb-0f8c43731395): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 18:29:53.269422 kubelet[2794]: E0527 18:29:53.269299 2794 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with 
ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-5f7dd7cb5-258cr" podUID="107d1d0f-9dd5-43de-81cb-0f8c43731395" May 27 18:29:54.539325 containerd[1585]: time="2025-05-27T18:29:54.539189375Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 27 18:29:54.875869 containerd[1585]: time="2025-05-27T18:29:54.875540987Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 18:29:54.878076 containerd[1585]: time="2025-05-27T18:29:54.877973197Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 27 18:29:54.878350 containerd[1585]: time="2025-05-27T18:29:54.878087827Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 27 18:29:54.879078 kubelet[2794]: E0527 18:29:54.878870 2794 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 27 18:29:54.881255 kubelet[2794]: E0527 18:29:54.879997 2794 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 27 18:29:54.881255 kubelet[2794]: E0527 18:29:54.880737 2794 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k62cm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-8f77d7b6c-pdb49_calico-system(bbd4fd77-1837-421d-a4cd-17332246cc0a): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 18:29:54.883236 kubelet[2794]: E0527 18:29:54.882989 2794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-8f77d7b6c-pdb49" podUID="bbd4fd77-1837-421d-a4cd-17332246cc0a" May 27 18:30:05.539911 kubelet[2794]: E0527 18:30:05.539813 2794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-pdb49" podUID="bbd4fd77-1837-421d-a4cd-17332246cc0a" May 27 18:30:05.546039 kubelet[2794]: E0527 18:30:05.545841 2794 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-5f7dd7cb5-258cr" podUID="107d1d0f-9dd5-43de-81cb-0f8c43731395" May 27 18:30:08.284331 containerd[1585]: time="2025-05-27T18:30:08.284101465Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5aef368c4a95271ce37b9a677a96840bc90add35964beb96e0d6ced031d9d66f\" id:\"c983a7869c6225da9eee5d9d4730d172bb2bd5fdcae515aa74a67f862cb2b8dd\" pid:6148 exited_at:{seconds:1748370608 nanos:283477986}" May 27 18:30:16.541185 kubelet[2794]: E0527 18:30:16.540994 2794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-pdb49" podUID="bbd4fd77-1837-421d-a4cd-17332246cc0a" May 27 18:30:17.090205 containerd[1585]: time="2025-05-27T18:30:17.090132001Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fa387d247c6751a9fd3e4bb2659b7f075b4326082ca83aebfaca596f8e54d0a6\" id:\"1b0317726178cc8af3d6cae652c52b0e20646b6ab2a1f1dd1d8bea8a7e5b541e\" pid:6178 exited_at:{seconds:1748370617 nanos:89259885}" May 27 18:30:18.542100 kubelet[2794]: E0527 18:30:18.541954 2794 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-5f7dd7cb5-258cr" podUID="107d1d0f-9dd5-43de-81cb-0f8c43731395" May 27 18:30:27.541706 kubelet[2794]: E0527 18:30:27.541363 2794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-pdb49" podUID="bbd4fd77-1837-421d-a4cd-17332246cc0a" May 27 18:30:33.550404 kubelet[2794]: E0527 18:30:33.547735 2794 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-5f7dd7cb5-258cr" podUID="107d1d0f-9dd5-43de-81cb-0f8c43731395" May 27 18:30:35.633063 containerd[1585]: 
time="2025-05-27T18:30:35.631442293Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5aef368c4a95271ce37b9a677a96840bc90add35964beb96e0d6ced031d9d66f\" id:\"587bae924a6dad68b45d850d0837e82d72c7dd5668284058eb56b9c6b6cad4a1\" pid:6201 exited_at:{seconds:1748370635 nanos:631045233}" May 27 18:30:38.277211 containerd[1585]: time="2025-05-27T18:30:38.277113677Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5aef368c4a95271ce37b9a677a96840bc90add35964beb96e0d6ced031d9d66f\" id:\"32b580c6bcd852c7b652026f7c1e1b71cc6ffe3d8cfa549dae288e163cb2816f\" pid:6222 exited_at:{seconds:1748370638 nanos:274390846}" May 27 18:30:39.536599 kubelet[2794]: E0527 18:30:39.536397 2794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-pdb49" podUID="bbd4fd77-1837-421d-a4cd-17332246cc0a" May 27 18:30:47.133015 containerd[1585]: time="2025-05-27T18:30:47.132283206Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fa387d247c6751a9fd3e4bb2659b7f075b4326082ca83aebfaca596f8e54d0a6\" id:\"67ac19b7b5445525401406ba8640d318a6ec0ba325e9889669d44492d0064f64\" pid:6245 exited_at:{seconds:1748370647 nanos:131173831}" May 27 18:30:47.543039 kubelet[2794]: E0527 18:30:47.542546 2794 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-5f7dd7cb5-258cr" podUID="107d1d0f-9dd5-43de-81cb-0f8c43731395" May 27 18:30:53.540042 kubelet[2794]: E0527 18:30:53.539256 2794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-pdb49" podUID="bbd4fd77-1837-421d-a4cd-17332246cc0a" May 27 18:31:02.539580 kubelet[2794]: E0527 18:31:02.539087 2794 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-5f7dd7cb5-258cr" podUID="107d1d0f-9dd5-43de-81cb-0f8c43731395" May 27 18:31:04.537118 kubelet[2794]: E0527 18:31:04.536859 2794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-pdb49" podUID="bbd4fd77-1837-421d-a4cd-17332246cc0a" May 27 18:31:08.272439 containerd[1585]: time="2025-05-27T18:31:08.272368900Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5aef368c4a95271ce37b9a677a96840bc90add35964beb96e0d6ced031d9d66f\" id:\"5a451975180051dc5c05a2b6e73a6753081fe5ae696dcb406c2ae212fd1fcda7\" pid:6270 exited_at:{seconds:1748370668 nanos:270979212}" May 27 18:31:16.538312 kubelet[2794]: E0527 18:31:16.538146 2794 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-pdb49" podUID="bbd4fd77-1837-421d-a4cd-17332246cc0a" May 27 18:31:16.541119 kubelet[2794]: E0527 18:31:16.539096 2794 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-5f7dd7cb5-258cr" podUID="107d1d0f-9dd5-43de-81cb-0f8c43731395" May 27 18:31:17.115953 containerd[1585]: time="2025-05-27T18:31:17.115829665Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fa387d247c6751a9fd3e4bb2659b7f075b4326082ca83aebfaca596f8e54d0a6\" id:\"ee4ec6540c4d85e90e0d7c4d47c65a4e5d46cd5603df89ff4181de81f0f03c93\" pid:6294 exited_at:{seconds:1748370677 nanos:113793958}" May 27 18:31:27.542345 kubelet[2794]: E0527 18:31:27.540983 2794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-pdb49" podUID="bbd4fd77-1837-421d-a4cd-17332246cc0a" May 27 18:31:30.539954 kubelet[2794]: E0527 18:31:30.538945 2794 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-5f7dd7cb5-258cr" podUID="107d1d0f-9dd5-43de-81cb-0f8c43731395" May 27 18:31:35.581460 containerd[1585]: time="2025-05-27T18:31:35.581185866Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5aef368c4a95271ce37b9a677a96840bc90add35964beb96e0d6ced031d9d66f\" id:\"b2de427ebb9f5a7e7ceb283ffcf48db381cdc7854c6042cfa3a1461f85552cc9\" pid:6317 exited_at:{seconds:1748370695 nanos:580130466}" May 27 18:31:38.260512 containerd[1585]: time="2025-05-27T18:31:38.260452329Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5aef368c4a95271ce37b9a677a96840bc90add35964beb96e0d6ced031d9d66f\" id:\"6440db3f3df8ac90b2d32626bc03d29801a4eb151bd6792698e1ecd1b164d58b\" pid:6339 exited_at:{seconds:1748370698 nanos:260072422}" May 27 18:31:41.543502 kubelet[2794]: E0527 18:31:41.542824 2794 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-5f7dd7cb5-258cr" podUID="107d1d0f-9dd5-43de-81cb-0f8c43731395" May 27 18:31:42.538086 kubelet[2794]: E0527 18:31:42.537369 2794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-pdb49" podUID="bbd4fd77-1837-421d-a4cd-17332246cc0a" May 27 18:31:47.125076 
containerd[1585]: time="2025-05-27T18:31:47.124996158Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fa387d247c6751a9fd3e4bb2659b7f075b4326082ca83aebfaca596f8e54d0a6\" id:\"10abf10edefc25018307ebaa06ac53d920e84a6435100243cecd81618202ff12\" pid:6383 exited_at:{seconds:1748370707 nanos:124330802}"