May 27 03:21:20.857280 kernel: Linux version 6.12.30-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Tue May 27 01:09:43 -00 2025
May 27 03:21:20.857320 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=f6c186658a19d5a08471ef76df75f82494b37b46908f9237b2c3cf497da860c6
May 27 03:21:20.857329 kernel: BIOS-provided physical RAM map:
May 27 03:21:20.857335 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
May 27 03:21:20.857341 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
May 27 03:21:20.857347 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
May 27 03:21:20.857356 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007cfdbfff] usable
May 27 03:21:20.857362 kernel: BIOS-e820: [mem 0x000000007cfdc000-0x000000007cffffff] reserved
May 27 03:21:20.857368 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
May 27 03:21:20.857374 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
May 27 03:21:20.857382 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
May 27 03:21:20.857396 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
May 27 03:21:20.857405 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
May 27 03:21:20.857413 kernel: NX (Execute Disable) protection: active
May 27 03:21:20.857427 kernel: APIC: Static calls initialized
May 27 03:21:20.857435 kernel: SMBIOS 3.0.0 present.
May 27 03:21:20.857444 kernel: DMI: Hetzner vServer/Standard PC (Q35 + ICH9, 2009), BIOS 20171111 11/11/2017
May 27 03:21:20.857452 kernel: DMI: Memory slots populated: 1/1
May 27 03:21:20.857461 kernel: Hypervisor detected: KVM
May 27 03:21:20.857469 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
May 27 03:21:20.857477 kernel: kvm-clock: using sched offset of 4576419486 cycles
May 27 03:21:20.857486 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
May 27 03:21:20.857497 kernel: tsc: Detected 2495.312 MHz processor
May 27 03:21:20.857506 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 27 03:21:20.857515 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 27 03:21:20.857524 kernel: last_pfn = 0x7cfdc max_arch_pfn = 0x400000000
May 27 03:21:20.857532 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
May 27 03:21:20.857541 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 27 03:21:20.857549 kernel: Using GB pages for direct mapping
May 27 03:21:20.857558 kernel: ACPI: Early table checksum verification disabled
May 27 03:21:20.857567 kernel: ACPI: RSDP 0x00000000000F5270 000014 (v00 BOCHS )
May 27 03:21:20.857579 kernel: ACPI: RSDT 0x000000007CFE2693 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 27 03:21:20.857588 kernel: ACPI: FACP 0x000000007CFE2483 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 27 03:21:20.857597 kernel: ACPI: DSDT 0x000000007CFE0040 002443 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 27 03:21:20.857605 kernel: ACPI: FACS 0x000000007CFE0000 000040
May 27 03:21:20.857615 kernel: ACPI: APIC 0x000000007CFE2577 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 27 03:21:20.857623 kernel: ACPI: HPET 0x000000007CFE25F7 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 27 03:21:20.857631 kernel: ACPI: MCFG 0x000000007CFE262F 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 27 03:21:20.857640 kernel: ACPI: WAET 0x000000007CFE266B 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 27 03:21:20.857648 kernel: ACPI: Reserving FACP table memory at [mem 0x7cfe2483-0x7cfe2576]
May 27 03:21:20.857662 kernel: ACPI: Reserving DSDT table memory at [mem 0x7cfe0040-0x7cfe2482]
May 27 03:21:20.857671 kernel: ACPI: Reserving FACS table memory at [mem 0x7cfe0000-0x7cfe003f]
May 27 03:21:20.857680 kernel: ACPI: Reserving APIC table memory at [mem 0x7cfe2577-0x7cfe25f6]
May 27 03:21:20.857689 kernel: ACPI: Reserving HPET table memory at [mem 0x7cfe25f7-0x7cfe262e]
May 27 03:21:20.857698 kernel: ACPI: Reserving MCFG table memory at [mem 0x7cfe262f-0x7cfe266a]
May 27 03:21:20.857710 kernel: ACPI: Reserving WAET table memory at [mem 0x7cfe266b-0x7cfe2692]
May 27 03:21:20.857719 kernel: No NUMA configuration found
May 27 03:21:20.857725 kernel: Faking a node at [mem 0x0000000000000000-0x000000007cfdbfff]
May 27 03:21:20.857732 kernel: NODE_DATA(0) allocated [mem 0x7cfd4dc0-0x7cfdbfff]
May 27 03:21:20.857739 kernel: Zone ranges:
May 27 03:21:20.857746 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 27 03:21:20.857753 kernel: DMA32 [mem 0x0000000001000000-0x000000007cfdbfff]
May 27 03:21:20.857760 kernel: Normal empty
May 27 03:21:20.857766 kernel: Device empty
May 27 03:21:20.857775 kernel: Movable zone start for each node
May 27 03:21:20.857781 kernel: Early memory node ranges
May 27 03:21:20.857788 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
May 27 03:21:20.857795 kernel: node 0: [mem 0x0000000000100000-0x000000007cfdbfff]
May 27 03:21:20.857802 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007cfdbfff]
May 27 03:21:20.857808 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 27 03:21:20.857815 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
May 27 03:21:20.857822 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
May 27 03:21:20.857829 kernel: ACPI: PM-Timer IO Port: 0x608
May 27 03:21:20.857835 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
May 27 03:21:20.857843 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
May 27 03:21:20.857850 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
May 27 03:21:20.857857 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
May 27 03:21:20.857866 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 27 03:21:20.857881 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
May 27 03:21:20.857892 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
May 27 03:21:20.857901 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 27 03:21:20.857908 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
May 27 03:21:20.857914 kernel: CPU topo: Max. logical packages: 1
May 27 03:21:20.857924 kernel: CPU topo: Max. logical dies: 1
May 27 03:21:20.857931 kernel: CPU topo: Max. dies per package: 1
May 27 03:21:20.857938 kernel: CPU topo: Max. threads per core: 1
May 27 03:21:20.857944 kernel: CPU topo: Num. cores per package: 2
May 27 03:21:20.857951 kernel: CPU topo: Num. threads per package: 2
May 27 03:21:20.857957 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
May 27 03:21:20.857964 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
May 27 03:21:20.857971 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
May 27 03:21:20.857978 kernel: Booting paravirtualized kernel on KVM
May 27 03:21:20.857985 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 27 03:21:20.857993 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
May 27 03:21:20.858000 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
May 27 03:21:20.858007 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
May 27 03:21:20.858014 kernel: pcpu-alloc: [0] 0 1
May 27 03:21:20.858020 kernel: kvm-guest: PV spinlocks disabled, no host support
May 27 03:21:20.858029 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=f6c186658a19d5a08471ef76df75f82494b37b46908f9237b2c3cf497da860c6
May 27 03:21:20.858038 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 27 03:21:20.858056 kernel: random: crng init done
May 27 03:21:20.858066 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 27 03:21:20.858075 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
May 27 03:21:20.858084 kernel: Fallback order for Node 0: 0
May 27 03:21:20.858093 kernel: Built 1 zonelists, mobility grouping on. Total pages: 511866
May 27 03:21:20.858103 kernel: Policy zone: DMA32
May 27 03:21:20.858111 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 27 03:21:20.858118 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
May 27 03:21:20.858125 kernel: ftrace: allocating 40081 entries in 157 pages
May 27 03:21:20.858132 kernel: ftrace: allocated 157 pages with 5 groups
May 27 03:21:20.858140 kernel: Dynamic Preempt: voluntary
May 27 03:21:20.858147 kernel: rcu: Preemptible hierarchical RCU implementation.
May 27 03:21:20.858155 kernel: rcu: RCU event tracing is enabled.
May 27 03:21:20.858162 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
May 27 03:21:20.858169 kernel: Trampoline variant of Tasks RCU enabled.
May 27 03:21:20.858176 kernel: Rude variant of Tasks RCU enabled.
May 27 03:21:20.858182 kernel: Tracing variant of Tasks RCU enabled.
May 27 03:21:20.858189 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 27 03:21:20.858196 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
May 27 03:21:20.858204 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 27 03:21:20.858211 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 27 03:21:20.858218 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 27 03:21:20.858239 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
May 27 03:21:20.858246 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 27 03:21:20.858254 kernel: Console: colour VGA+ 80x25
May 27 03:21:20.858265 kernel: printk: legacy console [tty0] enabled
May 27 03:21:20.858283 kernel: printk: legacy console [ttyS0] enabled
May 27 03:21:20.858294 kernel: ACPI: Core revision 20240827
May 27 03:21:20.858323 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
May 27 03:21:20.858331 kernel: APIC: Switch to symmetric I/O mode setup
May 27 03:21:20.858338 kernel: x2apic enabled
May 27 03:21:20.858347 kernel: APIC: Switched APIC routing to: physical x2apic
May 27 03:21:20.858356 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
May 27 03:21:20.858373 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f7ed49df2, max_idle_ns: 440795247253 ns
May 27 03:21:20.858383 kernel: Calibrating delay loop (skipped) preset value.. 4990.62 BogoMIPS (lpj=2495312)
May 27 03:21:20.858393 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
May 27 03:21:20.858403 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
May 27 03:21:20.858417 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
May 27 03:21:20.858427 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 27 03:21:20.858437 kernel: Spectre V2 : Mitigation: Retpolines
May 27 03:21:20.858445 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
May 27 03:21:20.858452 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
May 27 03:21:20.858459 kernel: RETBleed: Mitigation: untrained return thunk
May 27 03:21:20.858466 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
May 27 03:21:20.858476 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
May 27 03:21:20.858483 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
May 27 03:21:20.858491 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
May 27 03:21:20.858498 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
May 27 03:21:20.858506 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
May 27 03:21:20.858514 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
May 27 03:21:20.858522 kernel: Freeing SMP alternatives memory: 32K
May 27 03:21:20.858529 kernel: pid_max: default: 32768 minimum: 301
May 27 03:21:20.858536 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
May 27 03:21:20.858545 kernel: landlock: Up and running.
May 27 03:21:20.858552 kernel: SELinux: Initializing.
May 27 03:21:20.858560 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
May 27 03:21:20.858568 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
May 27 03:21:20.858575 kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
May 27 03:21:20.858583 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
May 27 03:21:20.858590 kernel: ... version: 0
May 27 03:21:20.858597 kernel: ... bit width: 48
May 27 03:21:20.858605 kernel: ... generic registers: 6
May 27 03:21:20.858614 kernel: ... value mask: 0000ffffffffffff
May 27 03:21:20.858621 kernel: ... max period: 00007fffffffffff
May 27 03:21:20.858628 kernel: ... fixed-purpose events: 0
May 27 03:21:20.858636 kernel: ... event mask: 000000000000003f
May 27 03:21:20.858643 kernel: signal: max sigframe size: 1776
May 27 03:21:20.858651 kernel: rcu: Hierarchical SRCU implementation.
May 27 03:21:20.858659 kernel: rcu: Max phase no-delay instances is 400.
May 27 03:21:20.858666 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
May 27 03:21:20.858673 kernel: smp: Bringing up secondary CPUs ...
May 27 03:21:20.858682 kernel: smpboot: x86: Booting SMP configuration:
May 27 03:21:20.858689 kernel: .... node #0, CPUs: #1
May 27 03:21:20.858696 kernel: smp: Brought up 1 node, 2 CPUs
May 27 03:21:20.858703 kernel: smpboot: Total of 2 processors activated (9981.24 BogoMIPS)
May 27 03:21:20.858710 kernel: Memory: 1917780K/2047464K available (14336K kernel code, 2430K rwdata, 9952K rodata, 54416K init, 2552K bss, 125140K reserved, 0K cma-reserved)
May 27 03:21:20.858717 kernel: devtmpfs: initialized
May 27 03:21:20.858725 kernel: x86/mm: Memory block size: 128MB
May 27 03:21:20.858732 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 27 03:21:20.858739 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
May 27 03:21:20.858747 kernel: pinctrl core: initialized pinctrl subsystem
May 27 03:21:20.858754 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 27 03:21:20.858762 kernel: audit: initializing netlink subsys (disabled)
May 27 03:21:20.858769 kernel: audit: type=2000 audit(1748316076.888:1): state=initialized audit_enabled=0 res=1
May 27 03:21:20.858776 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 27 03:21:20.858783 kernel: thermal_sys: Registered thermal governor 'user_space'
May 27 03:21:20.858790 kernel: cpuidle: using governor menu
May 27 03:21:20.858798 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 27 03:21:20.858805 kernel: dca service started, version 1.12.1
May 27 03:21:20.858813 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
May 27 03:21:20.858820 kernel: PCI: Using configuration type 1 for base access
May 27 03:21:20.858827 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 27 03:21:20.858834 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 27 03:21:20.858842 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
May 27 03:21:20.858849 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 27 03:21:20.858856 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
May 27 03:21:20.858863 kernel: ACPI: Added _OSI(Module Device)
May 27 03:21:20.858870 kernel: ACPI: Added _OSI(Processor Device)
May 27 03:21:20.858878 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 27 03:21:20.858885 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 27 03:21:20.858893 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 27 03:21:20.858900 kernel: ACPI: Interpreter enabled
May 27 03:21:20.858907 kernel: ACPI: PM: (supports S0 S5)
May 27 03:21:20.858914 kernel: ACPI: Using IOAPIC for interrupt routing
May 27 03:21:20.858921 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 27 03:21:20.858928 kernel: PCI: Using E820 reservations for host bridge windows
May 27 03:21:20.858935 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
May 27 03:21:20.858943 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 27 03:21:20.859069 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 27 03:21:20.859140 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
May 27 03:21:20.859204 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
May 27 03:21:20.859213 kernel: PCI host bridge to bus 0000:00
May 27 03:21:20.859341 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
May 27 03:21:20.859404 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
May 27 03:21:20.859466 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
May 27 03:21:20.859524 kernel: pci_bus 0000:00: root bus resource [mem 0x7d000000-0xafffffff window]
May 27 03:21:20.859580 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
May 27 03:21:20.859636 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
May 27 03:21:20.859697 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 27 03:21:20.859786 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
May 27 03:21:20.859900 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
May 27 03:21:20.859976 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfb800000-0xfbffffff pref]
May 27 03:21:20.860045 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfd200000-0xfd203fff 64bit pref]
May 27 03:21:20.860110 kernel: pci 0000:00:01.0: BAR 4 [mem 0xfea10000-0xfea10fff]
May 27 03:21:20.860185 kernel: pci 0000:00:01.0: ROM [mem 0xfea00000-0xfea0ffff pref]
May 27 03:21:20.860270 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
May 27 03:21:20.860397 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
May 27 03:21:20.860503 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfea11000-0xfea11fff]
May 27 03:21:20.860605 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
May 27 03:21:20.860694 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff]
May 27 03:21:20.860762 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref]
May 27 03:21:20.860839 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port
May 27 03:21:20.864440 kernel: pci 0000:00:02.1: BAR 0 [mem 0xfea12000-0xfea12fff]
May 27 03:21:20.864521 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
May 27 03:21:20.864593 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff]
May 27 03:21:20.864660 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
May 27 03:21:20.864736 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port
May 27 03:21:20.864805 kernel: pci 0000:00:02.2: BAR 0 [mem 0xfea13000-0xfea13fff]
May 27 03:21:20.864871 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
May 27 03:21:20.864938 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff]
May 27 03:21:20.865003 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
May 27 03:21:20.865080 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port
May 27 03:21:20.865163 kernel: pci 0000:00:02.3: BAR 0 [mem 0xfea14000-0xfea14fff]
May 27 03:21:20.865249 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
May 27 03:21:20.865346 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff]
May 27 03:21:20.865443 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
May 27 03:21:20.865539 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port
May 27 03:21:20.865610 kernel: pci 0000:00:02.4: BAR 0 [mem 0xfea15000-0xfea15fff]
May 27 03:21:20.865680 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
May 27 03:21:20.865747 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff]
May 27 03:21:20.865821 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
May 27 03:21:20.865911 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port
May 27 03:21:20.865988 kernel: pci 0000:00:02.5: BAR 0 [mem 0xfea16000-0xfea16fff]
May 27 03:21:20.866055 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
May 27 03:21:20.866121 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff]
May 27 03:21:20.866198 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
May 27 03:21:20.867391 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port
May 27 03:21:20.867483 kernel: pci 0000:00:02.6: BAR 0 [mem 0xfea17000-0xfea17fff]
May 27 03:21:20.867553 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
May 27 03:21:20.867622 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff]
May 27 03:21:20.867689 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
May 27 03:21:20.867768 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port
May 27 03:21:20.867835 kernel: pci 0000:00:02.7: BAR 0 [mem 0xfea18000-0xfea18fff]
May 27 03:21:20.867901 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
May 27 03:21:20.867966 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff]
May 27 03:21:20.868032 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
May 27 03:21:20.868105 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
May 27 03:21:20.868173 kernel: pci 0000:00:03.0: BAR 0 [mem 0xfea19000-0xfea19fff]
May 27 03:21:20.868276 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
May 27 03:21:20.868390 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff]
May 27 03:21:20.868504 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
May 27 03:21:20.868602 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
May 27 03:21:20.868672 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
May 27 03:21:20.868747 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
May 27 03:21:20.868820 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc040-0xc05f]
May 27 03:21:20.868903 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfea1a000-0xfea1afff]
May 27 03:21:20.868979 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
May 27 03:21:20.869046 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
May 27 03:21:20.869124 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint
May 27 03:21:20.869194 kernel: pci 0000:01:00.0: BAR 1 [mem 0xfe880000-0xfe880fff]
May 27 03:21:20.869281 kernel: pci 0000:01:00.0: BAR 4 [mem 0xfd000000-0xfd003fff 64bit pref]
May 27 03:21:20.871392 kernel: pci 0000:01:00.0: ROM [mem 0xfe800000-0xfe87ffff pref]
May 27 03:21:20.871484 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
May 27 03:21:20.871596 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330 PCIe Endpoint
May 27 03:21:20.871674 kernel: pci 0000:02:00.0: BAR 0 [mem 0xfe600000-0xfe603fff 64bit]
May 27 03:21:20.871742 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
May 27 03:21:20.871821 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000 PCIe Endpoint
May 27 03:21:20.871890 kernel: pci 0000:03:00.0: BAR 1 [mem 0xfe400000-0xfe400fff]
May 27 03:21:20.871975 kernel: pci 0000:03:00.0: BAR 4 [mem 0xfcc00000-0xfcc03fff 64bit pref]
May 27 03:21:20.872066 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
May 27 03:21:20.872146 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00 PCIe Endpoint
May 27 03:21:20.872219 kernel: pci 0000:04:00.0: BAR 4 [mem 0xfca00000-0xfca03fff 64bit pref]
May 27 03:21:20.872332 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
May 27 03:21:20.872535 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00 PCIe Endpoint
May 27 03:21:20.872806 kernel: pci 0000:05:00.0: BAR 1 [mem 0xfe000000-0xfe000fff]
May 27 03:21:20.873032 kernel: pci 0000:05:00.0: BAR 4 [mem 0xfc800000-0xfc803fff 64bit pref]
May 27 03:21:20.873117 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
May 27 03:21:20.873195 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000 PCIe Endpoint
May 27 03:21:20.877335 kernel: pci 0000:06:00.0: BAR 1 [mem 0xfde00000-0xfde00fff]
May 27 03:21:20.877430 kernel: pci 0000:06:00.0: BAR 4 [mem 0xfc600000-0xfc603fff 64bit pref]
May 27 03:21:20.877502 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
May 27 03:21:20.877512 kernel: acpiphp: Slot [0] registered
May 27 03:21:20.877612 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint
May 27 03:21:20.877686 kernel: pci 0000:07:00.0: BAR 1 [mem 0xfdc80000-0xfdc80fff]
May 27 03:21:20.877811 kernel: pci 0000:07:00.0: BAR 4 [mem 0xfc400000-0xfc403fff 64bit pref]
May 27 03:21:20.877892 kernel: pci 0000:07:00.0: ROM [mem 0xfdc00000-0xfdc7ffff pref]
May 27 03:21:20.877962 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
May 27 03:21:20.877972 kernel: acpiphp: Slot [0-2] registered
May 27 03:21:20.878038 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
May 27 03:21:20.878051 kernel: acpiphp: Slot [0-3] registered
May 27 03:21:20.878117 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
May 27 03:21:20.878128 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
May 27 03:21:20.878135 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
May 27 03:21:20.878143 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
May 27 03:21:20.878150 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
May 27 03:21:20.878158 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
May 27 03:21:20.878165 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
May 27 03:21:20.878172 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
May 27 03:21:20.878182 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
May 27 03:21:20.878189 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
May 27 03:21:20.878197 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
May 27 03:21:20.878204 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
May 27 03:21:20.878211 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
May 27 03:21:20.878219 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
May 27 03:21:20.878239 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
May 27 03:21:20.878248 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
May 27 03:21:20.878256 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
May 27 03:21:20.878273 kernel: iommu: Default domain type: Translated
May 27 03:21:20.878285 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 27 03:21:20.878295 kernel: PCI: Using ACPI for IRQ routing
May 27 03:21:20.878817 kernel: PCI: pci_cache_line_size set to 64 bytes
May 27 03:21:20.878835 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
May 27 03:21:20.878845 kernel: e820: reserve RAM buffer [mem 0x7cfdc000-0x7fffffff]
May 27 03:21:20.878937 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
May 27 03:21:20.879020 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
May 27 03:21:20.879095 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
May 27 03:21:20.879105 kernel: vgaarb: loaded
May 27 03:21:20.879113 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
May 27 03:21:20.879121 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
May 27 03:21:20.879129 kernel: clocksource: Switched to clocksource kvm-clock
May 27 03:21:20.879137 kernel: VFS: Disk quotas dquot_6.6.0
May 27 03:21:20.879145 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 27 03:21:20.879153 kernel: pnp: PnP ACPI init
May 27 03:21:20.879240 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
May 27 03:21:20.879256 kernel: pnp: PnP ACPI: found 5 devices
May 27 03:21:20.879263 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 27 03:21:20.879271 kernel: NET: Registered PF_INET protocol family
May 27 03:21:20.879278 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 27 03:21:20.879286 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
May 27 03:21:20.879294 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 27 03:21:20.879329 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
May 27 03:21:20.879337 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
May 27 03:21:20.879346 kernel: TCP: Hash tables configured (established 16384 bind 16384)
May 27 03:21:20.879354 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
May 27 03:21:20.879361 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
May 27 03:21:20.879368 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 27 03:21:20.879376 kernel: NET: Registered PF_XDP protocol family
May 27 03:21:20.879449 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
May 27 03:21:20.879527 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
May 27 03:21:20.879598 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
May 27 03:21:20.879665 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x1fff]: assigned
May 27 03:21:20.879737 kernel: pci 0000:00:02.7: bridge window [io 0x2000-0x2fff]: assigned
May 27 03:21:20.879832 kernel: pci 0000:00:03.0: bridge window [io 0x3000-0x3fff]: assigned
May 27 03:21:20.879905 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
May 27 03:21:20.879981 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff]
May 27 03:21:20.880050 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref]
May 27 03:21:20.880119 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
May 27 03:21:20.880188 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff]
May 27 03:21:20.880270 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
May 27 03:21:20.881577 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
May 27 03:21:20.881664 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff]
May 27 03:21:20.881753 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
May 27 03:21:20.881850 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
May 27 03:21:20.881921 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff]
May 27 03:21:20.881988 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
May 27 03:21:20.882061 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
May 27 03:21:20.882131 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff]
May 27 03:21:20.882198 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
May 27 03:21:20.882365 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
May 27 03:21:20.882456 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff]
May 27 03:21:20.882526 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
May 27 03:21:20.882598 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
May 27 03:21:20.882675 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x1fff]
May 27 03:21:20.882748 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff]
May 27 03:21:20.882826 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
May 27 03:21:20.882907 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
May 27 03:21:20.882992 kernel: pci 0000:00:02.7: bridge window [io 0x2000-0x2fff]
May 27 03:21:20.883062 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff]
May 27 03:21:20.883129 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
May 27 03:21:20.883197 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
May 27 03:21:20.883282 kernel: pci 0000:00:03.0: bridge window [io 0x3000-0x3fff]
May 27 03:21:20.883383 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff]
May 27 03:21:20.883451 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
May 27 03:21:20.883518 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
May 27 03:21:20.883577 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
May 27 03:21:20.883635 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
May 27 03:21:20.883693 kernel: pci_bus 0000:00: resource 7 [mem 0x7d000000-0xafffffff window]
May 27 03:21:20.883750 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
May 27 03:21:20.883808 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
May 27 03:21:20.883884 kernel: pci_bus 0000:01: resource 1 [mem 0xfe800000-0xfe9fffff]
May 27 03:21:20.883948 kernel: pci_bus 0000:01: resource 2 [mem 0xfd000000-0xfd1fffff 64bit pref]
May 27 03:21:20.884024 kernel: pci_bus 0000:02: resource 1 [mem 0xfe600000-0xfe7fffff]
May 27 03:21:20.884089 kernel: pci_bus 0000:02: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
May 27 03:21:20.884159 kernel: pci_bus 0000:03: resource 1 [mem 0xfe400000-0xfe5fffff]
May 27 03:21:20.884233 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
May 27 03:21:20.884319 kernel: pci_bus 0000:04: resource 1 [mem 0xfe200000-0xfe3fffff]
May 27 03:21:20.884403 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
May 27 03:21:20.884473 kernel: pci_bus 0000:05: resource 1 [mem 0xfe000000-0xfe1fffff]
May 27 03:21:20.884541 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
May 27 03:21:20.884609 kernel: pci_bus 0000:06: resource 1 [mem 0xfde00000-0xfdffffff]
May 27 03:21:20.884672 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
May 27 03:21:20.884745 kernel: pci_bus 0000:07: resource 0 [io 0x1000-0x1fff]
May 27 03:21:20.884817 kernel: pci_bus 0000:07: resource 1 [mem 0xfdc00000-0xfddfffff]
May 27 03:21:20.885109 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
May 27 03:21:20.885186 kernel: pci_bus 0000:08: resource 0 [io 0x2000-0x2fff]
May 27 03:21:20.885263 kernel: pci_bus 0000:08: resource 1 [mem 0xfda00000-0xfdbfffff]
May 27 03:21:20.885351 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref]
May 27 03:21:20.885421 kernel: pci_bus 0000:09: resource 0 [io 0x3000-0x3fff]
May 27 03:21:20.885489 kernel: pci_bus 0000:09: resource 1 [mem 0xfd800000-0xfd9fffff]
May 27 03:21:20.885551 kernel: pci_bus 0000:09: resource 2 [mem
0xfc000000-0xfc1fffff 64bit pref] May 27 03:21:20.885562 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 May 27 03:21:20.885570 kernel: PCI: CLS 0 bytes, default 64 May 27 03:21:20.885578 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f7ed49df2, max_idle_ns: 440795247253 ns May 27 03:21:20.885587 kernel: Initialise system trusted keyrings May 27 03:21:20.885595 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 May 27 03:21:20.885602 kernel: Key type asymmetric registered May 27 03:21:20.885614 kernel: Asymmetric key parser 'x509' registered May 27 03:21:20.885625 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) May 27 03:21:20.885634 kernel: io scheduler mq-deadline registered May 27 03:21:20.885642 kernel: io scheduler kyber registered May 27 03:21:20.885649 kernel: io scheduler bfq registered May 27 03:21:20.885724 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 May 27 03:21:20.885825 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 May 27 03:21:20.885916 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 May 27 03:21:20.886025 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 May 27 03:21:20.886124 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 May 27 03:21:20.886216 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 May 27 03:21:20.886367 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 May 27 03:21:20.886472 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 May 27 03:21:20.886575 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 May 27 03:21:20.886667 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 May 27 03:21:20.886755 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 May 27 03:21:20.886829 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 May 27 03:21:20.886899 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 May 27 03:21:20.886967 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 May 27 
03:21:20.887034 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 May 27 03:21:20.887101 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 May 27 03:21:20.887115 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 May 27 03:21:20.887194 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 32 May 27 03:21:20.887336 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 32 May 27 03:21:20.887354 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 May 27 03:21:20.887362 kernel: ACPI: \_SB_.GSIF: Enabled at IRQ 21 May 27 03:21:20.887370 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 27 03:21:20.887378 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A May 27 03:21:20.887386 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 May 27 03:21:20.887394 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 May 27 03:21:20.887401 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 May 27 03:21:20.887479 kernel: rtc_cmos 00:03: RTC can wake from S4 May 27 03:21:20.887493 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 May 27 03:21:20.887555 kernel: rtc_cmos 00:03: registered as rtc0 May 27 03:21:20.887625 kernel: rtc_cmos 00:03: setting system clock to 2025-05-27T03:21:20 UTC (1748316080) May 27 03:21:20.887710 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs May 27 03:21:20.887722 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled May 27 03:21:20.887731 kernel: NET: Registered PF_INET6 protocol family May 27 03:21:20.887739 kernel: Segment Routing with IPv6 May 27 03:21:20.887757 kernel: In-situ OAM (IOAM) with IPv6 May 27 03:21:20.887770 kernel: NET: Registered PF_PACKET protocol family May 27 03:21:20.887778 kernel: Key type dns_resolver registered May 27 03:21:20.887786 kernel: IPI shorthand broadcast: enabled May 27 03:21:20.887794 kernel: sched_clock: Marking stable (3472007993, 
169567684)->(3651905923, -10330246) May 27 03:21:20.887801 kernel: registered taskstats version 1 May 27 03:21:20.887809 kernel: Loading compiled-in X.509 certificates May 27 03:21:20.887817 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.30-flatcar: ba9eddccb334a70147f3ddfe4fbde029feaa991d' May 27 03:21:20.887825 kernel: Demotion targets for Node 0: null May 27 03:21:20.887835 kernel: Key type .fscrypt registered May 27 03:21:20.887842 kernel: Key type fscrypt-provisioning registered May 27 03:21:20.887850 kernel: ima: No TPM chip found, activating TPM-bypass! May 27 03:21:20.887858 kernel: ima: Allocated hash algorithm: sha1 May 27 03:21:20.887865 kernel: ima: No architecture policies found May 27 03:21:20.887873 kernel: clk: Disabling unused clocks May 27 03:21:20.887880 kernel: Warning: unable to open an initial console. May 27 03:21:20.887888 kernel: Freeing unused kernel image (initmem) memory: 54416K May 27 03:21:20.887896 kernel: Write protecting the kernel read-only data: 24576k May 27 03:21:20.887905 kernel: Freeing unused kernel image (rodata/data gap) memory: 288K May 27 03:21:20.887912 kernel: Run /init as init process May 27 03:21:20.887920 kernel: with arguments: May 27 03:21:20.887928 kernel: /init May 27 03:21:20.887935 kernel: with environment: May 27 03:21:20.887943 kernel: HOME=/ May 27 03:21:20.887950 kernel: TERM=linux May 27 03:21:20.887958 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 27 03:21:20.887967 systemd[1]: Successfully made /usr/ read-only. May 27 03:21:20.887979 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 27 03:21:20.887988 systemd[1]: Detected virtualization kvm. 
May 27 03:21:20.887996 systemd[1]: Detected architecture x86-64. May 27 03:21:20.888004 systemd[1]: Running in initrd. May 27 03:21:20.888012 systemd[1]: No hostname configured, using default hostname. May 27 03:21:20.888020 systemd[1]: Hostname set to . May 27 03:21:20.888028 systemd[1]: Initializing machine ID from VM UUID. May 27 03:21:20.888037 systemd[1]: Queued start job for default target initrd.target. May 27 03:21:20.888047 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 27 03:21:20.888064 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 27 03:21:20.888079 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 27 03:21:20.888090 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 27 03:21:20.888099 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 27 03:21:20.888108 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 27 03:21:20.888121 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 27 03:21:20.888130 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 27 03:21:20.888138 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 27 03:21:20.888147 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 27 03:21:20.888156 systemd[1]: Reached target paths.target - Path Units. May 27 03:21:20.888165 systemd[1]: Reached target slices.target - Slice Units. May 27 03:21:20.888176 systemd[1]: Reached target swap.target - Swaps. May 27 03:21:20.888186 systemd[1]: Reached target timers.target - Timer Units. 
May 27 03:21:20.888196 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. May 27 03:21:20.888204 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 27 03:21:20.888213 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 27 03:21:20.888221 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. May 27 03:21:20.888241 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 27 03:21:20.888250 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 27 03:21:20.888258 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 27 03:21:20.888266 systemd[1]: Reached target sockets.target - Socket Units. May 27 03:21:20.888274 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 27 03:21:20.888284 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 27 03:21:20.888292 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 27 03:21:20.888320 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). May 27 03:21:20.888329 systemd[1]: Starting systemd-fsck-usr.service... May 27 03:21:20.888338 systemd[1]: Starting systemd-journald.service - Journal Service... May 27 03:21:20.888346 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 27 03:21:20.888354 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 27 03:21:20.888362 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 27 03:21:20.888372 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 27 03:21:20.888381 systemd[1]: Finished systemd-fsck-usr.service. May 27 03:21:20.888411 systemd-journald[215]: Collecting audit messages is disabled. 
May 27 03:21:20.888435 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 27 03:21:20.888443 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 27 03:21:20.888451 kernel: Bridge firewalling registered May 27 03:21:20.888459 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 27 03:21:20.888468 systemd-journald[215]: Journal started May 27 03:21:20.888489 systemd-journald[215]: Runtime Journal (/run/log/journal/fa4b2c8e4d494d58ab310ace5e8dc9ac) is 4.8M, max 38.6M, 33.7M free. May 27 03:21:20.843474 systemd-modules-load[217]: Inserted module 'overlay' May 27 03:21:20.924281 systemd[1]: Started systemd-journald.service - Journal Service. May 27 03:21:20.884383 systemd-modules-load[217]: Inserted module 'br_netfilter' May 27 03:21:20.924921 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 27 03:21:20.925759 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 27 03:21:20.928570 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 27 03:21:20.931954 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 27 03:21:20.937937 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 27 03:21:20.940419 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 27 03:21:20.946390 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 27 03:21:20.959636 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 27 03:21:20.966470 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
May 27 03:21:20.968598 systemd-tmpfiles[234]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. May 27 03:21:20.969713 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 27 03:21:20.973567 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 27 03:21:20.980412 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 27 03:21:20.987087 dracut-cmdline[251]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=f6c186658a19d5a08471ef76df75f82494b37b46908f9237b2c3cf497da860c6 May 27 03:21:21.014825 systemd-resolved[255]: Positive Trust Anchors: May 27 03:21:21.015480 systemd-resolved[255]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 27 03:21:21.015511 systemd-resolved[255]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 27 03:21:21.021199 systemd-resolved[255]: Defaulting to hostname 'linux'. May 27 03:21:21.022127 systemd[1]: Started systemd-resolved.service - Network Name Resolution. 
May 27 03:21:21.022708 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 27 03:21:21.065381 kernel: SCSI subsystem initialized May 27 03:21:21.077366 kernel: Loading iSCSI transport class v2.0-870. May 27 03:21:21.089335 kernel: iscsi: registered transport (tcp) May 27 03:21:21.114396 kernel: iscsi: registered transport (qla4xxx) May 27 03:21:21.114487 kernel: QLogic iSCSI HBA Driver May 27 03:21:21.137581 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 27 03:21:21.153247 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 27 03:21:21.156615 systemd[1]: Reached target network-pre.target - Preparation for Network. May 27 03:21:21.207426 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 27 03:21:21.209516 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 27 03:21:21.273383 kernel: raid6: avx2x4 gen() 26915 MB/s May 27 03:21:21.291402 kernel: raid6: avx2x2 gen() 29021 MB/s May 27 03:21:21.308547 kernel: raid6: avx2x1 gen() 25445 MB/s May 27 03:21:21.308631 kernel: raid6: using algorithm avx2x2 gen() 29021 MB/s May 27 03:21:21.327423 kernel: raid6: .... xor() 19691 MB/s, rmw enabled May 27 03:21:21.327516 kernel: raid6: using avx2x2 recovery algorithm May 27 03:21:21.346372 kernel: xor: automatically using best checksumming function avx May 27 03:21:21.506342 kernel: Btrfs loaded, zoned=no, fsverity=no May 27 03:21:21.511940 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 27 03:21:21.513763 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 27 03:21:21.536832 systemd-udevd[464]: Using default interface naming scheme 'v255'. May 27 03:21:21.541048 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
May 27 03:21:21.544062 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 27 03:21:21.565549 dracut-pre-trigger[476]: rd.md=0: removing MD RAID activation May 27 03:21:21.587364 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 27 03:21:21.589076 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 27 03:21:21.641551 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 27 03:21:21.648581 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 27 03:21:21.754341 kernel: cryptd: max_cpu_qlen set to 1000 May 27 03:21:21.757354 kernel: virtio_scsi virtio5: 2/0/0 default/read/poll queues May 27 03:21:21.778342 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 May 27 03:21:21.788465 kernel: ACPI: bus type USB registered May 27 03:21:21.788518 kernel: usbcore: registered new interface driver usbfs May 27 03:21:21.789645 kernel: usbcore: registered new interface driver hub May 27 03:21:21.791881 kernel: usbcore: registered new device driver usb May 27 03:21:21.793032 kernel: scsi host0: Virtio SCSI HBA May 27 03:21:21.804398 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 27 03:21:21.805203 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 27 03:21:21.807316 kernel: libata version 3.00 loaded. May 27 03:21:21.808428 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 27 03:21:21.811980 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
May 27 03:21:21.838075 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 May 27 03:21:21.838146 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller May 27 03:21:21.838538 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 May 27 03:21:21.838816 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 May 27 03:21:21.839113 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller May 27 03:21:21.839417 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 May 27 03:21:21.839671 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed May 27 03:21:21.839871 kernel: hub 1-0:1.0: USB hub found May 27 03:21:21.839989 kernel: hub 1-0:1.0: 4 ports detected May 27 03:21:21.840113 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. May 27 03:21:21.840235 kernel: hub 2-0:1.0: USB hub found May 27 03:21:21.840370 kernel: hub 2-0:1.0: 4 ports detected May 27 03:21:21.838466 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. 
May 27 03:21:21.859333 kernel: AES CTR mode by8 optimization enabled May 27 03:21:21.862349 kernel: ahci 0000:00:1f.2: version 3.0 May 27 03:21:21.862518 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 May 27 03:21:21.865856 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode May 27 03:21:21.865983 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) May 27 03:21:21.866068 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only May 27 03:21:21.867321 kernel: sd 0:0:0:0: Power-on or device reset occurred May 27 03:21:21.867468 kernel: sd 0:0:0:0: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) May 27 03:21:21.867555 kernel: sd 0:0:0:0: [sda] Write Protect is off May 27 03:21:21.867640 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08 May 27 03:21:21.867722 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA May 27 03:21:21.876352 kernel: scsi host1: ahci May 27 03:21:21.876875 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 27 03:21:21.876887 kernel: GPT:17805311 != 80003071 May 27 03:21:21.876896 kernel: GPT:Alternate GPT header not at the end of the disk. May 27 03:21:21.876905 kernel: GPT:17805311 != 80003071 May 27 03:21:21.876914 kernel: GPT: Use GNU Parted to correct GPT errors. 
May 27 03:21:21.876926 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 27 03:21:21.877325 kernel: scsi host2: ahci May 27 03:21:21.877425 kernel: sd 0:0:0:0: [sda] Attached SCSI disk May 27 03:21:21.883373 kernel: scsi host3: ahci May 27 03:21:21.884368 kernel: scsi host4: ahci May 27 03:21:21.884470 kernel: scsi host5: ahci May 27 03:21:21.886331 kernel: scsi host6: ahci May 27 03:21:21.886436 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a100 irq 51 lpm-pol 0 May 27 03:21:21.886446 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a180 irq 51 lpm-pol 0 May 27 03:21:21.886455 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a200 irq 51 lpm-pol 0 May 27 03:21:21.886464 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a280 irq 51 lpm-pol 0 May 27 03:21:21.886473 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a300 irq 51 lpm-pol 0 May 27 03:21:21.886482 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a380 irq 51 lpm-pol 0 May 27 03:21:21.951020 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 27 03:21:21.962091 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. May 27 03:21:21.972489 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. May 27 03:21:21.979827 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. May 27 03:21:21.980470 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. May 27 03:21:21.993296 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. May 27 03:21:21.995376 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 27 03:21:22.022002 disk-uuid[630]: Primary Header is updated. May 27 03:21:22.022002 disk-uuid[630]: Secondary Entries is updated. 
May 27 03:21:22.022002 disk-uuid[630]: Secondary Header is updated. May 27 03:21:22.036373 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 27 03:21:22.061379 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 27 03:21:22.070656 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd May 27 03:21:22.192354 kernel: ata6: SATA link down (SStatus 0 SControl 300) May 27 03:21:22.196108 kernel: ata4: SATA link down (SStatus 0 SControl 300) May 27 03:21:22.196138 kernel: ata2: SATA link down (SStatus 0 SControl 300) May 27 03:21:22.196148 kernel: ata3: SATA link down (SStatus 0 SControl 300) May 27 03:21:22.197832 kernel: ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300) May 27 03:21:22.199018 kernel: ata5: SATA link down (SStatus 0 SControl 300) May 27 03:21:22.202092 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 May 27 03:21:22.202122 kernel: ata1.00: applying bridge limits May 27 03:21:22.203151 kernel: ata1.00: configured for UDMA/100 May 27 03:21:22.207326 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 May 27 03:21:22.229330 kernel: hid: raw HID events driver (C) Jiri Kosina May 27 03:21:22.236705 kernel: usbcore: registered new interface driver usbhid May 27 03:21:22.236729 kernel: usbhid: USB HID core driver May 27 03:21:22.241328 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input4 May 27 03:21:22.244966 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 May 27 03:21:22.245099 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray May 27 03:21:22.245199 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 May 27 03:21:22.254337 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0 May 27 03:21:22.500671 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. 
May 27 03:21:22.504066 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 27 03:21:22.506179 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 27 03:21:22.507498 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 27 03:21:22.511520 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 27 03:21:22.553808 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 27 03:21:23.062790 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 27 03:21:23.062877 disk-uuid[631]: The operation has completed successfully. May 27 03:21:23.138058 systemd[1]: disk-uuid.service: Deactivated successfully. May 27 03:21:23.138211 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 27 03:21:23.171000 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 27 03:21:23.186937 sh[664]: Success May 27 03:21:23.222264 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 27 03:21:23.222349 kernel: device-mapper: uevent: version 1.0.3 May 27 03:21:23.226339 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev May 27 03:21:23.238365 kernel: device-mapper: verity: sha256 using shash "sha256-ni" May 27 03:21:23.286038 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 27 03:21:23.290415 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 27 03:21:23.302849 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
May 27 03:21:23.319361 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' May 27 03:21:23.319430 kernel: BTRFS: device fsid f0f66fe8-3990-49eb-980e-559a3dfd3522 devid 1 transid 40 /dev/mapper/usr (254:0) scanned by mount (676) May 27 03:21:23.324908 kernel: BTRFS info (device dm-0): first mount of filesystem f0f66fe8-3990-49eb-980e-559a3dfd3522 May 27 03:21:23.324978 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm May 27 03:21:23.326678 kernel: BTRFS info (device dm-0): using free-space-tree May 27 03:21:23.338744 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 27 03:21:23.340365 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. May 27 03:21:23.342195 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 27 03:21:23.344511 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 27 03:21:23.346380 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... May 27 03:21:23.384355 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 (8:6) scanned by mount (717) May 27 03:21:23.388339 kernel: BTRFS info (device sda6): first mount of filesystem fd7bb961-7a0f-4c90-a609-3bffeb956d05 May 27 03:21:23.388404 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm May 27 03:21:23.388427 kernel: BTRFS info (device sda6): using free-space-tree May 27 03:21:23.404370 kernel: BTRFS info (device sda6): last unmount of filesystem fd7bb961-7a0f-4c90-a609-3bffeb956d05 May 27 03:21:23.405646 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 27 03:21:23.406933 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 27 03:21:23.451812 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
May 27 03:21:23.454251 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 27 03:21:23.504773 systemd-networkd[846]: lo: Link UP May 27 03:21:23.505373 systemd-networkd[846]: lo: Gained carrier May 27 03:21:23.508419 systemd-networkd[846]: Enumeration completed May 27 03:21:23.508500 systemd[1]: Started systemd-networkd.service - Network Configuration. May 27 03:21:23.509063 systemd[1]: Reached target network.target - Network. May 27 03:21:23.510852 systemd-networkd[846]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 27 03:21:23.510856 systemd-networkd[846]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 27 03:21:23.511953 systemd-networkd[846]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 27 03:21:23.511959 systemd-networkd[846]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. May 27 03:21:23.512765 systemd-networkd[846]: eth0: Link UP May 27 03:21:23.512769 systemd-networkd[846]: eth0: Gained carrier May 27 03:21:23.512777 systemd-networkd[846]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 27 03:21:23.517977 systemd-networkd[846]: eth1: Link UP May 27 03:21:23.517980 systemd-networkd[846]: eth1: Gained carrier May 27 03:21:23.517992 systemd-networkd[846]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
May 27 03:21:23.544543 ignition[792]: Ignition 2.21.0
May 27 03:21:23.545297 ignition[792]: Stage: fetch-offline
May 27 03:21:23.545755 ignition[792]: no configs at "/usr/lib/ignition/base.d"
May 27 03:21:23.546210 ignition[792]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
May 27 03:21:23.546893 ignition[792]: parsed url from cmdline: ""
May 27 03:21:23.546930 ignition[792]: no config URL provided
May 27 03:21:23.547366 ignition[792]: reading system config file "/usr/lib/ignition/user.ign"
May 27 03:21:23.547374 ignition[792]: no config at "/usr/lib/ignition/user.ign"
May 27 03:21:23.549372 systemd-networkd[846]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1
May 27 03:21:23.547378 ignition[792]: failed to fetch config: resource requires networking
May 27 03:21:23.547520 ignition[792]: Ignition finished successfully
May 27 03:21:23.550660 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 27 03:21:23.553400 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
May 27 03:21:23.573413 systemd-networkd[846]: eth0: DHCPv4 address 157.180.65.55/32, gateway 172.31.1.1 acquired from 172.31.1.1
May 27 03:21:23.584899 ignition[855]: Ignition 2.21.0
May 27 03:21:23.584920 ignition[855]: Stage: fetch
May 27 03:21:23.585103 ignition[855]: no configs at "/usr/lib/ignition/base.d"
May 27 03:21:23.585115 ignition[855]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
May 27 03:21:23.585204 ignition[855]: parsed url from cmdline: ""
May 27 03:21:23.585208 ignition[855]: no config URL provided
May 27 03:21:23.585215 ignition[855]: reading system config file "/usr/lib/ignition/user.ign"
May 27 03:21:23.585238 ignition[855]: no config at "/usr/lib/ignition/user.ign"
May 27 03:21:23.585278 ignition[855]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1
May 27 03:21:23.589868 ignition[855]: GET result: OK
May 27 03:21:23.589983 ignition[855]: parsing config with SHA512: 6976a5738f582b62864b283f2bb345fc53d8680d6c54c9ada443b2d3bf162e9379b09751802ad959fa33a89306100183a25f0f4eb0c568934f3b58a4881225a9
May 27 03:21:23.596783 unknown[855]: fetched base config from "system"
May 27 03:21:23.596804 unknown[855]: fetched base config from "system"
May 27 03:21:23.596814 unknown[855]: fetched user config from "hetzner"
May 27 03:21:23.597404 ignition[855]: fetch: fetch complete
May 27 03:21:23.599572 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
May 27 03:21:23.597414 ignition[855]: fetch: fetch passed
May 27 03:21:23.601083 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 27 03:21:23.597491 ignition[855]: Ignition finished successfully
May 27 03:21:23.637171 ignition[861]: Ignition 2.21.0
May 27 03:21:23.637191 ignition[861]: Stage: kargs
May 27 03:21:23.637754 ignition[861]: no configs at "/usr/lib/ignition/base.d"
May 27 03:21:23.637767 ignition[861]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
May 27 03:21:23.639638 ignition[861]: kargs: kargs passed
May 27 03:21:23.639697 ignition[861]: Ignition finished successfully
May 27 03:21:23.641870 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 27 03:21:23.644787 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 27 03:21:23.666134 ignition[868]: Ignition 2.21.0
May 27 03:21:23.666149 ignition[868]: Stage: disks
May 27 03:21:23.666332 ignition[868]: no configs at "/usr/lib/ignition/base.d"
May 27 03:21:23.666341 ignition[868]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
May 27 03:21:23.670186 ignition[868]: disks: disks passed
May 27 03:21:23.670253 ignition[868]: Ignition finished successfully
May 27 03:21:23.672004 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 27 03:21:23.673770 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 27 03:21:23.675160 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 27 03:21:23.676103 systemd[1]: Reached target local-fs.target - Local File Systems.
May 27 03:21:23.677828 systemd[1]: Reached target sysinit.target - System Initialization.
May 27 03:21:23.679133 systemd[1]: Reached target basic.target - Basic System.
May 27 03:21:23.681823 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 27 03:21:23.707829 systemd-fsck[877]: ROOT: clean, 15/1628000 files, 120826/1617920 blocks
May 27 03:21:23.710617 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 27 03:21:23.712426 systemd[1]: Mounting sysroot.mount - /sysroot...
May 27 03:21:23.842340 kernel: EXT4-fs (sda9): mounted filesystem 18301365-b380-45d7-9677-e42472a122bc r/w with ordered data mode. Quota mode: none.
May 27 03:21:23.843644 systemd[1]: Mounted sysroot.mount - /sysroot.
May 27 03:21:23.844522 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 27 03:21:23.847096 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 27 03:21:23.850361 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 27 03:21:23.853407 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
May 27 03:21:23.856853 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 27 03:21:23.856908 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 27 03:21:23.864451 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 27 03:21:23.871657 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 27 03:21:23.879343 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 (8:6) scanned by mount (885)
May 27 03:21:23.896068 kernel: BTRFS info (device sda6): first mount of filesystem fd7bb961-7a0f-4c90-a609-3bffeb956d05
May 27 03:21:23.896138 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
May 27 03:21:23.896160 kernel: BTRFS info (device sda6): using free-space-tree
May 27 03:21:23.917581 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 27 03:21:23.929583 coreos-metadata[887]: May 27 03:21:23.929 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1
May 27 03:21:23.931987 coreos-metadata[887]: May 27 03:21:23.931 INFO Fetch successful
May 27 03:21:23.933723 coreos-metadata[887]: May 27 03:21:23.933 INFO wrote hostname ci-4344-0-0-e-876c439243 to /sysroot/etc/hostname
May 27 03:21:23.936393 initrd-setup-root[912]: cut: /sysroot/etc/passwd: No such file or directory
May 27 03:21:23.939440 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
May 27 03:21:23.942377 initrd-setup-root[920]: cut: /sysroot/etc/group: No such file or directory
May 27 03:21:23.946394 initrd-setup-root[927]: cut: /sysroot/etc/shadow: No such file or directory
May 27 03:21:23.950059 initrd-setup-root[934]: cut: /sysroot/etc/gshadow: No such file or directory
May 27 03:21:24.036686 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 27 03:21:24.038763 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 27 03:21:24.042638 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 27 03:21:24.068351 kernel: BTRFS info (device sda6): last unmount of filesystem fd7bb961-7a0f-4c90-a609-3bffeb956d05
May 27 03:21:24.090118 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 27 03:21:24.101572 ignition[1006]: INFO : Ignition 2.21.0
May 27 03:21:24.101572 ignition[1006]: INFO : Stage: mount
May 27 03:21:24.103388 ignition[1006]: INFO : no configs at "/usr/lib/ignition/base.d"
May 27 03:21:24.103388 ignition[1006]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
May 27 03:21:24.105585 ignition[1006]: INFO : mount: mount passed
May 27 03:21:24.105585 ignition[1006]: INFO : Ignition finished successfully
May 27 03:21:24.107731 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 27 03:21:24.110505 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 27 03:21:24.318248 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 27 03:21:24.320492 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 27 03:21:24.355389 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 (8:6) scanned by mount (1017)
May 27 03:21:24.362152 kernel: BTRFS info (device sda6): first mount of filesystem fd7bb961-7a0f-4c90-a609-3bffeb956d05
May 27 03:21:24.362212 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
May 27 03:21:24.365806 kernel: BTRFS info (device sda6): using free-space-tree
May 27 03:21:24.376031 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 27 03:21:24.412872 ignition[1033]: INFO : Ignition 2.21.0
May 27 03:21:24.412872 ignition[1033]: INFO : Stage: files
May 27 03:21:24.415412 ignition[1033]: INFO : no configs at "/usr/lib/ignition/base.d"
May 27 03:21:24.415412 ignition[1033]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
May 27 03:21:24.415412 ignition[1033]: DEBUG : files: compiled without relabeling support, skipping
May 27 03:21:24.417738 ignition[1033]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 27 03:21:24.417738 ignition[1033]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 27 03:21:24.420800 ignition[1033]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 27 03:21:24.421797 ignition[1033]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 27 03:21:24.421797 ignition[1033]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 27 03:21:24.421426 unknown[1033]: wrote ssh authorized keys file for user: core
May 27 03:21:24.425041 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
May 27 03:21:24.426239 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
May 27 03:21:24.969594 systemd-networkd[846]: eth1: Gained IPv6LL
May 27 03:21:25.097612 systemd-networkd[846]: eth0: Gained IPv6LL
May 27 03:21:25.672414 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 27 03:21:28.503905 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
May 27 03:21:28.503905 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
May 27 03:21:28.510055 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
May 27 03:21:28.510055 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
May 27 03:21:28.510055 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 27 03:21:28.510055 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 27 03:21:28.510055 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 27 03:21:28.510055 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 27 03:21:28.510055 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 27 03:21:28.510055 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 27 03:21:28.510055 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 27 03:21:28.510055 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
May 27 03:21:28.527467 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
May 27 03:21:28.527467 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
May 27 03:21:28.527467 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1
May 27 03:21:29.404719 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
May 27 03:21:32.805332 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
May 27 03:21:32.805332 ignition[1033]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
May 27 03:21:32.810189 ignition[1033]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 27 03:21:32.811955 ignition[1033]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 27 03:21:32.811955 ignition[1033]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
May 27 03:21:32.811955 ignition[1033]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
May 27 03:21:32.811955 ignition[1033]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
May 27 03:21:32.811955 ignition[1033]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
May 27 03:21:32.811955 ignition[1033]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
May 27 03:21:32.811955 ignition[1033]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service"
May 27 03:21:32.811955 ignition[1033]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service"
May 27 03:21:32.811955 ignition[1033]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json"
May 27 03:21:32.811955 ignition[1033]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 27 03:21:32.811955 ignition[1033]: INFO : files: files passed
May 27 03:21:32.811955 ignition[1033]: INFO : Ignition finished successfully
May 27 03:21:32.812701 systemd[1]: Finished ignition-files.service - Ignition (files).
May 27 03:21:32.817431 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 27 03:21:32.829169 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 27 03:21:32.834564 systemd[1]: ignition-quench.service: Deactivated successfully.
May 27 03:21:32.834730 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 27 03:21:32.844298 initrd-setup-root-after-ignition[1064]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 27 03:21:32.844298 initrd-setup-root-after-ignition[1064]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 27 03:21:32.846652 initrd-setup-root-after-ignition[1068]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 27 03:21:32.848747 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 27 03:21:32.851144 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 27 03:21:32.854057 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 27 03:21:32.918146 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 27 03:21:32.918291 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 27 03:21:32.920610 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 27 03:21:32.922171 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 27 03:21:32.924180 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 27 03:21:32.925284 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 27 03:21:32.951931 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 27 03:21:32.954938 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 27 03:21:32.983845 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 27 03:21:32.985509 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 27 03:21:32.987664 systemd[1]: Stopped target timers.target - Timer Units.
May 27 03:21:32.989789 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 27 03:21:32.990053 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 27 03:21:32.992165 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 27 03:21:32.993660 systemd[1]: Stopped target basic.target - Basic System.
May 27 03:21:32.995777 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 27 03:21:32.998147 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 27 03:21:33.000788 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 27 03:21:33.003217 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
May 27 03:21:33.005940 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 27 03:21:33.008692 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 27 03:21:33.011213 systemd[1]: Stopped target sysinit.target - System Initialization.
May 27 03:21:33.013777 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 27 03:21:33.016427 systemd[1]: Stopped target swap.target - Swaps.
May 27 03:21:33.018803 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 27 03:21:33.019066 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 27 03:21:33.021893 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 27 03:21:33.023642 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 27 03:21:33.026368 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 27 03:21:33.026893 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 27 03:21:33.028732 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 27 03:21:33.029051 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 27 03:21:33.032158 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 27 03:21:33.032499 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 27 03:21:33.034987 systemd[1]: ignition-files.service: Deactivated successfully.
May 27 03:21:33.035258 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 27 03:21:33.037513 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
May 27 03:21:33.037758 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
May 27 03:21:33.042666 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 27 03:21:33.056589 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 27 03:21:33.059581 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 27 03:21:33.059893 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 27 03:21:33.063984 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 27 03:21:33.064259 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 27 03:21:33.080864 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 27 03:21:33.082032 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 27 03:21:33.084613 ignition[1088]: INFO : Ignition 2.21.0
May 27 03:21:33.084613 ignition[1088]: INFO : Stage: umount
May 27 03:21:33.084613 ignition[1088]: INFO : no configs at "/usr/lib/ignition/base.d"
May 27 03:21:33.084613 ignition[1088]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
May 27 03:21:33.099486 ignition[1088]: INFO : umount: umount passed
May 27 03:21:33.099486 ignition[1088]: INFO : Ignition finished successfully
May 27 03:21:33.092177 systemd[1]: ignition-mount.service: Deactivated successfully.
May 27 03:21:33.092402 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 27 03:21:33.097250 systemd[1]: ignition-disks.service: Deactivated successfully.
May 27 03:21:33.097388 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 27 03:21:33.099033 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 27 03:21:33.099089 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 27 03:21:33.102573 systemd[1]: ignition-fetch.service: Deactivated successfully.
May 27 03:21:33.102629 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
May 27 03:21:33.103498 systemd[1]: Stopped target network.target - Network.
May 27 03:21:33.104807 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 27 03:21:33.104895 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 27 03:21:33.106143 systemd[1]: Stopped target paths.target - Path Units.
May 27 03:21:33.107294 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 27 03:21:33.111373 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 27 03:21:33.112907 systemd[1]: Stopped target slices.target - Slice Units.
May 27 03:21:33.114525 systemd[1]: Stopped target sockets.target - Socket Units.
May 27 03:21:33.115954 systemd[1]: iscsid.socket: Deactivated successfully.
May 27 03:21:33.116011 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 27 03:21:33.117746 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 27 03:21:33.117801 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 27 03:21:33.119117 systemd[1]: ignition-setup.service: Deactivated successfully.
May 27 03:21:33.119199 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 27 03:21:33.120944 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 27 03:21:33.121022 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 27 03:21:33.122615 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 27 03:21:33.124175 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 27 03:21:33.127971 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 27 03:21:33.129760 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 27 03:21:33.130344 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 27 03:21:33.132157 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 27 03:21:33.132510 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 27 03:21:33.137917 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
May 27 03:21:33.138203 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 27 03:21:33.138362 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 27 03:21:33.141462 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
May 27 03:21:33.142702 systemd[1]: Stopped target network-pre.target - Preparation for Network.
May 27 03:21:33.145144 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 27 03:21:33.145197 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 27 03:21:33.146853 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 27 03:21:33.146916 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 27 03:21:33.149731 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 27 03:21:33.151728 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 27 03:21:33.151790 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 27 03:21:33.155528 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 27 03:21:33.155585 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 27 03:21:33.158425 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 27 03:21:33.158477 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 27 03:21:33.159652 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 27 03:21:33.159704 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 27 03:21:33.164529 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 27 03:21:33.167117 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 27 03:21:33.167197 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
May 27 03:21:33.172935 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 27 03:21:33.173902 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 27 03:21:33.175108 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 27 03:21:33.175153 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 27 03:21:33.177655 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 27 03:21:33.177712 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 27 03:21:33.179428 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 27 03:21:33.179487 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 27 03:21:33.180421 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 27 03:21:33.180470 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 27 03:21:33.182053 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 27 03:21:33.182126 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 27 03:21:33.184901 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 27 03:21:33.187199 systemd[1]: systemd-network-generator.service: Deactivated successfully.
May 27 03:21:33.187294 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
May 27 03:21:33.191056 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 27 03:21:33.191115 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 27 03:21:33.194076 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
May 27 03:21:33.194132 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 27 03:21:33.196127 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 27 03:21:33.196180 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 27 03:21:33.197444 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 27 03:21:33.197502 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 27 03:21:33.206623 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
May 27 03:21:33.206693 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully.
May 27 03:21:33.206782 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
May 27 03:21:33.206851 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
May 27 03:21:33.207561 systemd[1]: network-cleanup.service: Deactivated successfully.
May 27 03:21:33.207676 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 27 03:21:33.210659 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 27 03:21:33.210798 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 27 03:21:33.213369 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 27 03:21:33.215713 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 27 03:21:33.238165 systemd[1]: Switching root.
May 27 03:21:33.286265 systemd-journald[215]: Journal stopped
May 27 03:21:34.346573 systemd-journald[215]: Received SIGTERM from PID 1 (systemd).
May 27 03:21:34.346625 kernel: SELinux: policy capability network_peer_controls=1
May 27 03:21:34.346641 kernel: SELinux: policy capability open_perms=1
May 27 03:21:34.346656 kernel: SELinux: policy capability extended_socket_class=1
May 27 03:21:34.346667 kernel: SELinux: policy capability always_check_network=0
May 27 03:21:34.346679 kernel: SELinux: policy capability cgroup_seclabel=1
May 27 03:21:34.346689 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 27 03:21:34.346698 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 27 03:21:34.346707 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 27 03:21:34.346716 kernel: SELinux: policy capability userspace_initial_context=0
May 27 03:21:34.346725 kernel: audit: type=1403 audit(1748316093.471:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 27 03:21:34.346741 systemd[1]: Successfully loaded SELinux policy in 51.807ms.
May 27 03:21:34.346757 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 12.195ms.
May 27 03:21:34.346769 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 27 03:21:34.346780 systemd[1]: Detected virtualization kvm.
May 27 03:21:34.346789 systemd[1]: Detected architecture x86-64.
May 27 03:21:34.346799 systemd[1]: Detected first boot.
May 27 03:21:34.346809 systemd[1]: Hostname set to .
May 27 03:21:34.346819 systemd[1]: Initializing machine ID from VM UUID.
May 27 03:21:34.346829 zram_generator::config[1131]: No configuration found.
May 27 03:21:34.346841 kernel: Guest personality initialized and is inactive
May 27 03:21:34.346851 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
May 27 03:21:34.346860 kernel: Initialized host personality
May 27 03:21:34.346869 kernel: NET: Registered PF_VSOCK protocol family
May 27 03:21:34.346878 systemd[1]: Populated /etc with preset unit settings.
May 27 03:21:34.346888 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
May 27 03:21:34.346898 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 27 03:21:34.351016 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 27 03:21:34.351034 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 27 03:21:34.351045 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 27 03:21:34.351058 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 27 03:21:34.351068 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 27 03:21:34.351078 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 27 03:21:34.351088 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 27 03:21:34.351098 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 27 03:21:34.351108 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 27 03:21:34.351120 systemd[1]: Created slice user.slice - User and Session Slice.
May 27 03:21:34.351131 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 27 03:21:34.351142 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 27 03:21:34.351152 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 27 03:21:34.351162 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 27 03:21:34.351173 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 27 03:21:34.351183 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 27 03:21:34.351193 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
May 27 03:21:34.351203 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 27 03:21:34.351213 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 27 03:21:34.351233 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 27 03:21:34.351243 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 27 03:21:34.351253 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 27 03:21:34.351262 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 27 03:21:34.351274 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 27 03:21:34.351284 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 27 03:21:34.351294 systemd[1]: Reached target slices.target - Slice Units.
May 27 03:21:34.351318 systemd[1]: Reached target swap.target - Swaps.
May 27 03:21:34.351328 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 27 03:21:34.351337 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 27 03:21:34.351347 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
May 27 03:21:34.351357 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 27 03:21:34.351367 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 27 03:21:34.351384 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 27 03:21:34.351396 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 27 03:21:34.351406 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 27 03:21:34.351416 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 27 03:21:34.351426 systemd[1]: Mounting media.mount - External Media Directory...
May 27 03:21:34.351436 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 27 03:21:34.351446 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 27 03:21:34.351455 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 27 03:21:34.351465 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 27 03:21:34.351477 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 27 03:21:34.351486 systemd[1]: Reached target machines.target - Containers.
May 27 03:21:34.351496 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 27 03:21:34.351506 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 27 03:21:34.351516 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 27 03:21:34.351526 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 27 03:21:34.351537 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 27 03:21:34.351547 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 27 03:21:34.351557 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 27 03:21:34.351568 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 27 03:21:34.351577 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 27 03:21:34.351587 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 27 03:21:34.351598 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 27 03:21:34.351608 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 27 03:21:34.351618 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 27 03:21:34.351628 systemd[1]: Stopped systemd-fsck-usr.service.
May 27 03:21:34.351638 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 27 03:21:34.351649 systemd[1]: Starting systemd-journald.service - Journal Service...
May 27 03:21:34.351660 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 27 03:21:34.351670 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 27 03:21:34.351680 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 27 03:21:34.351690 kernel: ACPI: bus type drm_connector registered
May 27 03:21:34.351701 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
May 27 03:21:34.351711 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 27 03:21:34.351721 systemd[1]: verity-setup.service: Deactivated successfully.
May 27 03:21:34.351731 kernel: fuse: init (API version 7.41)
May 27 03:21:34.351741 systemd[1]: Stopped verity-setup.service.
May 27 03:21:34.351752 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 27 03:21:34.351762 kernel: loop: module loaded
May 27 03:21:34.351772 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 27 03:21:34.351783 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 27 03:21:34.351793 systemd[1]: Mounted media.mount - External Media Directory.
May 27 03:21:34.351803 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 27 03:21:34.351814 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 27 03:21:34.351823 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 27 03:21:34.351834 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 27 03:21:34.351845 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 27 03:21:34.351854 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 27 03:21:34.351864 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 27 03:21:34.351874 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 27 03:21:34.351883 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 27 03:21:34.351893 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 27 03:21:34.351903 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 27 03:21:34.351913 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 27 03:21:34.351924 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 27 03:21:34.351933 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 27 03:21:34.351963 systemd-journald[1212]: Collecting audit messages is disabled.
May 27 03:21:34.351985 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 27 03:21:34.351995 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 27 03:21:34.352005 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 27 03:21:34.352016 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 27 03:21:34.352027 systemd-journald[1212]: Journal started
May 27 03:21:34.352048 systemd-journald[1212]: Runtime Journal (/run/log/journal/fa4b2c8e4d494d58ab310ace5e8dc9ac) is 4.8M, max 38.6M, 33.7M free.
May 27 03:21:34.014751 systemd[1]: Queued start job for default target multi-user.target.
May 27 03:21:34.026263 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
May 27 03:21:34.026772 systemd[1]: systemd-journald.service: Deactivated successfully.
May 27 03:21:34.355326 systemd[1]: Started systemd-journald.service - Journal Service.
May 27 03:21:34.355743 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 27 03:21:34.356569 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 27 03:21:34.357373 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
May 27 03:21:34.366526 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 27 03:21:34.368441 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 27 03:21:34.373428 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 27 03:21:34.373936 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 27 03:21:34.373965 systemd[1]: Reached target local-fs.target - Local File Systems.
May 27 03:21:34.377744 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
May 27 03:21:34.385390 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 27 03:21:34.385971 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 27 03:21:34.387758 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 27 03:21:34.388840 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 27 03:21:34.389360 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 27 03:21:34.391039 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 27 03:21:34.392608 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 27 03:21:34.394429 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 27 03:21:34.396494 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 27 03:21:34.400617 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 27 03:21:34.402724 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 27 03:21:34.403317 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 27 03:21:34.404282 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 27 03:21:34.417892 systemd-journald[1212]: Time spent on flushing to /var/log/journal/fa4b2c8e4d494d58ab310ace5e8dc9ac is 60.002ms for 1165 entries.
May 27 03:21:34.417892 systemd-journald[1212]: System Journal (/var/log/journal/fa4b2c8e4d494d58ab310ace5e8dc9ac) is 8M, max 584.8M, 576.8M free.
May 27 03:21:34.499465 systemd-journald[1212]: Received client request to flush runtime journal.
May 27 03:21:34.499503 kernel: loop0: detected capacity change from 0 to 113872
May 27 03:21:34.499519 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 27 03:21:34.420660 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 27 03:21:34.421882 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 27 03:21:34.424208 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
May 27 03:21:34.464563 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 27 03:21:34.465707 systemd-tmpfiles[1257]: ACLs are not supported, ignoring.
May 27 03:21:34.465719 systemd-tmpfiles[1257]: ACLs are not supported, ignoring.
May 27 03:21:34.475148 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 27 03:21:34.483766 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 27 03:21:34.498323 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
May 27 03:21:34.503469 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 27 03:21:34.518367 kernel: loop1: detected capacity change from 0 to 221472
May 27 03:21:34.537619 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 27 03:21:34.542137 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 27 03:21:34.564486 kernel: loop2: detected capacity change from 0 to 8
May 27 03:21:34.580881 systemd-tmpfiles[1277]: ACLs are not supported, ignoring.
May 27 03:21:34.582331 kernel: loop3: detected capacity change from 0 to 146240
May 27 03:21:34.581242 systemd-tmpfiles[1277]: ACLs are not supported, ignoring.
May 27 03:21:34.586007 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 27 03:21:34.640344 kernel: loop4: detected capacity change from 0 to 113872
May 27 03:21:34.666745 kernel: loop5: detected capacity change from 0 to 221472
May 27 03:21:34.697390 kernel: loop6: detected capacity change from 0 to 8
May 27 03:21:34.701317 kernel: loop7: detected capacity change from 0 to 146240
May 27 03:21:34.735312 (sd-merge)[1282]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'.
May 27 03:21:34.736148 (sd-merge)[1282]: Merged extensions into '/usr'.
May 27 03:21:34.743198 systemd[1]: Reload requested from client PID 1256 ('systemd-sysext') (unit systemd-sysext.service)...
May 27 03:21:34.743337 systemd[1]: Reloading...
May 27 03:21:34.818351 zram_generator::config[1306]: No configuration found.
May 27 03:21:34.917396 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 27 03:21:34.965112 ldconfig[1251]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 27 03:21:35.003740 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 27 03:21:35.004332 systemd[1]: Reloading finished in 259 ms.
May 27 03:21:35.022231 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 27 03:21:35.023480 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 27 03:21:35.035419 systemd[1]: Starting ensure-sysext.service...
May 27 03:21:35.037534 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 27 03:21:35.067019 systemd[1]: Reload requested from client PID 1351 ('systemctl') (unit ensure-sysext.service)...
May 27 03:21:35.067151 systemd[1]: Reloading...
May 27 03:21:35.067463 systemd-tmpfiles[1352]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
May 27 03:21:35.067492 systemd-tmpfiles[1352]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
May 27 03:21:35.067956 systemd-tmpfiles[1352]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 27 03:21:35.070472 systemd-tmpfiles[1352]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 27 03:21:35.071097 systemd-tmpfiles[1352]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 27 03:21:35.071372 systemd-tmpfiles[1352]: ACLs are not supported, ignoring.
May 27 03:21:35.071415 systemd-tmpfiles[1352]: ACLs are not supported, ignoring.
May 27 03:21:35.080106 systemd-tmpfiles[1352]: Detected autofs mount point /boot during canonicalization of boot.
May 27 03:21:35.080121 systemd-tmpfiles[1352]: Skipping /boot
May 27 03:21:35.101771 systemd-tmpfiles[1352]: Detected autofs mount point /boot during canonicalization of boot.
May 27 03:21:35.102549 systemd-tmpfiles[1352]: Skipping /boot
May 27 03:21:35.156346 zram_generator::config[1379]: No configuration found.
May 27 03:21:35.242722 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 27 03:21:35.325642 systemd[1]: Reloading finished in 258 ms.
May 27 03:21:35.351513 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 27 03:21:35.352396 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 27 03:21:35.361794 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 27 03:21:35.369510 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 27 03:21:35.373065 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 27 03:21:35.383867 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 27 03:21:35.390268 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 27 03:21:35.402101 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 27 03:21:35.410830 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 27 03:21:35.410998 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 27 03:21:35.415533 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 27 03:21:35.418748 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 27 03:21:35.432187 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 27 03:21:35.434606 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 27 03:21:35.434827 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 27 03:21:35.441816 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 27 03:21:35.445377 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 27 03:21:35.446955 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 27 03:21:35.455648 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 27 03:21:35.459746 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 27 03:21:35.459944 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 27 03:21:35.460155 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 27 03:21:35.460294 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 27 03:21:35.461444 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 27 03:21:35.466426 augenrules[1455]: No rules
May 27 03:21:35.469397 systemd[1]: audit-rules.service: Deactivated successfully.
May 27 03:21:35.469621 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 27 03:21:35.471619 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 27 03:21:35.472093 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 27 03:21:35.474574 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 27 03:21:35.476501 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 27 03:21:35.489346 systemd-udevd[1434]: Using default interface naming scheme 'v255'.
May 27 03:21:35.489474 systemd[1]: Finished ensure-sysext.service.
May 27 03:21:35.494905 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 27 03:21:35.499011 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 27 03:21:35.499213 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 27 03:21:35.501299 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 27 03:21:35.503988 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 27 03:21:35.504139 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 27 03:21:35.505165 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 27 03:21:35.507287 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 27 03:21:35.507347 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 27 03:21:35.507404 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 27 03:21:35.507452 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 27 03:21:35.509475 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
May 27 03:21:35.510023 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 27 03:21:35.510396 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 27 03:21:35.511957 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 27 03:21:35.525125 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 27 03:21:35.525348 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 27 03:21:35.534812 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 27 03:21:35.538079 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 27 03:21:35.540899 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 27 03:21:35.618923 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
May 27 03:21:35.749698 kernel: mousedev: PS/2 mouse device common for all mice
May 27 03:21:35.754979 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
May 27 03:21:35.757007 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 27 03:21:35.777917 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 27 03:21:35.786338 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input5
May 27 03:21:35.800661 kernel: ACPI: button: Power Button [PWRF]
May 27 03:21:35.823824 systemd-networkd[1472]: lo: Link UP
May 27 03:21:35.823833 systemd-networkd[1472]: lo: Gained carrier
May 27 03:21:35.837010 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped.
May 27 03:21:35.837064 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 27 03:21:35.837151 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 27 03:21:35.838985 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 27 03:21:35.839241 systemd-timesyncd[1468]: No network connectivity, watching for changes.
May 27 03:21:35.841571 systemd-networkd[1472]: Enumeration completed
May 27 03:21:35.842279 systemd-networkd[1472]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 27 03:21:35.842282 systemd-networkd[1472]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 27 03:21:35.843012 systemd-networkd[1472]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 27 03:21:35.843019 systemd-networkd[1472]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
May 27 03:21:35.843791 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 27 03:21:35.844534 systemd-networkd[1472]: eth0: Link UP
May 27 03:21:35.844654 systemd-networkd[1472]: eth0: Gained carrier
May 27 03:21:35.844666 systemd-networkd[1472]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 27 03:21:35.845904 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 27 03:21:35.847464 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 27 03:21:35.847495 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 27 03:21:35.847518 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 27 03:21:35.847528 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 27 03:21:35.847646 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
May 27 03:21:35.848167 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 27 03:21:35.848709 systemd[1]: Reached target time-set.target - System Time Set.
May 27 03:21:35.855239 systemd-networkd[1472]: eth1: Link UP
May 27 03:21:35.855774 systemd-networkd[1472]: eth1: Gained carrier
May 27 03:21:35.855797 systemd-networkd[1472]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 27 03:21:35.856501 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
May 27 03:21:35.858624 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 27 03:21:35.864258 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 27 03:21:35.868905 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 27 03:21:35.872963 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 27 03:21:35.874389 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 27 03:21:35.877410 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 27 03:21:35.885243 kernel: [drm] pci: virtio-vga detected at 0000:00:01.0
May 27 03:21:35.885294 kernel: virtio-pci 0000:00:01.0: vgaarb: deactivate vga console
May 27 03:21:35.886699 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 27 03:21:35.886898 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 27 03:21:35.888423 systemd-networkd[1472]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1
May 27 03:21:35.889184 systemd-timesyncd[1468]: Network configuration changed, trying to establish connection.
May 27 03:21:35.889333 kernel: Console: switching to colour dummy device 80x25
May 27 03:21:35.892471 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 27 03:21:35.893902 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
May 27 03:21:35.893935 kernel: [drm] features: -context_init
May 27 03:21:35.897627 systemd-resolved[1433]: Positive Trust Anchors:
May 27 03:21:35.897646 systemd-resolved[1433]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 27 03:21:35.897676 systemd-resolved[1433]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 27 03:21:35.901375 systemd-networkd[1472]: eth0: DHCPv4 address 157.180.65.55/32, gateway 172.31.1.1 acquired from 172.31.1.1
May 27 03:21:35.901718 systemd-timesyncd[1468]: Network configuration changed, trying to establish connection.
May 27 03:21:35.901886 systemd-timesyncd[1468]: Network configuration changed, trying to establish connection.
May 27 03:21:35.904132 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
May 27 03:21:35.906451 systemd-resolved[1433]: Using system hostname 'ci-4344-0-0-e-876c439243'.
May 27 03:21:35.908041 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 27 03:21:35.908145 systemd[1]: Reached target network.target - Network.
May 27 03:21:35.908183 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 27 03:21:35.908243 systemd[1]: Reached target sysinit.target - System Initialization.
May 27 03:21:35.908373 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
May 27 03:21:35.908432 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
May 27 03:21:35.908471 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
May 27 03:21:35.908624 systemd[1]: Started logrotate.timer - Daily rotation of log files.
May 27 03:21:35.908766 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
May 27 03:21:35.908822 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
May 27 03:21:35.908862 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 27 03:21:35.908883 systemd[1]: Reached target paths.target - Path Units.
May 27 03:21:35.908916 systemd[1]: Reached target timers.target - Timer Units.
May 27 03:21:35.910080 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
May 27 03:21:35.911388 systemd[1]: Starting docker.socket - Docker Socket for the API...
May 27 03:21:35.914289 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
May 27 03:21:35.915156 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
May 27 03:21:35.915212 systemd[1]: Reached target ssh-access.target - SSH Access Available.
May 27 03:21:35.919736 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
May 27 03:21:35.920619 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
May 27 03:21:35.921260 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 27 03:21:35.921911 systemd[1]: Reached target sockets.target - Socket Units. May 27 03:21:35.922355 systemd[1]: Reached target basic.target - Basic System. May 27 03:21:35.922451 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 27 03:21:35.922468 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 27 03:21:35.923906 systemd[1]: Starting containerd.service - containerd container runtime... May 27 03:21:35.926510 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... May 27 03:21:35.927664 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 27 03:21:35.929519 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 27 03:21:35.934993 kernel: [drm] number of scanouts: 1 May 27 03:21:35.931494 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 27 03:21:35.940048 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 27 03:21:35.940135 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 27 03:21:35.949842 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... May 27 03:21:35.950957 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 27 03:21:35.953552 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 27 03:21:35.954991 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. May 27 03:21:35.957326 kernel: [drm] number of cap sets: 0 May 27 03:21:35.958572 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
May 27 03:21:35.961688 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 27 03:21:35.967611 google_oslogin_nss_cache[1545]: oslogin_cache_refresh[1545]: Refreshing passwd entry cache May 27 03:21:35.964835 oslogin_cache_refresh[1545]: Refreshing passwd entry cache May 27 03:21:35.968532 jq[1541]: false May 27 03:21:35.969493 systemd[1]: Starting systemd-logind.service - User Login Management... May 27 03:21:35.970288 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 27 03:21:35.971779 google_oslogin_nss_cache[1545]: oslogin_cache_refresh[1545]: Failure getting users, quitting May 27 03:21:35.971843 oslogin_cache_refresh[1545]: Failure getting users, quitting May 27 03:21:35.972280 google_oslogin_nss_cache[1545]: oslogin_cache_refresh[1545]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. May 27 03:21:35.972280 google_oslogin_nss_cache[1545]: oslogin_cache_refresh[1545]: Refreshing group entry cache May 27 03:21:35.971901 oslogin_cache_refresh[1545]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. May 27 03:21:35.971946 oslogin_cache_refresh[1545]: Refreshing group entry cache May 27 03:21:35.974100 google_oslogin_nss_cache[1545]: oslogin_cache_refresh[1545]: Failure getting groups, quitting May 27 03:21:35.974100 google_oslogin_nss_cache[1545]: oslogin_cache_refresh[1545]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. May 27 03:21:35.973381 oslogin_cache_refresh[1545]: Failure getting groups, quitting May 27 03:21:35.973387 oslogin_cache_refresh[1545]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. May 27 03:21:35.974609 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 27 03:21:35.977481 systemd[1]: Starting update-engine.service - Update Engine... 
May 27 03:21:35.980813 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 27 03:21:35.983764 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 27 03:21:35.984535 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 27 03:21:35.984713 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 27 03:21:35.984929 systemd[1]: google-oslogin-cache.service: Deactivated successfully. May 27 03:21:35.985067 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. May 27 03:21:35.992751 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 27 03:21:35.993413 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 27 03:21:36.013960 update_engine[1552]: I20250527 03:21:36.013168 1552 main.cc:92] Flatcar Update Engine starting May 27 03:21:36.014634 (ntainerd)[1560]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 27 03:21:36.032764 extend-filesystems[1542]: Found loop4 May 27 03:21:36.034778 extend-filesystems[1542]: Found loop5 May 27 03:21:36.034778 extend-filesystems[1542]: Found loop6 May 27 03:21:36.034778 extend-filesystems[1542]: Found loop7 May 27 03:21:36.034778 extend-filesystems[1542]: Found sda May 27 03:21:36.034778 extend-filesystems[1542]: Found sda1 May 27 03:21:36.034778 extend-filesystems[1542]: Found sda2 May 27 03:21:36.034778 extend-filesystems[1542]: Found sda3 May 27 03:21:36.034778 extend-filesystems[1542]: Found usr May 27 03:21:36.034778 extend-filesystems[1542]: Found sda4 May 27 03:21:36.034778 extend-filesystems[1542]: Found sda6 May 27 03:21:36.034778 extend-filesystems[1542]: Found sda7 May 27 03:21:36.034778 extend-filesystems[1542]: Found sda9 May 27 03:21:36.034778 extend-filesystems[1542]: Checking size of /dev/sda9 May 27 
03:21:36.039213 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt May 27 03:21:36.039422 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD May 27 03:21:36.046441 jq[1553]: true May 27 03:21:36.063342 coreos-metadata[1538]: May 27 03:21:36.062 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 May 27 03:21:36.070863 coreos-metadata[1538]: May 27 03:21:36.070 INFO Fetch successful May 27 03:21:36.071081 coreos-metadata[1538]: May 27 03:21:36.071 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 May 27 03:21:36.073392 coreos-metadata[1538]: May 27 03:21:36.072 INFO Fetch successful May 27 03:21:36.083473 dbus-daemon[1539]: [system] SELinux support is enabled May 27 03:21:36.150380 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks May 27 03:21:36.150407 extend-filesystems[1542]: Resized partition /dev/sda9 May 27 03:21:36.083704 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 27 03:21:36.150570 update_engine[1552]: I20250527 03:21:36.086072 1552 update_check_scheduler.cc:74] Next update check in 9m53s May 27 03:21:36.150595 tar[1555]: linux-amd64/helm May 27 03:21:36.151820 jq[1578]: true May 27 03:21:36.154200 extend-filesystems[1589]: resize2fs 1.47.2 (1-Jan-2025) May 27 03:21:36.088065 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 27 03:21:36.088095 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 27 03:21:36.088593 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). 
May 27 03:21:36.088610 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 27 03:21:36.153419 systemd[1]: motdgen.service: Deactivated successfully. May 27 03:21:36.153595 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 27 03:21:36.157193 systemd[1]: Started update-engine.service - Update Engine. May 27 03:21:36.184724 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:01.0 on minor 0 May 27 03:21:36.185361 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 27 03:21:36.326401 bash[1625]: Updated "/home/core/.ssh/authorized_keys" May 27 03:21:36.325122 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 27 03:21:36.330578 systemd[1]: Starting sshkeys.service... May 27 03:21:36.332205 systemd-logind[1551]: New seat seat0. May 27 03:21:36.337405 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. May 27 03:21:36.337687 systemd[1]: Started systemd-logind.service - User Login Management. May 27 03:21:36.338136 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 27 03:21:36.344347 kernel: EXT4-fs (sda9): resized filesystem to 9393147 May 27 03:21:36.375573 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. May 27 03:21:36.377817 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... May 27 03:21:36.391119 extend-filesystems[1589]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required May 27 03:21:36.391119 extend-filesystems[1589]: old_desc_blocks = 1, new_desc_blocks = 5 May 27 03:21:36.391119 extend-filesystems[1589]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. 
May 27 03:21:36.395664 extend-filesystems[1542]: Resized filesystem in /dev/sda9 May 27 03:21:36.395664 extend-filesystems[1542]: Found sr0 May 27 03:21:36.392810 systemd[1]: extend-filesystems.service: Deactivated successfully. May 27 03:21:36.393042 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 27 03:21:36.438286 containerd[1560]: time="2025-05-27T03:21:36Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 May 27 03:21:36.439092 containerd[1560]: time="2025-05-27T03:21:36.439073306Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 May 27 03:21:36.446334 kernel: EDAC MC: Ver: 3.0.0 May 27 03:21:36.450246 containerd[1560]: time="2025-05-27T03:21:36.450200081Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="8.956µs" May 27 03:21:36.450348 containerd[1560]: time="2025-05-27T03:21:36.450335555Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 May 27 03:21:36.450410 containerd[1560]: time="2025-05-27T03:21:36.450400957Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 May 27 03:21:36.450594 containerd[1560]: time="2025-05-27T03:21:36.450580124Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 May 27 03:21:36.450640 containerd[1560]: time="2025-05-27T03:21:36.450631881Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 May 27 03:21:36.450689 containerd[1560]: time="2025-05-27T03:21:36.450680682Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 27 03:21:36.450782 containerd[1560]: 
time="2025-05-27T03:21:36.450769008Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 27 03:21:36.450822 containerd[1560]: time="2025-05-27T03:21:36.450814072Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 27 03:21:36.451104 containerd[1560]: time="2025-05-27T03:21:36.451087054Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 27 03:21:36.451153 containerd[1560]: time="2025-05-27T03:21:36.451144752Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 27 03:21:36.451192 containerd[1560]: time="2025-05-27T03:21:36.451183135Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 27 03:21:36.451245 containerd[1560]: time="2025-05-27T03:21:36.451236825Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 May 27 03:21:36.451377 containerd[1560]: time="2025-05-27T03:21:36.451364374Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 May 27 03:21:36.451613 containerd[1560]: time="2025-05-27T03:21:36.451598253Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 27 03:21:36.451674 containerd[1560]: time="2025-05-27T03:21:36.451663656Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 27 
03:21:36.451722 containerd[1560]: time="2025-05-27T03:21:36.451711695Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 May 27 03:21:36.451810 containerd[1560]: time="2025-05-27T03:21:36.451794341Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 May 27 03:21:36.452112 containerd[1560]: time="2025-05-27T03:21:36.452081179Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 May 27 03:21:36.452236 containerd[1560]: time="2025-05-27T03:21:36.452207265Z" level=info msg="metadata content store policy set" policy=shared May 27 03:21:36.458737 containerd[1560]: time="2025-05-27T03:21:36.458676466Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 May 27 03:21:36.459754 containerd[1560]: time="2025-05-27T03:21:36.458835905Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 May 27 03:21:36.459754 containerd[1560]: time="2025-05-27T03:21:36.458852316Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 May 27 03:21:36.459754 containerd[1560]: time="2025-05-27T03:21:36.458868105Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 May 27 03:21:36.459754 containerd[1560]: time="2025-05-27T03:21:36.459361620Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 May 27 03:21:36.459754 containerd[1560]: time="2025-05-27T03:21:36.459383782Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 May 27 03:21:36.459754 containerd[1560]: time="2025-05-27T03:21:36.459399091Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 May 27 03:21:36.459754 containerd[1560]: 
time="2025-05-27T03:21:36.459409470Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 May 27 03:21:36.459754 containerd[1560]: time="2025-05-27T03:21:36.459418347Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 May 27 03:21:36.459754 containerd[1560]: time="2025-05-27T03:21:36.459428997Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 May 27 03:21:36.459754 containerd[1560]: time="2025-05-27T03:21:36.459438705Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 May 27 03:21:36.459754 containerd[1560]: time="2025-05-27T03:21:36.459455396Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 May 27 03:21:36.459754 containerd[1560]: time="2025-05-27T03:21:36.459559351Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 May 27 03:21:36.459754 containerd[1560]: time="2025-05-27T03:21:36.459576734Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 May 27 03:21:36.459754 containerd[1560]: time="2025-05-27T03:21:36.459589699Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 May 27 03:21:36.460023 containerd[1560]: time="2025-05-27T03:21:36.459598324Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 May 27 03:21:36.460023 containerd[1560]: time="2025-05-27T03:21:36.459607111Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 May 27 03:21:36.460023 containerd[1560]: time="2025-05-27T03:21:36.459615567Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 May 27 03:21:36.460023 containerd[1560]: time="2025-05-27T03:21:36.459625025Z" level=info 
msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 May 27 03:21:36.460023 containerd[1560]: time="2025-05-27T03:21:36.459633741Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 May 27 03:21:36.460023 containerd[1560]: time="2025-05-27T03:21:36.459643088Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 May 27 03:21:36.460023 containerd[1560]: time="2025-05-27T03:21:36.459651584Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 May 27 03:21:36.460023 containerd[1560]: time="2025-05-27T03:21:36.459660942Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 May 27 03:21:36.460023 containerd[1560]: time="2025-05-27T03:21:36.459721185Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" May 27 03:21:36.460023 containerd[1560]: time="2025-05-27T03:21:36.459733237Z" level=info msg="Start snapshots syncer" May 27 03:21:36.460611 containerd[1560]: time="2025-05-27T03:21:36.460238455Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 May 27 03:21:36.460611 containerd[1560]: time="2025-05-27T03:21:36.460521626Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" May 27 03:21:36.460730 containerd[1560]: time="2025-05-27T03:21:36.460564987Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 May 27 03:21:36.460804 containerd[1560]: time="2025-05-27T03:21:36.460791332Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 May 27 03:21:36.460948 containerd[1560]: time="2025-05-27T03:21:36.460933688Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 May 27 03:21:36.461188 containerd[1560]: time="2025-05-27T03:21:36.460993400Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 May 27 03:21:36.461188 containerd[1560]: time="2025-05-27T03:21:36.461004572Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 May 27 03:21:36.461188 containerd[1560]: time="2025-05-27T03:21:36.461014600Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 May 27 03:21:36.461188 containerd[1560]: time="2025-05-27T03:21:36.461024779Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 May 27 03:21:36.461188 containerd[1560]: time="2025-05-27T03:21:36.461033506Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 May 27 03:21:36.461188 containerd[1560]: time="2025-05-27T03:21:36.461042252Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 May 27 03:21:36.461188 containerd[1560]: time="2025-05-27T03:21:36.461061829Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 May 27 03:21:36.461188 containerd[1560]: time="2025-05-27T03:21:36.461076256Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 May 27 03:21:36.461188 containerd[1560]: time="2025-05-27T03:21:36.461085313Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 May 27 03:21:36.462521 containerd[1560]: time="2025-05-27T03:21:36.462212196Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 27 03:21:36.462521 containerd[1560]: time="2025-05-27T03:21:36.462247863Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 27 03:21:36.462521 containerd[1560]: time="2025-05-27T03:21:36.462298588Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 27 03:21:36.462521 containerd[1560]: time="2025-05-27T03:21:36.462330688Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 27 03:21:36.462521 containerd[1560]: time="2025-05-27T03:21:36.462337761Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 May 27 03:21:36.462521 containerd[1560]: time="2025-05-27T03:21:36.462346288Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 May 27 03:21:36.462521 containerd[1560]: time="2025-05-27T03:21:36.462354543Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 May 27 03:21:36.462521 containerd[1560]: time="2025-05-27T03:21:36.462367738Z" level=info msg="runtime interface created" May 27 03:21:36.462521 containerd[1560]: time="2025-05-27T03:21:36.462371925Z" level=info msg="created NRI interface" May 27 03:21:36.462521 containerd[1560]: time="2025-05-27T03:21:36.462378478Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 May 27 03:21:36.462521 containerd[1560]: time="2025-05-27T03:21:36.462387354Z" level=info msg="Connect containerd service" May 27 03:21:36.462521 containerd[1560]: time="2025-05-27T03:21:36.462407151Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 27 03:21:36.464024 
containerd[1560]: time="2025-05-27T03:21:36.463778934Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 27 03:21:36.484154 coreos-metadata[1631]: May 27 03:21:36.482 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 May 27 03:21:36.484154 coreos-metadata[1631]: May 27 03:21:36.483 INFO Fetch successful May 27 03:21:36.496603 unknown[1631]: wrote ssh authorized keys file for user: core May 27 03:21:36.552709 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 27 03:21:36.561975 update-ssh-keys[1647]: Updated "/home/core/.ssh/authorized_keys" May 27 03:21:36.562552 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). May 27 03:21:36.567249 systemd[1]: Finished sshkeys.service. May 27 03:21:36.578491 sshd_keygen[1580]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 27 03:21:36.607699 systemd-logind[1551]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 27 03:21:36.637688 systemd-logind[1551]: Watching system buttons on /dev/input/event3 (Power Button) May 27 03:21:36.670043 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 27 03:21:36.670579 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 27 03:21:36.671154 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 27 03:21:36.673262 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. 
May 27 03:21:36.678283 containerd[1560]: time="2025-05-27T03:21:36.674426741Z" level=info msg="Start subscribing containerd event" May 27 03:21:36.678283 containerd[1560]: time="2025-05-27T03:21:36.674480341Z" level=info msg="Start recovering state" May 27 03:21:36.678283 containerd[1560]: time="2025-05-27T03:21:36.674561193Z" level=info msg="Start event monitor" May 27 03:21:36.678283 containerd[1560]: time="2025-05-27T03:21:36.674573917Z" level=info msg="Start cni network conf syncer for default" May 27 03:21:36.678283 containerd[1560]: time="2025-05-27T03:21:36.674584046Z" level=info msg="Start streaming server" May 27 03:21:36.678283 containerd[1560]: time="2025-05-27T03:21:36.674593494Z" level=info msg="Registered namespace \"k8s.io\" with NRI" May 27 03:21:36.678283 containerd[1560]: time="2025-05-27T03:21:36.674599745Z" level=info msg="runtime interface starting up..." May 27 03:21:36.678283 containerd[1560]: time="2025-05-27T03:21:36.674604755Z" level=info msg="starting plugins..." May 27 03:21:36.678283 containerd[1560]: time="2025-05-27T03:21:36.674616486Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" May 27 03:21:36.678283 containerd[1560]: time="2025-05-27T03:21:36.675478894Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 27 03:21:36.678283 containerd[1560]: time="2025-05-27T03:21:36.675553784Z" level=info msg=serving... address=/run/containerd/containerd.sock May 27 03:21:36.679416 systemd[1]: Starting issuegen.service - Generate /run/issue... May 27 03:21:36.680470 containerd[1560]: time="2025-05-27T03:21:36.679528477Z" level=info msg="containerd successfully booted in 0.241623s" May 27 03:21:36.684419 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 27 03:21:36.684792 systemd[1]: Started containerd.service - containerd container runtime. May 27 03:21:36.729104 systemd[1]: issuegen.service: Deactivated successfully. 
May 27 03:21:36.729435 systemd[1]: Finished issuegen.service - Generate /run/issue. May 27 03:21:36.734857 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 27 03:21:36.791821 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 27 03:21:36.796824 systemd[1]: Started getty@tty1.service - Getty on tty1. May 27 03:21:36.799607 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. May 27 03:21:36.799801 systemd[1]: Reached target getty.target - Login Prompts. May 27 03:21:36.821542 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 27 03:21:36.831235 locksmithd[1598]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 27 03:21:37.014090 tar[1555]: linux-amd64/LICENSE May 27 03:21:37.014384 tar[1555]: linux-amd64/README.md May 27 03:21:37.028672 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 27 03:21:37.065576 systemd-networkd[1472]: eth1: Gained IPv6LL May 27 03:21:37.066432 systemd-timesyncd[1468]: Network configuration changed, trying to establish connection. May 27 03:21:37.068238 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 27 03:21:37.069043 systemd[1]: Reached target network-online.target - Network is Online. May 27 03:21:37.070866 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 03:21:37.072477 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 27 03:21:37.096665 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 27 03:21:37.129517 systemd-networkd[1472]: eth0: Gained IPv6LL May 27 03:21:37.130922 systemd-timesyncd[1468]: Network configuration changed, trying to establish connection. May 27 03:21:38.370936 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 27 03:21:38.372526 systemd[1]: Reached target multi-user.target - Multi-User System. 
May 27 03:21:38.374480 systemd[1]: Startup finished in 3.580s (kernel) + 12.799s (initrd) + 4.954s (userspace) = 21.335s. May 27 03:21:38.381073 (kubelet)[1713]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 27 03:21:39.215232 kubelet[1713]: E0527 03:21:39.215133 1713 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 27 03:21:39.218194 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 27 03:21:39.218549 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 27 03:21:39.219048 systemd[1]: kubelet.service: Consumed 1.482s CPU time, 265M memory peak. May 27 03:21:49.469492 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 27 03:21:49.473113 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 03:21:49.654380 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 27 03:21:49.662520 (kubelet)[1732]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 27 03:21:49.708964 kubelet[1732]: E0527 03:21:49.708847 1732 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 27 03:21:49.715745 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 27 03:21:49.715876 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 27 03:21:49.716121 systemd[1]: kubelet.service: Consumed 190ms CPU time, 111.4M memory peak. May 27 03:21:50.746673 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 27 03:21:50.748518 systemd[1]: Started sshd@0-157.180.65.55:22-194.165.16.162:65188.service - OpenSSH per-connection server daemon (194.165.16.162:65188). May 27 03:21:50.796117 sshd[1741]: banner exchange: Connection from 194.165.16.162 port 65188: invalid format May 27 03:21:50.796822 systemd[1]: sshd@0-157.180.65.55:22-194.165.16.162:65188.service: Deactivated successfully. May 27 03:21:59.897755 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 27 03:21:59.900116 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 03:22:00.039950 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 27 03:22:00.050692 (kubelet)[1752]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 27 03:22:00.108529 kubelet[1752]: E0527 03:22:00.108458 1752 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 27 03:22:00.111779 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 27 03:22:00.111908 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 27 03:22:00.112167 systemd[1]: kubelet.service: Consumed 168ms CPU time, 109.1M memory peak.
May 27 03:22:07.371536 systemd-timesyncd[1468]: Contacted time server 78.46.204.247:123 (2.flatcar.pool.ntp.org).
May 27 03:22:07.371618 systemd-timesyncd[1468]: Initial clock synchronization to Tue 2025-05-27 03:22:07.175635 UTC.
May 27 03:22:10.147109 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
May 27 03:22:10.149006 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 27 03:22:10.388252 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 27 03:22:10.399660 (kubelet)[1768]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 27 03:22:10.453773 kubelet[1768]: E0527 03:22:10.453691 1768 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 27 03:22:10.456897 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 27 03:22:10.457115 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 27 03:22:10.457609 systemd[1]: kubelet.service: Consumed 206ms CPU time, 109M memory peak.
May 27 03:22:20.647564 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
May 27 03:22:20.650190 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 27 03:22:20.807183 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 27 03:22:20.819638 (kubelet)[1783]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 27 03:22:20.865558 kubelet[1783]: E0527 03:22:20.865457 1783 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 27 03:22:20.868271 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 27 03:22:20.868552 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 27 03:22:20.869036 systemd[1]: kubelet.service: Consumed 161ms CPU time, 108.6M memory peak.
May 27 03:22:21.073619 update_engine[1552]: I20250527 03:22:21.073431 1552 update_attempter.cc:509] Updating boot flags...
May 27 03:22:30.897163 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
May 27 03:22:30.899590 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 27 03:22:31.078536 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 27 03:22:31.083516 (kubelet)[1823]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 27 03:22:31.121362 kubelet[1823]: E0527 03:22:31.121273 1823 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 27 03:22:31.124162 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 27 03:22:31.124648 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 27 03:22:31.125556 systemd[1]: kubelet.service: Consumed 147ms CPU time, 110.4M memory peak.
May 27 03:22:41.147564 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
May 27 03:22:41.149998 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 27 03:22:41.331819 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 27 03:22:41.340544 (kubelet)[1838]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 27 03:22:41.383473 kubelet[1838]: E0527 03:22:41.383391 1838 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 27 03:22:41.387229 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 27 03:22:41.387440 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 27 03:22:41.387803 systemd[1]: kubelet.service: Consumed 185ms CPU time, 110.3M memory peak.
May 27 03:22:51.397292 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7.
May 27 03:22:51.399900 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 27 03:22:51.596015 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 27 03:22:51.608646 (kubelet)[1853]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 27 03:22:51.642226 kubelet[1853]: E0527 03:22:51.642142 1853 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 27 03:22:51.644360 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 27 03:22:51.644585 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 27 03:22:51.644995 systemd[1]: kubelet.service: Consumed 178ms CPU time, 110.2M memory peak.
May 27 03:23:01.647168 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8.
May 27 03:23:01.648797 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 27 03:23:01.806239 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 27 03:23:01.817654 (kubelet)[1868]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 27 03:23:01.855636 kubelet[1868]: E0527 03:23:01.855590 1868 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 27 03:23:01.857910 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 27 03:23:01.858165 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 27 03:23:01.858559 systemd[1]: kubelet.service: Consumed 157ms CPU time, 108.4M memory peak.
May 27 03:23:11.897562 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9.
May 27 03:23:11.900076 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 27 03:23:12.112071 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
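The roughly ten-second gap between each "Scheduled restart job" entry and the next start attempt is what systemd's restart settings for the unit would produce. A sketch of the kind of drop-in that yields this behavior — the file path and values are typical of kubeadm-style kubelet units, assumed here rather than read from this host:

```ini
; hypothetical /etc/systemd/system/kubelet.service.d/10-restart.conf (sketch)
[Service]
Restart=always
RestartSec=10
```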
May 27 03:23:12.121600 (kubelet)[1884]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 27 03:23:12.166260 kubelet[1884]: E0527 03:23:12.166141 1884 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 27 03:23:12.168784 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 27 03:23:12.169006 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 27 03:23:12.169477 systemd[1]: kubelet.service: Consumed 194ms CPU time, 110.4M memory peak.
May 27 03:23:17.012629 systemd[1]: Started sshd@1-157.180.65.55:22-139.178.89.65:58244.service - OpenSSH per-connection server daemon (139.178.89.65:58244).
May 27 03:23:18.022832 sshd[1892]: Accepted publickey for core from 139.178.89.65 port 58244 ssh2: RSA SHA256:TYEcoyDNyh3JcDi4HtBO5FTvj/3v8ZbDC5jXQ/LuvgI
May 27 03:23:18.025844 sshd-session[1892]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:23:18.037863 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
May 27 03:23:18.039992 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
May 27 03:23:18.054417 systemd-logind[1551]: New session 1 of user core.
May 27 03:23:18.074605 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
May 27 03:23:18.079226 systemd[1]: Starting user@500.service - User Manager for UID 500...
May 27 03:23:18.097614 (systemd)[1896]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
May 27 03:23:18.101038 systemd-logind[1551]: New session c1 of user core.
May 27 03:23:18.277840 systemd[1896]: Queued start job for default target default.target.
May 27 03:23:18.284161 systemd[1896]: Created slice app.slice - User Application Slice.
May 27 03:23:18.284189 systemd[1896]: Reached target paths.target - Paths.
May 27 03:23:18.284294 systemd[1896]: Reached target timers.target - Timers.
May 27 03:23:18.285529 systemd[1896]: Starting dbus.socket - D-Bus User Message Bus Socket...
May 27 03:23:18.303371 systemd[1896]: Listening on dbus.socket - D-Bus User Message Bus Socket.
May 27 03:23:18.303571 systemd[1896]: Reached target sockets.target - Sockets.
May 27 03:23:18.303642 systemd[1896]: Reached target basic.target - Basic System.
May 27 03:23:18.303695 systemd[1896]: Reached target default.target - Main User Target.
May 27 03:23:18.303735 systemd[1896]: Startup finished in 196ms.
May 27 03:23:18.303927 systemd[1]: Started user@500.service - User Manager for UID 500.
May 27 03:23:18.316635 systemd[1]: Started session-1.scope - Session 1 of User core.
May 27 03:23:19.003383 systemd[1]: Started sshd@2-157.180.65.55:22-139.178.89.65:58250.service - OpenSSH per-connection server daemon (139.178.89.65:58250).
May 27 03:23:19.983058 sshd[1907]: Accepted publickey for core from 139.178.89.65 port 58250 ssh2: RSA SHA256:TYEcoyDNyh3JcDi4HtBO5FTvj/3v8ZbDC5jXQ/LuvgI
May 27 03:23:19.984894 sshd-session[1907]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:23:19.992639 systemd-logind[1551]: New session 2 of user core.
May 27 03:23:20.001699 systemd[1]: Started session-2.scope - Session 2 of User core.
May 27 03:23:20.662944 sshd[1909]: Connection closed by 139.178.89.65 port 58250
May 27 03:23:20.663837 sshd-session[1907]: pam_unix(sshd:session): session closed for user core
May 27 03:23:20.668200 systemd[1]: sshd@2-157.180.65.55:22-139.178.89.65:58250.service: Deactivated successfully.
May 27 03:23:20.671171 systemd[1]: session-2.scope: Deactivated successfully.
May 27 03:23:20.674573 systemd-logind[1551]: Session 2 logged out. Waiting for processes to exit.
May 27 03:23:20.676666 systemd-logind[1551]: Removed session 2.
May 27 03:23:20.834131 systemd[1]: Started sshd@3-157.180.65.55:22-139.178.89.65:58258.service - OpenSSH per-connection server daemon (139.178.89.65:58258).
May 27 03:23:21.818858 sshd[1915]: Accepted publickey for core from 139.178.89.65 port 58258 ssh2: RSA SHA256:TYEcoyDNyh3JcDi4HtBO5FTvj/3v8ZbDC5jXQ/LuvgI
May 27 03:23:21.820538 sshd-session[1915]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:23:21.827037 systemd-logind[1551]: New session 3 of user core.
May 27 03:23:21.839459 systemd[1]: Started session-3.scope - Session 3 of User core.
May 27 03:23:22.397952 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10.
May 27 03:23:22.401118 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 27 03:23:22.492339 sshd[1917]: Connection closed by 139.178.89.65 port 58258
May 27 03:23:22.493536 sshd-session[1915]: pam_unix(sshd:session): session closed for user core
May 27 03:23:22.499112 systemd[1]: sshd@3-157.180.65.55:22-139.178.89.65:58258.service: Deactivated successfully.
May 27 03:23:22.502821 systemd[1]: session-3.scope: Deactivated successfully.
May 27 03:23:22.507187 systemd-logind[1551]: Session 3 logged out. Waiting for processes to exit.
May 27 03:23:22.510278 systemd-logind[1551]: Removed session 3.
May 27 03:23:22.578896 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 27 03:23:22.599924 (kubelet)[1930]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 27 03:23:22.645251 kubelet[1930]: E0527 03:23:22.645168 1930 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 27 03:23:22.648522 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 27 03:23:22.648795 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 27 03:23:22.649904 systemd[1]: kubelet.service: Consumed 187ms CPU time, 110.1M memory peak.
May 27 03:23:22.671740 systemd[1]: Started sshd@4-157.180.65.55:22-139.178.89.65:58264.service - OpenSSH per-connection server daemon (139.178.89.65:58264).
May 27 03:23:23.671159 sshd[1938]: Accepted publickey for core from 139.178.89.65 port 58264 ssh2: RSA SHA256:TYEcoyDNyh3JcDi4HtBO5FTvj/3v8ZbDC5jXQ/LuvgI
May 27 03:23:23.672464 sshd-session[1938]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:23:23.678116 systemd-logind[1551]: New session 4 of user core.
May 27 03:23:23.686668 systemd[1]: Started session-4.scope - Session 4 of User core.
May 27 03:23:24.350731 sshd[1940]: Connection closed by 139.178.89.65 port 58264
May 27 03:23:24.351356 sshd-session[1938]: pam_unix(sshd:session): session closed for user core
May 27 03:23:24.355289 systemd-logind[1551]: Session 4 logged out. Waiting for processes to exit.
May 27 03:23:24.355576 systemd[1]: sshd@4-157.180.65.55:22-139.178.89.65:58264.service: Deactivated successfully.
May 27 03:23:24.357234 systemd[1]: session-4.scope: Deactivated successfully.
May 27 03:23:24.358414 systemd-logind[1551]: Removed session 4.
May 27 03:23:24.516870 systemd[1]: Started sshd@5-157.180.65.55:22-139.178.89.65:45658.service - OpenSSH per-connection server daemon (139.178.89.65:45658).
May 27 03:23:25.513368 sshd[1946]: Accepted publickey for core from 139.178.89.65 port 45658 ssh2: RSA SHA256:TYEcoyDNyh3JcDi4HtBO5FTvj/3v8ZbDC5jXQ/LuvgI
May 27 03:23:25.515213 sshd-session[1946]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:23:25.523856 systemd-logind[1551]: New session 5 of user core.
May 27 03:23:25.526532 systemd[1]: Started session-5.scope - Session 5 of User core.
May 27 03:23:26.046421 sudo[1949]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
May 27 03:23:26.046873 sudo[1949]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 27 03:23:26.064088 sudo[1949]: pam_unix(sudo:session): session closed for user root
May 27 03:23:26.222355 sshd[1948]: Connection closed by 139.178.89.65 port 45658
May 27 03:23:26.223623 sshd-session[1946]: pam_unix(sshd:session): session closed for user core
May 27 03:23:26.230072 systemd[1]: sshd@5-157.180.65.55:22-139.178.89.65:45658.service: Deactivated successfully.
May 27 03:23:26.232713 systemd[1]: session-5.scope: Deactivated successfully.
May 27 03:23:26.234562 systemd-logind[1551]: Session 5 logged out. Waiting for processes to exit.
May 27 03:23:26.236605 systemd-logind[1551]: Removed session 5.
May 27 03:23:26.399134 systemd[1]: Started sshd@6-157.180.65.55:22-139.178.89.65:45664.service - OpenSSH per-connection server daemon (139.178.89.65:45664).
May 27 03:23:27.404441 sshd[1955]: Accepted publickey for core from 139.178.89.65 port 45664 ssh2: RSA SHA256:TYEcoyDNyh3JcDi4HtBO5FTvj/3v8ZbDC5jXQ/LuvgI
May 27 03:23:27.406271 sshd-session[1955]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:23:27.414527 systemd-logind[1551]: New session 6 of user core.
May 27 03:23:27.421534 systemd[1]: Started session-6.scope - Session 6 of User core.
May 27 03:23:27.923358 sudo[1959]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
May 27 03:23:27.923828 sudo[1959]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 27 03:23:27.930700 sudo[1959]: pam_unix(sudo:session): session closed for user root
May 27 03:23:27.938806 sudo[1958]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
May 27 03:23:27.939216 sudo[1958]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 27 03:23:27.955111 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 27 03:23:28.004612 augenrules[1981]: No rules
May 27 03:23:28.005887 systemd[1]: audit-rules.service: Deactivated successfully.
May 27 03:23:28.006140 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 27 03:23:28.007637 sudo[1958]: pam_unix(sudo:session): session closed for user root
May 27 03:23:28.165276 sshd[1957]: Connection closed by 139.178.89.65 port 45664
May 27 03:23:28.166136 sshd-session[1955]: pam_unix(sshd:session): session closed for user core
May 27 03:23:28.171570 systemd[1]: sshd@6-157.180.65.55:22-139.178.89.65:45664.service: Deactivated successfully.
May 27 03:23:28.173924 systemd[1]: session-6.scope: Deactivated successfully.
May 27 03:23:28.175888 systemd-logind[1551]: Session 6 logged out. Waiting for processes to exit.
May 27 03:23:28.178029 systemd-logind[1551]: Removed session 6.
May 27 03:23:28.349690 systemd[1]: Started sshd@7-157.180.65.55:22-139.178.89.65:45672.service - OpenSSH per-connection server daemon (139.178.89.65:45672).
May 27 03:23:29.354843 sshd[1990]: Accepted publickey for core from 139.178.89.65 port 45672 ssh2: RSA SHA256:TYEcoyDNyh3JcDi4HtBO5FTvj/3v8ZbDC5jXQ/LuvgI
May 27 03:23:29.357038 sshd-session[1990]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:23:29.366748 systemd-logind[1551]: New session 7 of user core.
May 27 03:23:29.369621 systemd[1]: Started session-7.scope - Session 7 of User core.
May 27 03:23:29.876965 sudo[1993]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
May 27 03:23:29.877401 sudo[1993]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 27 03:23:30.364798 systemd[1]: Starting docker.service - Docker Application Container Engine...
May 27 03:23:30.388003 (dockerd)[2011]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
May 27 03:23:30.769671 dockerd[2011]: time="2025-05-27T03:23:30.769578587Z" level=info msg="Starting up"
May 27 03:23:30.772545 dockerd[2011]: time="2025-05-27T03:23:30.772463798Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
May 27 03:23:30.861989 dockerd[2011]: time="2025-05-27T03:23:30.861775988Z" level=info msg="Loading containers: start."
May 27 03:23:30.873332 kernel: Initializing XFRM netlink socket
May 27 03:23:31.149587 systemd-networkd[1472]: docker0: Link UP
May 27 03:23:31.155354 dockerd[2011]: time="2025-05-27T03:23:31.155253273Z" level=info msg="Loading containers: done."
May 27 03:23:31.175836 dockerd[2011]: time="2025-05-27T03:23:31.175749135Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
May 27 03:23:31.176030 dockerd[2011]: time="2025-05-27T03:23:31.175858189Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1
May 27 03:23:31.176030 dockerd[2011]: time="2025-05-27T03:23:31.175963287Z" level=info msg="Initializing buildkit"
May 27 03:23:31.210638 dockerd[2011]: time="2025-05-27T03:23:31.210566523Z" level=info msg="Completed buildkit initialization"
May 27 03:23:31.217643 dockerd[2011]: time="2025-05-27T03:23:31.217568755Z" level=info msg="Daemon has completed initialization"
May 27 03:23:31.217850 systemd[1]: Started docker.service - Docker Application Container Engine.
May 27 03:23:31.218398 dockerd[2011]: time="2025-05-27T03:23:31.218159407Z" level=info msg="API listen on /run/docker.sock"
May 27 03:23:32.480111 containerd[1560]: time="2025-05-27T03:23:32.480040142Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.9\""
May 27 03:23:32.897502 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11.
May 27 03:23:32.901412 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 27 03:23:33.061984 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 27 03:23:33.070700 (kubelet)[2223]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 27 03:23:33.108388 kubelet[2223]: E0527 03:23:33.108252 2223 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 27 03:23:33.115726 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 27 03:23:33.115831 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 27 03:23:33.116036 systemd[1]: kubelet.service: Consumed 168ms CPU time, 108.7M memory peak.
May 27 03:23:33.126429 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2670602296.mount: Deactivated successfully.
May 27 03:23:35.122486 containerd[1560]: time="2025-05-27T03:23:35.122410018Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:23:35.123750 containerd[1560]: time="2025-05-27T03:23:35.123709453Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.9: active requests=0, bytes read=28078939"
May 27 03:23:35.125316 containerd[1560]: time="2025-05-27T03:23:35.125263519Z" level=info msg="ImageCreate event name:\"sha256:0c19e0eafbdfffa1317cf99a16478265a4cd746ef677de27b0be6a8b515f36b1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:23:35.128217 containerd[1560]: time="2025-05-27T03:23:35.128146335Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5b68f0df22013422dc8fb9ddfcff513eb6fc92f9dbf8aae41555c895efef5a20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:23:35.129007 containerd[1560]: time="2025-05-27T03:23:35.128801136Z"
level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.9\" with image id \"sha256:0c19e0eafbdfffa1317cf99a16478265a4cd746ef677de27b0be6a8b515f36b1\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:5b68f0df22013422dc8fb9ddfcff513eb6fc92f9dbf8aae41555c895efef5a20\", size \"28075645\" in 2.648673189s"
May 27 03:23:35.129007 containerd[1560]: time="2025-05-27T03:23:35.128838376Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.9\" returns image reference \"sha256:0c19e0eafbdfffa1317cf99a16478265a4cd746ef677de27b0be6a8b515f36b1\""
May 27 03:23:35.129821 containerd[1560]: time="2025-05-27T03:23:35.129801499Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.9\""
May 27 03:23:37.089260 containerd[1560]: time="2025-05-27T03:23:37.089183244Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:23:37.090436 containerd[1560]: time="2025-05-27T03:23:37.090397879Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.9: active requests=0, bytes read=24713544"
May 27 03:23:37.091340 containerd[1560]: time="2025-05-27T03:23:37.091287945Z" level=info msg="ImageCreate event name:\"sha256:6aa3d581404ae6ae5dc355cb750aaedec843d2c99263d28fce50277e8e2a6ec2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:23:37.093520 containerd[1560]: time="2025-05-27T03:23:37.093478567Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:be9e7987d323b38a12e28436cff6d6ec6fc31ffdd3ea11eaa9d74852e9d31248\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:23:37.094506 containerd[1560]: time="2025-05-27T03:23:37.094335409Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.9\" with image id
\"sha256:6aa3d581404ae6ae5dc355cb750aaedec843d2c99263d28fce50277e8e2a6ec2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:be9e7987d323b38a12e28436cff6d6ec6fc31ffdd3ea11eaa9d74852e9d31248\", size \"26315362\" in 1.964424172s"
May 27 03:23:37.094506 containerd[1560]: time="2025-05-27T03:23:37.094360828Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.9\" returns image reference \"sha256:6aa3d581404ae6ae5dc355cb750aaedec843d2c99263d28fce50277e8e2a6ec2\""
May 27 03:23:37.095036 containerd[1560]: time="2025-05-27T03:23:37.094996203Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.9\""
May 27 03:23:38.719546 containerd[1560]: time="2025-05-27T03:23:38.719482081Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:23:38.720559 containerd[1560]: time="2025-05-27T03:23:38.720515045Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.9: active requests=0, bytes read=18784333"
May 27 03:23:38.721338 containerd[1560]: time="2025-05-27T03:23:38.721289613Z" level=info msg="ImageCreate event name:\"sha256:737ed3eafaf27a28ea9e13b736011bfed5bd349785ac6bc220b34eaf4adc51e3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:23:38.724619 containerd[1560]: time="2025-05-27T03:23:38.724562441Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:eb358c7346bb17ab2c639c3ff8ab76a147dec7ae609f5c0c2800233e42253ed1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:23:38.725990 containerd[1560]: time="2025-05-27T03:23:38.725873559Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.9\" with image id \"sha256:737ed3eafaf27a28ea9e13b736011bfed5bd349785ac6bc220b34eaf4adc51e3\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.9\", repo digest
\"registry.k8s.io/kube-scheduler@sha256:eb358c7346bb17ab2c639c3ff8ab76a147dec7ae609f5c0c2800233e42253ed1\", size \"20386169\" in 1.630834284s"
May 27 03:23:38.725990 containerd[1560]: time="2025-05-27T03:23:38.725917982Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.9\" returns image reference \"sha256:737ed3eafaf27a28ea9e13b736011bfed5bd349785ac6bc220b34eaf4adc51e3\""
May 27 03:23:38.726502 containerd[1560]: time="2025-05-27T03:23:38.726471433Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.9\""
May 27 03:23:39.819488 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1407731703.mount: Deactivated successfully.
May 27 03:23:40.295880 containerd[1560]: time="2025-05-27T03:23:40.295765906Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:23:40.296866 containerd[1560]: time="2025-05-27T03:23:40.296830029Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.9: active requests=0, bytes read=30355651"
May 27 03:23:40.297729 containerd[1560]: time="2025-05-27T03:23:40.297651455Z" level=info msg="ImageCreate event name:\"sha256:11a47a71ed3ecf643e15a11990daed3b656279449ba9344db0b54652c4723578\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:23:40.298997 containerd[1560]: time="2025-05-27T03:23:40.298955919Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:fdf026cf2434537e499e9c739d189ca8fc57101d929ac5ccd8e24f979a9738c1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:23:40.299343 containerd[1560]: time="2025-05-27T03:23:40.299320564Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.9\" with image id \"sha256:11a47a71ed3ecf643e15a11990daed3b656279449ba9344db0b54652c4723578\", repo tag \"registry.k8s.io/kube-proxy:v1.31.9\", repo digest
\"registry.k8s.io/kube-proxy@sha256:fdf026cf2434537e499e9c739d189ca8fc57101d929ac5ccd8e24f979a9738c1\", size \"30354642\" in 1.572817591s"
May 27 03:23:40.299414 containerd[1560]: time="2025-05-27T03:23:40.299402570Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.9\" returns image reference \"sha256:11a47a71ed3ecf643e15a11990daed3b656279449ba9344db0b54652c4723578\""
May 27 03:23:40.299929 containerd[1560]: time="2025-05-27T03:23:40.299818502Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
May 27 03:23:40.787986 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount26172584.mount: Deactivated successfully.
May 27 03:23:41.593457 containerd[1560]: time="2025-05-27T03:23:41.593401447Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:23:41.594500 containerd[1560]: time="2025-05-27T03:23:41.594468816Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565335"
May 27 03:23:41.595517 containerd[1560]: time="2025-05-27T03:23:41.595470882Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:23:41.597661 containerd[1560]: time="2025-05-27T03:23:41.597620726Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:23:41.598825 containerd[1560]: time="2025-05-27T03:23:41.598686281Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest
\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.298666882s" May 27 03:23:41.598825 containerd[1560]: time="2025-05-27T03:23:41.598723441Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" May 27 03:23:41.599632 containerd[1560]: time="2025-05-27T03:23:41.599604038Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 27 03:23:42.054247 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount438888064.mount: Deactivated successfully. May 27 03:23:42.062120 containerd[1560]: time="2025-05-27T03:23:42.062049566Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 27 03:23:42.063234 containerd[1560]: time="2025-05-27T03:23:42.063196544Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321160" May 27 03:23:42.065083 containerd[1560]: time="2025-05-27T03:23:42.065046475Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 27 03:23:42.068905 containerd[1560]: time="2025-05-27T03:23:42.068786461Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 27 03:23:42.070041 containerd[1560]: time="2025-05-27T03:23:42.069856846Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag 
\"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 470.220978ms" May 27 03:23:42.070041 containerd[1560]: time="2025-05-27T03:23:42.069903454Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" May 27 03:23:42.071144 containerd[1560]: time="2025-05-27T03:23:42.070932499Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" May 27 03:23:42.587743 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3630059700.mount: Deactivated successfully. May 27 03:23:43.147499 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12. May 27 03:23:43.150831 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 03:23:43.310431 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 27 03:23:43.319561 (kubelet)[2417]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 27 03:23:43.363427 kubelet[2417]: E0527 03:23:43.363367 2417 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 27 03:23:43.366465 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 27 03:23:43.366717 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 27 03:23:43.367173 systemd[1]: kubelet.service: Consumed 154ms CPU time, 107.8M memory peak. 
May 27 03:23:43.970417 containerd[1560]: time="2025-05-27T03:23:43.970359915Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:23:43.971469 containerd[1560]: time="2025-05-27T03:23:43.971438793Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780083" May 27 03:23:43.972636 containerd[1560]: time="2025-05-27T03:23:43.972597531Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:23:43.975156 containerd[1560]: time="2025-05-27T03:23:43.975103741Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:23:43.976326 containerd[1560]: time="2025-05-27T03:23:43.976080989Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 1.905113114s" May 27 03:23:43.976326 containerd[1560]: time="2025-05-27T03:23:43.976114672Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" May 27 03:23:46.767938 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 27 03:23:46.768173 systemd[1]: kubelet.service: Consumed 154ms CPU time, 107.8M memory peak. May 27 03:23:46.771238 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 03:23:46.813112 systemd[1]: Reload requested from client PID 2453 ('systemctl') (unit session-7.scope)... 
May 27 03:23:46.813136 systemd[1]: Reloading... May 27 03:23:46.927369 zram_generator::config[2500]: No configuration found. May 27 03:23:47.015971 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 27 03:23:47.134932 systemd[1]: Reloading finished in 321 ms. May 27 03:23:47.167135 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 27 03:23:47.167212 systemd[1]: kubelet.service: Failed with result 'signal'. May 27 03:23:47.167606 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 27 03:23:47.167667 systemd[1]: kubelet.service: Consumed 110ms CPU time, 98.1M memory peak. May 27 03:23:47.169923 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 03:23:47.338136 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 27 03:23:47.349741 (kubelet)[2550]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 27 03:23:47.416291 kubelet[2550]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 27 03:23:47.416291 kubelet[2550]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 27 03:23:47.416291 kubelet[2550]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 27 03:23:47.416769 kubelet[2550]: I0527 03:23:47.416427 2550 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 27 03:23:48.030223 kubelet[2550]: I0527 03:23:48.030164 2550 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" May 27 03:23:48.030223 kubelet[2550]: I0527 03:23:48.030208 2550 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 27 03:23:48.030759 kubelet[2550]: I0527 03:23:48.030537 2550 server.go:934] "Client rotation is on, will bootstrap in background" May 27 03:23:48.078191 kubelet[2550]: I0527 03:23:48.078141 2550 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 27 03:23:48.078607 kubelet[2550]: E0527 03:23:48.078569 2550 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://157.180.65.55:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 157.180.65.55:6443: connect: connection refused" logger="UnhandledError" May 27 03:23:48.090706 kubelet[2550]: I0527 03:23:48.090525 2550 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 27 03:23:48.100153 kubelet[2550]: I0527 03:23:48.100097 2550 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 27 03:23:48.103799 kubelet[2550]: I0527 03:23:48.103751 2550 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 27 03:23:48.104073 kubelet[2550]: I0527 03:23:48.104016 2550 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 27 03:23:48.104378 kubelet[2550]: I0527 03:23:48.104062 2550 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4344-0-0-e-876c439243","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","Topolog
yManagerPolicyOptions":null,"CgroupVersion":2} May 27 03:23:48.104570 kubelet[2550]: I0527 03:23:48.104383 2550 topology_manager.go:138] "Creating topology manager with none policy" May 27 03:23:48.104570 kubelet[2550]: I0527 03:23:48.104402 2550 container_manager_linux.go:300] "Creating device plugin manager" May 27 03:23:48.105460 kubelet[2550]: I0527 03:23:48.105425 2550 state_mem.go:36] "Initialized new in-memory state store" May 27 03:23:48.109577 kubelet[2550]: I0527 03:23:48.109429 2550 kubelet.go:408] "Attempting to sync node with API server" May 27 03:23:48.109577 kubelet[2550]: I0527 03:23:48.109463 2550 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 27 03:23:48.111857 kubelet[2550]: I0527 03:23:48.111831 2550 kubelet.go:314] "Adding apiserver pod source" May 27 03:23:48.112193 kubelet[2550]: I0527 03:23:48.111968 2550 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 27 03:23:48.119686 kubelet[2550]: W0527 03:23:48.119589 2550 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://157.180.65.55:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4344-0-0-e-876c439243&limit=500&resourceVersion=0": dial tcp 157.180.65.55:6443: connect: connection refused May 27 03:23:48.119825 kubelet[2550]: E0527 03:23:48.119715 2550 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://157.180.65.55:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4344-0-0-e-876c439243&limit=500&resourceVersion=0\": dial tcp 157.180.65.55:6443: connect: connection refused" logger="UnhandledError" May 27 03:23:48.124298 kubelet[2550]: W0527 03:23:48.124224 2550 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://157.180.65.55:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 157.180.65.55:6443: 
connect: connection refused May 27 03:23:48.124460 kubelet[2550]: E0527 03:23:48.124346 2550 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://157.180.65.55:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 157.180.65.55:6443: connect: connection refused" logger="UnhandledError" May 27 03:23:48.126335 kubelet[2550]: I0527 03:23:48.124505 2550 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 27 03:23:48.129251 kubelet[2550]: I0527 03:23:48.129214 2550 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 27 03:23:48.130210 kubelet[2550]: W0527 03:23:48.130181 2550 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 27 03:23:48.132749 kubelet[2550]: I0527 03:23:48.132721 2550 server.go:1274] "Started kubelet" May 27 03:23:48.134832 kubelet[2550]: I0527 03:23:48.134267 2550 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 27 03:23:48.139267 kubelet[2550]: I0527 03:23:48.139221 2550 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 27 03:23:48.141576 kubelet[2550]: I0527 03:23:48.141562 2550 server.go:449] "Adding debug handlers to kubelet server" May 27 03:23:48.143933 kubelet[2550]: E0527 03:23:48.140111 2550 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://157.180.65.55:6443/api/v1/namespaces/default/events\": dial tcp 157.180.65.55:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4344-0-0-e-876c439243.18434458b4e63467 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4344-0-0-e-876c439243,UID:ci-4344-0-0-e-876c439243,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4344-0-0-e-876c439243,},FirstTimestamp:2025-05-27 03:23:48.132680807 +0000 UTC m=+0.778465509,LastTimestamp:2025-05-27 03:23:48.132680807 +0000 UTC m=+0.778465509,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4344-0-0-e-876c439243,}" May 27 03:23:48.146321 kubelet[2550]: I0527 03:23:48.144533 2550 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 27 03:23:48.146321 kubelet[2550]: I0527 03:23:48.145993 2550 volume_manager.go:289] "Starting Kubelet Volume Manager" May 27 03:23:48.146321 kubelet[2550]: I0527 03:23:48.146179 2550 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 27 03:23:48.146321 kubelet[2550]: E0527 03:23:48.146273 2550 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4344-0-0-e-876c439243\" not found" May 27 03:23:48.146549 kubelet[2550]: I0527 03:23:48.146534 2550 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 27 03:23:48.151358 kubelet[2550]: E0527 03:23:48.151270 2550 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://157.180.65.55:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344-0-0-e-876c439243?timeout=10s\": dial tcp 157.180.65.55:6443: connect: connection refused" interval="200ms" May 27 03:23:48.154433 kubelet[2550]: I0527 03:23:48.154180 2550 reconciler.go:26] "Reconciler: start to sync state" May 27 03:23:48.154433 kubelet[2550]: I0527 03:23:48.154352 2550 
desired_state_of_world_populator.go:147] "Desired state populator starts to run" May 27 03:23:48.155979 kubelet[2550]: W0527 03:23:48.155920 2550 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://157.180.65.55:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 157.180.65.55:6443: connect: connection refused May 27 03:23:48.156039 kubelet[2550]: E0527 03:23:48.155983 2550 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://157.180.65.55:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 157.180.65.55:6443: connect: connection refused" logger="UnhandledError" May 27 03:23:48.158043 kubelet[2550]: I0527 03:23:48.158010 2550 factory.go:221] Registration of the systemd container factory successfully May 27 03:23:48.159578 kubelet[2550]: I0527 03:23:48.159466 2550 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 27 03:23:48.161287 kubelet[2550]: I0527 03:23:48.161263 2550 factory.go:221] Registration of the containerd container factory successfully May 27 03:23:48.162388 kubelet[2550]: E0527 03:23:48.162265 2550 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 27 03:23:48.165992 kubelet[2550]: I0527 03:23:48.165891 2550 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 27 03:23:48.166949 kubelet[2550]: I0527 03:23:48.166926 2550 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 27 03:23:48.166949 kubelet[2550]: I0527 03:23:48.166950 2550 status_manager.go:217] "Starting to sync pod status with apiserver" May 27 03:23:48.167009 kubelet[2550]: I0527 03:23:48.166970 2550 kubelet.go:2321] "Starting kubelet main sync loop" May 27 03:23:48.167028 kubelet[2550]: E0527 03:23:48.167003 2550 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 27 03:23:48.171992 kubelet[2550]: W0527 03:23:48.171940 2550 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://157.180.65.55:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 157.180.65.55:6443: connect: connection refused May 27 03:23:48.171992 kubelet[2550]: E0527 03:23:48.171986 2550 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://157.180.65.55:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 157.180.65.55:6443: connect: connection refused" logger="UnhandledError" May 27 03:23:48.187046 kubelet[2550]: I0527 03:23:48.187006 2550 cpu_manager.go:214] "Starting CPU manager" policy="none" May 27 03:23:48.187046 kubelet[2550]: I0527 03:23:48.187040 2550 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 27 03:23:48.187046 kubelet[2550]: I0527 03:23:48.187053 2550 state_mem.go:36] "Initialized new in-memory state store" May 27 03:23:48.189390 kubelet[2550]: I0527 03:23:48.189372 2550 policy_none.go:49] "None policy: Start" May 27 03:23:48.189841 kubelet[2550]: I0527 03:23:48.189824 2550 memory_manager.go:170] "Starting memorymanager" policy="None" May 27 03:23:48.189841 kubelet[2550]: I0527 03:23:48.189841 2550 state_mem.go:35] "Initializing new in-memory state store" May 27 03:23:48.202427 systemd[1]: Created slice kubepods.slice - 
libcontainer container kubepods.slice. May 27 03:23:48.218197 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 27 03:23:48.221725 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 27 03:23:48.229326 kubelet[2550]: I0527 03:23:48.229277 2550 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 27 03:23:48.230154 kubelet[2550]: I0527 03:23:48.230045 2550 eviction_manager.go:189] "Eviction manager: starting control loop" May 27 03:23:48.231370 kubelet[2550]: I0527 03:23:48.230246 2550 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 27 03:23:48.231370 kubelet[2550]: I0527 03:23:48.230840 2550 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 27 03:23:48.233854 kubelet[2550]: E0527 03:23:48.233816 2550 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4344-0-0-e-876c439243\" not found" May 27 03:23:48.296581 systemd[1]: Created slice kubepods-burstable-poda2f63f32d5be5898d941acc7d3001772.slice - libcontainer container kubepods-burstable-poda2f63f32d5be5898d941acc7d3001772.slice. May 27 03:23:48.318123 systemd[1]: Created slice kubepods-burstable-pod790d5218cc4954efd5205153c6b2d4a4.slice - libcontainer container kubepods-burstable-pod790d5218cc4954efd5205153c6b2d4a4.slice. May 27 03:23:48.325062 systemd[1]: Created slice kubepods-burstable-pod3524bf4447a75dd2615c142cb08e7478.slice - libcontainer container kubepods-burstable-pod3524bf4447a75dd2615c142cb08e7478.slice. 
May 27 03:23:48.333799 kubelet[2550]: I0527 03:23:48.333477 2550 kubelet_node_status.go:72] "Attempting to register node" node="ci-4344-0-0-e-876c439243" May 27 03:23:48.334075 kubelet[2550]: E0527 03:23:48.334031 2550 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://157.180.65.55:6443/api/v1/nodes\": dial tcp 157.180.65.55:6443: connect: connection refused" node="ci-4344-0-0-e-876c439243" May 27 03:23:48.352833 kubelet[2550]: E0527 03:23:48.352751 2550 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://157.180.65.55:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344-0-0-e-876c439243?timeout=10s\": dial tcp 157.180.65.55:6443: connect: connection refused" interval="400ms" May 27 03:23:48.456369 kubelet[2550]: I0527 03:23:48.456168 2550 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a2f63f32d5be5898d941acc7d3001772-flexvolume-dir\") pod \"kube-controller-manager-ci-4344-0-0-e-876c439243\" (UID: \"a2f63f32d5be5898d941acc7d3001772\") " pod="kube-system/kube-controller-manager-ci-4344-0-0-e-876c439243" May 27 03:23:48.456369 kubelet[2550]: I0527 03:23:48.456263 2550 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a2f63f32d5be5898d941acc7d3001772-kubeconfig\") pod \"kube-controller-manager-ci-4344-0-0-e-876c439243\" (UID: \"a2f63f32d5be5898d941acc7d3001772\") " pod="kube-system/kube-controller-manager-ci-4344-0-0-e-876c439243" May 27 03:23:48.456369 kubelet[2550]: I0527 03:23:48.456355 2550 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3524bf4447a75dd2615c142cb08e7478-kubeconfig\") pod \"kube-scheduler-ci-4344-0-0-e-876c439243\" (UID: \"3524bf4447a75dd2615c142cb08e7478\") 
" pod="kube-system/kube-scheduler-ci-4344-0-0-e-876c439243" May 27 03:23:48.457181 kubelet[2550]: I0527 03:23:48.456400 2550 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/790d5218cc4954efd5205153c6b2d4a4-ca-certs\") pod \"kube-apiserver-ci-4344-0-0-e-876c439243\" (UID: \"790d5218cc4954efd5205153c6b2d4a4\") " pod="kube-system/kube-apiserver-ci-4344-0-0-e-876c439243" May 27 03:23:48.457181 kubelet[2550]: I0527 03:23:48.456438 2550 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a2f63f32d5be5898d941acc7d3001772-ca-certs\") pod \"kube-controller-manager-ci-4344-0-0-e-876c439243\" (UID: \"a2f63f32d5be5898d941acc7d3001772\") " pod="kube-system/kube-controller-manager-ci-4344-0-0-e-876c439243" May 27 03:23:48.457181 kubelet[2550]: I0527 03:23:48.456475 2550 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a2f63f32d5be5898d941acc7d3001772-k8s-certs\") pod \"kube-controller-manager-ci-4344-0-0-e-876c439243\" (UID: \"a2f63f32d5be5898d941acc7d3001772\") " pod="kube-system/kube-controller-manager-ci-4344-0-0-e-876c439243" May 27 03:23:48.457181 kubelet[2550]: I0527 03:23:48.456513 2550 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a2f63f32d5be5898d941acc7d3001772-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4344-0-0-e-876c439243\" (UID: \"a2f63f32d5be5898d941acc7d3001772\") " pod="kube-system/kube-controller-manager-ci-4344-0-0-e-876c439243" May 27 03:23:48.457181 kubelet[2550]: I0527 03:23:48.456553 2550 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/790d5218cc4954efd5205153c6b2d4a4-k8s-certs\") pod \"kube-apiserver-ci-4344-0-0-e-876c439243\" (UID: \"790d5218cc4954efd5205153c6b2d4a4\") " pod="kube-system/kube-apiserver-ci-4344-0-0-e-876c439243" May 27 03:23:48.457483 kubelet[2550]: I0527 03:23:48.456594 2550 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/790d5218cc4954efd5205153c6b2d4a4-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4344-0-0-e-876c439243\" (UID: \"790d5218cc4954efd5205153c6b2d4a4\") " pod="kube-system/kube-apiserver-ci-4344-0-0-e-876c439243" May 27 03:23:48.536982 kubelet[2550]: I0527 03:23:48.536939 2550 kubelet_node_status.go:72] "Attempting to register node" node="ci-4344-0-0-e-876c439243" May 27 03:23:48.537599 kubelet[2550]: E0527 03:23:48.537556 2550 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://157.180.65.55:6443/api/v1/nodes\": dial tcp 157.180.65.55:6443: connect: connection refused" node="ci-4344-0-0-e-876c439243" May 27 03:23:48.615016 containerd[1560]: time="2025-05-27T03:23:48.614903527Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4344-0-0-e-876c439243,Uid:a2f63f32d5be5898d941acc7d3001772,Namespace:kube-system,Attempt:0,}" May 27 03:23:48.627734 containerd[1560]: time="2025-05-27T03:23:48.627678660Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4344-0-0-e-876c439243,Uid:790d5218cc4954efd5205153c6b2d4a4,Namespace:kube-system,Attempt:0,}" May 27 03:23:48.628681 containerd[1560]: time="2025-05-27T03:23:48.628546269Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4344-0-0-e-876c439243,Uid:3524bf4447a75dd2615c142cb08e7478,Namespace:kube-system,Attempt:0,}" May 27 03:23:48.737743 containerd[1560]: time="2025-05-27T03:23:48.737660725Z" level=info msg="connecting to shim 
77c3ce4fbd82588ca97a1c4c80006b546232f8c1aa240e09bde88a3f4fd7108a" address="unix:///run/containerd/s/d8050da4f413ed34d20727aa25dde3886b9cb57d4342a477ce28266d9c56173b" namespace=k8s.io protocol=ttrpc version=3 May 27 03:23:48.738267 containerd[1560]: time="2025-05-27T03:23:48.738240816Z" level=info msg="connecting to shim 36ee42ca221c418cae2323cb96fdcba3a24c205318c6b0294b5fd6ff2d315ec0" address="unix:///run/containerd/s/04d21bb9c25765254ed003929cd21a58370394b98c7523fec908880503f66532" namespace=k8s.io protocol=ttrpc version=3 May 27 03:23:48.753494 kubelet[2550]: E0527 03:23:48.753429 2550 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://157.180.65.55:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344-0-0-e-876c439243?timeout=10s\": dial tcp 157.180.65.55:6443: connect: connection refused" interval="800ms" May 27 03:23:48.755021 containerd[1560]: time="2025-05-27T03:23:48.754925223Z" level=info msg="connecting to shim a3e3014fadbab106ddb47608e8bfd87d94c6c669c6c718507f314f1e3fb803fa" address="unix:///run/containerd/s/f38018bb9e1c00c95dd49bfe4c5b31895ee6122aa316af913a54f424357c939f" namespace=k8s.io protocol=ttrpc version=3 May 27 03:23:48.830823 systemd[1]: Started cri-containerd-36ee42ca221c418cae2323cb96fdcba3a24c205318c6b0294b5fd6ff2d315ec0.scope - libcontainer container 36ee42ca221c418cae2323cb96fdcba3a24c205318c6b0294b5fd6ff2d315ec0. May 27 03:23:48.832479 systemd[1]: Started cri-containerd-77c3ce4fbd82588ca97a1c4c80006b546232f8c1aa240e09bde88a3f4fd7108a.scope - libcontainer container 77c3ce4fbd82588ca97a1c4c80006b546232f8c1aa240e09bde88a3f4fd7108a. May 27 03:23:48.837055 systemd[1]: Started cri-containerd-a3e3014fadbab106ddb47608e8bfd87d94c6c669c6c718507f314f1e3fb803fa.scope - libcontainer container a3e3014fadbab106ddb47608e8bfd87d94c6c669c6c718507f314f1e3fb803fa. 
May 27 03:23:48.908563 containerd[1560]: time="2025-05-27T03:23:48.908526614Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4344-0-0-e-876c439243,Uid:790d5218cc4954efd5205153c6b2d4a4,Namespace:kube-system,Attempt:0,} returns sandbox id \"77c3ce4fbd82588ca97a1c4c80006b546232f8c1aa240e09bde88a3f4fd7108a\"" May 27 03:23:48.912701 containerd[1560]: time="2025-05-27T03:23:48.912422983Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4344-0-0-e-876c439243,Uid:3524bf4447a75dd2615c142cb08e7478,Namespace:kube-system,Attempt:0,} returns sandbox id \"36ee42ca221c418cae2323cb96fdcba3a24c205318c6b0294b5fd6ff2d315ec0\"" May 27 03:23:48.913658 containerd[1560]: time="2025-05-27T03:23:48.913634603Z" level=info msg="CreateContainer within sandbox \"77c3ce4fbd82588ca97a1c4c80006b546232f8c1aa240e09bde88a3f4fd7108a\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 27 03:23:48.920322 containerd[1560]: time="2025-05-27T03:23:48.920074178Z" level=info msg="CreateContainer within sandbox \"36ee42ca221c418cae2323cb96fdcba3a24c205318c6b0294b5fd6ff2d315ec0\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 27 03:23:48.926579 containerd[1560]: time="2025-05-27T03:23:48.926546421Z" level=info msg="Container 903f789bd31d396056a501ba51338aa381bdaeceed50cdcd494fbd438ed1c16e: CDI devices from CRI Config.CDIDevices: []" May 27 03:23:48.930261 containerd[1560]: time="2025-05-27T03:23:48.930155475Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4344-0-0-e-876c439243,Uid:a2f63f32d5be5898d941acc7d3001772,Namespace:kube-system,Attempt:0,} returns sandbox id \"a3e3014fadbab106ddb47608e8bfd87d94c6c669c6c718507f314f1e3fb803fa\"" May 27 03:23:48.931446 containerd[1560]: time="2025-05-27T03:23:48.931426686Z" level=info msg="Container 389cdffc1ab0eedf86b2f748186cff29bb3b1b73855795f5c53aec09b414f52b: CDI devices from CRI Config.CDIDevices: []" May 27 03:23:48.933737 
containerd[1560]: time="2025-05-27T03:23:48.933693936Z" level=info msg="CreateContainer within sandbox \"a3e3014fadbab106ddb47608e8bfd87d94c6c669c6c718507f314f1e3fb803fa\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 27 03:23:48.935245 containerd[1560]: time="2025-05-27T03:23:48.935216848Z" level=info msg="CreateContainer within sandbox \"77c3ce4fbd82588ca97a1c4c80006b546232f8c1aa240e09bde88a3f4fd7108a\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"903f789bd31d396056a501ba51338aa381bdaeceed50cdcd494fbd438ed1c16e\"" May 27 03:23:48.935857 containerd[1560]: time="2025-05-27T03:23:48.935842825Z" level=info msg="StartContainer for \"903f789bd31d396056a501ba51338aa381bdaeceed50cdcd494fbd438ed1c16e\"" May 27 03:23:48.938285 containerd[1560]: time="2025-05-27T03:23:48.938262880Z" level=info msg="connecting to shim 903f789bd31d396056a501ba51338aa381bdaeceed50cdcd494fbd438ed1c16e" address="unix:///run/containerd/s/d8050da4f413ed34d20727aa25dde3886b9cb57d4342a477ce28266d9c56173b" protocol=ttrpc version=3 May 27 03:23:48.939891 containerd[1560]: time="2025-05-27T03:23:48.939856504Z" level=info msg="CreateContainer within sandbox \"36ee42ca221c418cae2323cb96fdcba3a24c205318c6b0294b5fd6ff2d315ec0\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"389cdffc1ab0eedf86b2f748186cff29bb3b1b73855795f5c53aec09b414f52b\"" May 27 03:23:48.940311 kubelet[2550]: I0527 03:23:48.940246 2550 kubelet_node_status.go:72] "Attempting to register node" node="ci-4344-0-0-e-876c439243" May 27 03:23:48.940451 containerd[1560]: time="2025-05-27T03:23:48.940368749Z" level=info msg="StartContainer for \"389cdffc1ab0eedf86b2f748186cff29bb3b1b73855795f5c53aec09b414f52b\"" May 27 03:23:48.941107 containerd[1560]: time="2025-05-27T03:23:48.941087440Z" level=info msg="connecting to shim 389cdffc1ab0eedf86b2f748186cff29bb3b1b73855795f5c53aec09b414f52b" 
address="unix:///run/containerd/s/04d21bb9c25765254ed003929cd21a58370394b98c7523fec908880503f66532" protocol=ttrpc version=3 May 27 03:23:48.941674 kubelet[2550]: E0527 03:23:48.941649 2550 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://157.180.65.55:6443/api/v1/nodes\": dial tcp 157.180.65.55:6443: connect: connection refused" node="ci-4344-0-0-e-876c439243" May 27 03:23:48.945833 containerd[1560]: time="2025-05-27T03:23:48.945779794Z" level=info msg="Container 33d7a427bc912c1d8108d17f503f8a08ba85d7bf96901a9d7952ed252cd738f6: CDI devices from CRI Config.CDIDevices: []" May 27 03:23:48.956228 containerd[1560]: time="2025-05-27T03:23:48.956178744Z" level=info msg="CreateContainer within sandbox \"a3e3014fadbab106ddb47608e8bfd87d94c6c669c6c718507f314f1e3fb803fa\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"33d7a427bc912c1d8108d17f503f8a08ba85d7bf96901a9d7952ed252cd738f6\"" May 27 03:23:48.957689 containerd[1560]: time="2025-05-27T03:23:48.956673988Z" level=info msg="StartContainer for \"33d7a427bc912c1d8108d17f503f8a08ba85d7bf96901a9d7952ed252cd738f6\"" May 27 03:23:48.957689 containerd[1560]: time="2025-05-27T03:23:48.957603682Z" level=info msg="connecting to shim 33d7a427bc912c1d8108d17f503f8a08ba85d7bf96901a9d7952ed252cd738f6" address="unix:///run/containerd/s/f38018bb9e1c00c95dd49bfe4c5b31895ee6122aa316af913a54f424357c939f" protocol=ttrpc version=3 May 27 03:23:48.959561 systemd[1]: Started cri-containerd-389cdffc1ab0eedf86b2f748186cff29bb3b1b73855795f5c53aec09b414f52b.scope - libcontainer container 389cdffc1ab0eedf86b2f748186cff29bb3b1b73855795f5c53aec09b414f52b. May 27 03:23:48.969799 systemd[1]: Started cri-containerd-903f789bd31d396056a501ba51338aa381bdaeceed50cdcd494fbd438ed1c16e.scope - libcontainer container 903f789bd31d396056a501ba51338aa381bdaeceed50cdcd494fbd438ed1c16e. 
May 27 03:23:48.987472 systemd[1]: Started cri-containerd-33d7a427bc912c1d8108d17f503f8a08ba85d7bf96901a9d7952ed252cd738f6.scope - libcontainer container 33d7a427bc912c1d8108d17f503f8a08ba85d7bf96901a9d7952ed252cd738f6. May 27 03:23:49.032351 kubelet[2550]: W0527 03:23:49.032262 2550 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://157.180.65.55:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 157.180.65.55:6443: connect: connection refused May 27 03:23:49.032542 kubelet[2550]: E0527 03:23:49.032523 2550 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://157.180.65.55:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 157.180.65.55:6443: connect: connection refused" logger="UnhandledError" May 27 03:23:49.055485 containerd[1560]: time="2025-05-27T03:23:49.055440701Z" level=info msg="StartContainer for \"389cdffc1ab0eedf86b2f748186cff29bb3b1b73855795f5c53aec09b414f52b\" returns successfully" May 27 03:23:49.059719 containerd[1560]: time="2025-05-27T03:23:49.059675974Z" level=info msg="StartContainer for \"903f789bd31d396056a501ba51338aa381bdaeceed50cdcd494fbd438ed1c16e\" returns successfully" May 27 03:23:49.066346 kubelet[2550]: W0527 03:23:49.066052 2550 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://157.180.65.55:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 157.180.65.55:6443: connect: connection refused May 27 03:23:49.066750 kubelet[2550]: E0527 03:23:49.066676 2550 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://157.180.65.55:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 
157.180.65.55:6443: connect: connection refused" logger="UnhandledError" May 27 03:23:49.076590 containerd[1560]: time="2025-05-27T03:23:49.076557104Z" level=info msg="StartContainer for \"33d7a427bc912c1d8108d17f503f8a08ba85d7bf96901a9d7952ed252cd738f6\" returns successfully" May 27 03:23:49.111227 kubelet[2550]: W0527 03:23:49.111157 2550 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://157.180.65.55:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 157.180.65.55:6443: connect: connection refused May 27 03:23:49.111227 kubelet[2550]: E0527 03:23:49.111230 2550 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://157.180.65.55:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 157.180.65.55:6443: connect: connection refused" logger="UnhandledError" May 27 03:23:49.744742 kubelet[2550]: I0527 03:23:49.744708 2550 kubelet_node_status.go:72] "Attempting to register node" node="ci-4344-0-0-e-876c439243" May 27 03:23:50.771332 kubelet[2550]: E0527 03:23:50.771269 2550 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4344-0-0-e-876c439243\" not found" node="ci-4344-0-0-e-876c439243" May 27 03:23:50.862261 kubelet[2550]: I0527 03:23:50.862193 2550 kubelet_node_status.go:75] "Successfully registered node" node="ci-4344-0-0-e-876c439243" May 27 03:23:51.125689 kubelet[2550]: I0527 03:23:51.124964 2550 apiserver.go:52] "Watching apiserver" May 27 03:23:51.154986 kubelet[2550]: I0527 03:23:51.154872 2550 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" May 27 03:23:53.307477 systemd[1]: Reload requested from client PID 2822 ('systemctl') (unit session-7.scope)... May 27 03:23:53.307499 systemd[1]: Reloading... 
May 27 03:23:53.394372 zram_generator::config[2862]: No configuration found. May 27 03:23:53.498703 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 27 03:23:53.647823 systemd[1]: Reloading finished in 339 ms. May 27 03:23:53.673719 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 27 03:23:53.681362 systemd[1]: kubelet.service: Deactivated successfully. May 27 03:23:53.681594 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 27 03:23:53.681646 systemd[1]: kubelet.service: Consumed 1.242s CPU time, 130.3M memory peak. May 27 03:23:53.684618 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 03:23:53.876518 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 27 03:23:53.882555 (kubelet)[2917]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 27 03:23:53.949384 kubelet[2917]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 27 03:23:53.949384 kubelet[2917]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 27 03:23:53.949384 kubelet[2917]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 27 03:23:53.949866 kubelet[2917]: I0527 03:23:53.949501 2917 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 27 03:23:53.957430 kubelet[2917]: I0527 03:23:53.957388 2917 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" May 27 03:23:53.957430 kubelet[2917]: I0527 03:23:53.957417 2917 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 27 03:23:53.957674 kubelet[2917]: I0527 03:23:53.957662 2917 server.go:934] "Client rotation is on, will bootstrap in background" May 27 03:23:53.959214 kubelet[2917]: I0527 03:23:53.958907 2917 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 27 03:23:53.972920 kubelet[2917]: I0527 03:23:53.972875 2917 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 27 03:23:53.977780 kubelet[2917]: I0527 03:23:53.977761 2917 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 27 03:23:53.981358 kubelet[2917]: I0527 03:23:53.981343 2917 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 27 03:23:53.981556 kubelet[2917]: I0527 03:23:53.981515 2917 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 27 03:23:53.981742 kubelet[2917]: I0527 03:23:53.981720 2917 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 27 03:23:53.982085 kubelet[2917]: I0527 03:23:53.981798 2917 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4344-0-0-e-876c439243","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","Topolog
yManagerPolicyOptions":null,"CgroupVersion":2} May 27 03:23:53.982225 kubelet[2917]: I0527 03:23:53.982216 2917 topology_manager.go:138] "Creating topology manager with none policy" May 27 03:23:53.982276 kubelet[2917]: I0527 03:23:53.982270 2917 container_manager_linux.go:300] "Creating device plugin manager" May 27 03:23:53.982361 kubelet[2917]: I0527 03:23:53.982353 2917 state_mem.go:36] "Initialized new in-memory state store" May 27 03:23:53.982523 kubelet[2917]: I0527 03:23:53.982512 2917 kubelet.go:408] "Attempting to sync node with API server" May 27 03:23:53.982651 kubelet[2917]: I0527 03:23:53.982594 2917 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 27 03:23:53.982735 kubelet[2917]: I0527 03:23:53.982727 2917 kubelet.go:314] "Adding apiserver pod source" May 27 03:23:53.982792 kubelet[2917]: I0527 03:23:53.982785 2917 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 27 03:23:53.991596 kubelet[2917]: I0527 03:23:53.991563 2917 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 27 03:23:53.992126 kubelet[2917]: I0527 03:23:53.992111 2917 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 27 03:23:53.994683 kubelet[2917]: I0527 03:23:53.994669 2917 server.go:1274] "Started kubelet" May 27 03:23:53.998466 kubelet[2917]: I0527 03:23:53.998453 2917 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 27 03:23:54.006518 kubelet[2917]: I0527 03:23:54.006488 2917 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 27 03:23:54.011726 kubelet[2917]: I0527 03:23:54.007267 2917 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 27 03:23:54.012003 kubelet[2917]: I0527 03:23:54.007637 2917 dynamic_serving_content.go:135] "Starting controller" 
name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 27 03:23:54.012045 kubelet[2917]: I0527 03:23:54.008935 2917 volume_manager.go:289] "Starting Kubelet Volume Manager" May 27 03:23:54.013060 kubelet[2917]: I0527 03:23:54.012563 2917 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 27 03:23:54.013060 kubelet[2917]: E0527 03:23:54.009056 2917 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4344-0-0-e-876c439243\" not found" May 27 03:23:54.013060 kubelet[2917]: I0527 03:23:54.008947 2917 desired_state_of_world_populator.go:147] "Desired state populator starts to run" May 27 03:23:54.013675 kubelet[2917]: I0527 03:23:54.013648 2917 server.go:449] "Adding debug handlers to kubelet server" May 27 03:23:54.014291 kubelet[2917]: I0527 03:23:54.014273 2917 reconciler.go:26] "Reconciler: start to sync state" May 27 03:23:54.015339 kubelet[2917]: E0527 03:23:54.015282 2917 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 27 03:23:54.017059 kubelet[2917]: I0527 03:23:54.016983 2917 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 27 03:23:54.017342 kubelet[2917]: I0527 03:23:54.017099 2917 factory.go:221] Registration of the containerd container factory successfully May 27 03:23:54.017342 kubelet[2917]: I0527 03:23:54.017343 2917 factory.go:221] Registration of the systemd container factory successfully May 27 03:23:54.018139 kubelet[2917]: I0527 03:23:54.018128 2917 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 27 03:23:54.018195 kubelet[2917]: I0527 03:23:54.018189 2917 status_manager.go:217] "Starting to sync pod status with apiserver" May 27 03:23:54.018272 kubelet[2917]: I0527 03:23:54.018266 2917 kubelet.go:2321] "Starting kubelet main sync loop" May 27 03:23:54.018508 kubelet[2917]: E0527 03:23:54.018489 2917 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 27 03:23:54.020730 kubelet[2917]: I0527 03:23:54.018169 2917 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 27 03:23:54.052103 kubelet[2917]: I0527 03:23:54.051861 2917 cpu_manager.go:214] "Starting CPU manager" policy="none" May 27 03:23:54.052103 kubelet[2917]: I0527 03:23:54.051876 2917 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 27 03:23:54.052103 kubelet[2917]: I0527 03:23:54.051891 2917 state_mem.go:36] "Initialized new in-memory state store" May 27 03:23:54.052103 kubelet[2917]: I0527 03:23:54.052030 2917 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 27 03:23:54.052103 kubelet[2917]: I0527 03:23:54.052040 2917 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 27 03:23:54.052103 kubelet[2917]: I0527 03:23:54.052057 2917 policy_none.go:49] "None policy: Start" May 27 03:23:54.052904 kubelet[2917]: I0527 03:23:54.052871 2917 memory_manager.go:170] "Starting memorymanager" policy="None" May 27 03:23:54.052954 kubelet[2917]: I0527 03:23:54.052910 2917 state_mem.go:35] "Initializing new in-memory state store" May 27 03:23:54.053137 kubelet[2917]: I0527 03:23:54.053115 2917 state_mem.go:75] "Updated machine memory state" May 27 03:23:54.057454 kubelet[2917]: I0527 03:23:54.057422 2917 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" 
err="checkpoint is not found" May 27 03:23:54.057585 kubelet[2917]: I0527 03:23:54.057560 2917 eviction_manager.go:189] "Eviction manager: starting control loop" May 27 03:23:54.057612 kubelet[2917]: I0527 03:23:54.057589 2917 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 27 03:23:54.058161 kubelet[2917]: I0527 03:23:54.058021 2917 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 27 03:23:54.130624 kubelet[2917]: E0527 03:23:54.130555 2917 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4344-0-0-e-876c439243\" already exists" pod="kube-system/kube-scheduler-ci-4344-0-0-e-876c439243" May 27 03:23:54.130859 kubelet[2917]: E0527 03:23:54.130724 2917 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4344-0-0-e-876c439243\" already exists" pod="kube-system/kube-controller-manager-ci-4344-0-0-e-876c439243" May 27 03:23:54.162727 kubelet[2917]: I0527 03:23:54.162643 2917 kubelet_node_status.go:72] "Attempting to register node" node="ci-4344-0-0-e-876c439243" May 27 03:23:54.173971 kubelet[2917]: I0527 03:23:54.173896 2917 kubelet_node_status.go:111] "Node was previously registered" node="ci-4344-0-0-e-876c439243" May 27 03:23:54.174398 kubelet[2917]: I0527 03:23:54.174018 2917 kubelet_node_status.go:75] "Successfully registered node" node="ci-4344-0-0-e-876c439243" May 27 03:23:54.216551 kubelet[2917]: I0527 03:23:54.216079 2917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a2f63f32d5be5898d941acc7d3001772-ca-certs\") pod \"kube-controller-manager-ci-4344-0-0-e-876c439243\" (UID: \"a2f63f32d5be5898d941acc7d3001772\") " pod="kube-system/kube-controller-manager-ci-4344-0-0-e-876c439243" May 27 03:23:54.216551 kubelet[2917]: I0527 03:23:54.216138 2917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a2f63f32d5be5898d941acc7d3001772-k8s-certs\") pod \"kube-controller-manager-ci-4344-0-0-e-876c439243\" (UID: \"a2f63f32d5be5898d941acc7d3001772\") " pod="kube-system/kube-controller-manager-ci-4344-0-0-e-876c439243" May 27 03:23:54.216551 kubelet[2917]: I0527 03:23:54.216168 2917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a2f63f32d5be5898d941acc7d3001772-kubeconfig\") pod \"kube-controller-manager-ci-4344-0-0-e-876c439243\" (UID: \"a2f63f32d5be5898d941acc7d3001772\") " pod="kube-system/kube-controller-manager-ci-4344-0-0-e-876c439243" May 27 03:23:54.216551 kubelet[2917]: I0527 03:23:54.216202 2917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/790d5218cc4954efd5205153c6b2d4a4-k8s-certs\") pod \"kube-apiserver-ci-4344-0-0-e-876c439243\" (UID: \"790d5218cc4954efd5205153c6b2d4a4\") " pod="kube-system/kube-apiserver-ci-4344-0-0-e-876c439243" May 27 03:23:54.216551 kubelet[2917]: I0527 03:23:54.216241 2917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/790d5218cc4954efd5205153c6b2d4a4-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4344-0-0-e-876c439243\" (UID: \"790d5218cc4954efd5205153c6b2d4a4\") " pod="kube-system/kube-apiserver-ci-4344-0-0-e-876c439243" May 27 03:23:54.216917 kubelet[2917]: I0527 03:23:54.216276 2917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a2f63f32d5be5898d941acc7d3001772-flexvolume-dir\") pod \"kube-controller-manager-ci-4344-0-0-e-876c439243\" (UID: \"a2f63f32d5be5898d941acc7d3001772\") " 
pod="kube-system/kube-controller-manager-ci-4344-0-0-e-876c439243" May 27 03:23:54.216917 kubelet[2917]: I0527 03:23:54.216342 2917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a2f63f32d5be5898d941acc7d3001772-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4344-0-0-e-876c439243\" (UID: \"a2f63f32d5be5898d941acc7d3001772\") " pod="kube-system/kube-controller-manager-ci-4344-0-0-e-876c439243" May 27 03:23:54.216917 kubelet[2917]: I0527 03:23:54.216376 2917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3524bf4447a75dd2615c142cb08e7478-kubeconfig\") pod \"kube-scheduler-ci-4344-0-0-e-876c439243\" (UID: \"3524bf4447a75dd2615c142cb08e7478\") " pod="kube-system/kube-scheduler-ci-4344-0-0-e-876c439243" May 27 03:23:54.216917 kubelet[2917]: I0527 03:23:54.216417 2917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/790d5218cc4954efd5205153c6b2d4a4-ca-certs\") pod \"kube-apiserver-ci-4344-0-0-e-876c439243\" (UID: \"790d5218cc4954efd5205153c6b2d4a4\") " pod="kube-system/kube-apiserver-ci-4344-0-0-e-876c439243" May 27 03:23:54.984083 kubelet[2917]: I0527 03:23:54.984029 2917 apiserver.go:52] "Watching apiserver" May 27 03:23:55.013481 kubelet[2917]: I0527 03:23:55.013432 2917 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" May 27 03:23:55.047323 kubelet[2917]: E0527 03:23:55.046701 2917 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4344-0-0-e-876c439243\" already exists" pod="kube-system/kube-scheduler-ci-4344-0-0-e-876c439243" May 27 03:23:55.062651 kubelet[2917]: I0527 03:23:55.062475 2917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-apiserver-ci-4344-0-0-e-876c439243" podStartSLOduration=1.062429189 podStartE2EDuration="1.062429189s" podCreationTimestamp="2025-05-27 03:23:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 03:23:55.062215429 +0000 UTC m=+1.174016067" watchObservedRunningTime="2025-05-27 03:23:55.062429189 +0000 UTC m=+1.174229827" May 27 03:23:55.077989 kubelet[2917]: I0527 03:23:55.077922 2917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4344-0-0-e-876c439243" podStartSLOduration=2.077900012 podStartE2EDuration="2.077900012s" podCreationTimestamp="2025-05-27 03:23:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 03:23:55.07015366 +0000 UTC m=+1.181954298" watchObservedRunningTime="2025-05-27 03:23:55.077900012 +0000 UTC m=+1.189700650" May 27 03:23:55.086725 kubelet[2917]: I0527 03:23:55.086653 2917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4344-0-0-e-876c439243" podStartSLOduration=3.086629531 podStartE2EDuration="3.086629531s" podCreationTimestamp="2025-05-27 03:23:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 03:23:55.078408062 +0000 UTC m=+1.190208700" watchObservedRunningTime="2025-05-27 03:23:55.086629531 +0000 UTC m=+1.198430169" May 27 03:23:59.515966 kubelet[2917]: I0527 03:23:59.515908 2917 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 27 03:23:59.518375 containerd[1560]: time="2025-05-27T03:23:59.517057396Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
May 27 03:23:59.519018 kubelet[2917]: I0527 03:23:59.518869 2917 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 27 03:23:59.751293 systemd[1]: Created slice kubepods-besteffort-pod7bb096f5_9bb4_431e_ada7_1b094e1c9059.slice - libcontainer container kubepods-besteffort-pod7bb096f5_9bb4_431e_ada7_1b094e1c9059.slice. May 27 03:23:59.752060 kubelet[2917]: I0527 03:23:59.751741 2917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7bb096f5-9bb4-431e-ada7-1b094e1c9059-xtables-lock\") pod \"kube-proxy-6gm7b\" (UID: \"7bb096f5-9bb4-431e-ada7-1b094e1c9059\") " pod="kube-system/kube-proxy-6gm7b" May 27 03:23:59.752060 kubelet[2917]: I0527 03:23:59.751778 2917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fs8w\" (UniqueName: \"kubernetes.io/projected/7bb096f5-9bb4-431e-ada7-1b094e1c9059-kube-api-access-5fs8w\") pod \"kube-proxy-6gm7b\" (UID: \"7bb096f5-9bb4-431e-ada7-1b094e1c9059\") " pod="kube-system/kube-proxy-6gm7b" May 27 03:23:59.752060 kubelet[2917]: I0527 03:23:59.751805 2917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7bb096f5-9bb4-431e-ada7-1b094e1c9059-kube-proxy\") pod \"kube-proxy-6gm7b\" (UID: \"7bb096f5-9bb4-431e-ada7-1b094e1c9059\") " pod="kube-system/kube-proxy-6gm7b" May 27 03:23:59.752060 kubelet[2917]: I0527 03:23:59.751825 2917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7bb096f5-9bb4-431e-ada7-1b094e1c9059-lib-modules\") pod \"kube-proxy-6gm7b\" (UID: \"7bb096f5-9bb4-431e-ada7-1b094e1c9059\") " pod="kube-system/kube-proxy-6gm7b" May 27 03:23:59.860576 kubelet[2917]: E0527 03:23:59.860322 2917 projected.go:288] Couldn't get configMap 
kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found May 27 03:23:59.860576 kubelet[2917]: E0527 03:23:59.860356 2917 projected.go:194] Error preparing data for projected volume kube-api-access-5fs8w for pod kube-system/kube-proxy-6gm7b: configmap "kube-root-ca.crt" not found May 27 03:23:59.860576 kubelet[2917]: E0527 03:23:59.860423 2917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7bb096f5-9bb4-431e-ada7-1b094e1c9059-kube-api-access-5fs8w podName:7bb096f5-9bb4-431e-ada7-1b094e1c9059 nodeName:}" failed. No retries permitted until 2025-05-27 03:24:00.360403859 +0000 UTC m=+6.472204497 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-5fs8w" (UniqueName: "kubernetes.io/projected/7bb096f5-9bb4-431e-ada7-1b094e1c9059-kube-api-access-5fs8w") pod "kube-proxy-6gm7b" (UID: "7bb096f5-9bb4-431e-ada7-1b094e1c9059") : configmap "kube-root-ca.crt" not found May 27 03:24:00.647871 systemd[1]: Created slice kubepods-besteffort-pod8d7d7364_4072_4c36_af98_96dbd5586af7.slice - libcontainer container kubepods-besteffort-pod8d7d7364_4072_4c36_af98_96dbd5586af7.slice. 
May 27 03:24:00.657954 kubelet[2917]: I0527 03:24:00.657853 2917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/8d7d7364-4072-4c36-af98-96dbd5586af7-var-lib-calico\") pod \"tigera-operator-7c5755cdcb-vg4z5\" (UID: \"8d7d7364-4072-4c36-af98-96dbd5586af7\") " pod="tigera-operator/tigera-operator-7c5755cdcb-vg4z5" May 27 03:24:00.657954 kubelet[2917]: I0527 03:24:00.657903 2917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2vv6\" (UniqueName: \"kubernetes.io/projected/8d7d7364-4072-4c36-af98-96dbd5586af7-kube-api-access-j2vv6\") pod \"tigera-operator-7c5755cdcb-vg4z5\" (UID: \"8d7d7364-4072-4c36-af98-96dbd5586af7\") " pod="tigera-operator/tigera-operator-7c5755cdcb-vg4z5" May 27 03:24:00.663508 containerd[1560]: time="2025-05-27T03:24:00.663453848Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6gm7b,Uid:7bb096f5-9bb4-431e-ada7-1b094e1c9059,Namespace:kube-system,Attempt:0,}" May 27 03:24:00.694448 containerd[1560]: time="2025-05-27T03:24:00.694371678Z" level=info msg="connecting to shim 4f21fd2da6a9d777689c6f7e502f86fc79cca9903513342446aa466a7efdb340" address="unix:///run/containerd/s/017218d32553dbfc503682e1dbf603dc8825807b3c1e8f680721c3ca5be5dded" namespace=k8s.io protocol=ttrpc version=3 May 27 03:24:00.727497 systemd[1]: Started cri-containerd-4f21fd2da6a9d777689c6f7e502f86fc79cca9903513342446aa466a7efdb340.scope - libcontainer container 4f21fd2da6a9d777689c6f7e502f86fc79cca9903513342446aa466a7efdb340. 
May 27 03:24:00.752023 containerd[1560]: time="2025-05-27T03:24:00.751979754Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6gm7b,Uid:7bb096f5-9bb4-431e-ada7-1b094e1c9059,Namespace:kube-system,Attempt:0,} returns sandbox id \"4f21fd2da6a9d777689c6f7e502f86fc79cca9903513342446aa466a7efdb340\"" May 27 03:24:00.755159 containerd[1560]: time="2025-05-27T03:24:00.755116208Z" level=info msg="CreateContainer within sandbox \"4f21fd2da6a9d777689c6f7e502f86fc79cca9903513342446aa466a7efdb340\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 27 03:24:00.771879 containerd[1560]: time="2025-05-27T03:24:00.771489343Z" level=info msg="Container 0004022cf3d44dacc60455f3dc242f01fe161bad62b14f055091f72bdc46342d: CDI devices from CRI Config.CDIDevices: []" May 27 03:24:00.778802 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount22404330.mount: Deactivated successfully. May 27 03:24:00.785271 containerd[1560]: time="2025-05-27T03:24:00.785228432Z" level=info msg="CreateContainer within sandbox \"4f21fd2da6a9d777689c6f7e502f86fc79cca9903513342446aa466a7efdb340\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0004022cf3d44dacc60455f3dc242f01fe161bad62b14f055091f72bdc46342d\"" May 27 03:24:00.786440 containerd[1560]: time="2025-05-27T03:24:00.786426653Z" level=info msg="StartContainer for \"0004022cf3d44dacc60455f3dc242f01fe161bad62b14f055091f72bdc46342d\"" May 27 03:24:00.793481 containerd[1560]: time="2025-05-27T03:24:00.793431361Z" level=info msg="connecting to shim 0004022cf3d44dacc60455f3dc242f01fe161bad62b14f055091f72bdc46342d" address="unix:///run/containerd/s/017218d32553dbfc503682e1dbf603dc8825807b3c1e8f680721c3ca5be5dded" protocol=ttrpc version=3 May 27 03:24:00.814742 systemd[1]: Started cri-containerd-0004022cf3d44dacc60455f3dc242f01fe161bad62b14f055091f72bdc46342d.scope - libcontainer container 0004022cf3d44dacc60455f3dc242f01fe161bad62b14f055091f72bdc46342d. 
May 27 03:24:00.858224 containerd[1560]: time="2025-05-27T03:24:00.858177914Z" level=info msg="StartContainer for \"0004022cf3d44dacc60455f3dc242f01fe161bad62b14f055091f72bdc46342d\" returns successfully"
May 27 03:24:00.952778 containerd[1560]: time="2025-05-27T03:24:00.952650229Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7c5755cdcb-vg4z5,Uid:8d7d7364-4072-4c36-af98-96dbd5586af7,Namespace:tigera-operator,Attempt:0,}"
May 27 03:24:00.976054 containerd[1560]: time="2025-05-27T03:24:00.975952600Z" level=info msg="connecting to shim 8de8d631426da3d149cb076c0e0725a41c3018fcd8af9972f84377638af2c79e" address="unix:///run/containerd/s/74e43b4eccaf0a15d34e89e42ffd69f65ad8d465a8e94cec3172182594799752" namespace=k8s.io protocol=ttrpc version=3
May 27 03:24:01.005746 systemd[1]: Started cri-containerd-8de8d631426da3d149cb076c0e0725a41c3018fcd8af9972f84377638af2c79e.scope - libcontainer container 8de8d631426da3d149cb076c0e0725a41c3018fcd8af9972f84377638af2c79e.
May 27 03:24:01.067018 kubelet[2917]: I0527 03:24:01.066943 2917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-6gm7b" podStartSLOduration=2.066923928 podStartE2EDuration="2.066923928s" podCreationTimestamp="2025-05-27 03:23:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 03:24:01.066263052 +0000 UTC m=+7.178063700" watchObservedRunningTime="2025-05-27 03:24:01.066923928 +0000 UTC m=+7.178724596"
May 27 03:24:01.068330 containerd[1560]: time="2025-05-27T03:24:01.067802521Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7c5755cdcb-vg4z5,Uid:8d7d7364-4072-4c36-af98-96dbd5586af7,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"8de8d631426da3d149cb076c0e0725a41c3018fcd8af9972f84377638af2c79e\""
May 27 03:24:01.070833 containerd[1560]: time="2025-05-27T03:24:01.070787132Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.0\""
May 27 03:24:04.054221 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount215886293.mount: Deactivated successfully.
May 27 03:24:04.587859 containerd[1560]: time="2025-05-27T03:24:04.587782399Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:24:04.589112 containerd[1560]: time="2025-05-27T03:24:04.589078924Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.0: active requests=0, bytes read=25055451"
May 27 03:24:04.590327 containerd[1560]: time="2025-05-27T03:24:04.590269682Z" level=info msg="ImageCreate event name:\"sha256:5e43c1322619406528ff596056dfeb70cb8d20c5c00439feb752a7725302e033\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:24:04.592881 containerd[1560]: time="2025-05-27T03:24:04.592833869Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:e0a34b265aebce1a2db906d8dad99190706e8bf3910cae626b9c2eb6bbb21775\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:24:04.593600 containerd[1560]: time="2025-05-27T03:24:04.593566791Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.0\" with image id \"sha256:5e43c1322619406528ff596056dfeb70cb8d20c5c00439feb752a7725302e033\", repo tag \"quay.io/tigera/operator:v1.38.0\", repo digest \"quay.io/tigera/operator@sha256:e0a34b265aebce1a2db906d8dad99190706e8bf3910cae626b9c2eb6bbb21775\", size \"25051446\" in 3.522735966s"
May 27 03:24:04.593651 containerd[1560]: time="2025-05-27T03:24:04.593606745Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.0\" returns image reference \"sha256:5e43c1322619406528ff596056dfeb70cb8d20c5c00439feb752a7725302e033\""
May 27 03:24:04.596279 containerd[1560]: time="2025-05-27T03:24:04.595986627Z" level=info msg="CreateContainer within sandbox \"8de8d631426da3d149cb076c0e0725a41c3018fcd8af9972f84377638af2c79e\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
May 27 03:24:04.603342 containerd[1560]: time="2025-05-27T03:24:04.603116557Z" level=info msg="Container 03409ea003ada58d3a703b36bc2fc64ad8aa306c2b26978c26c58a234a807518: CDI devices from CRI Config.CDIDevices: []"
May 27 03:24:04.617225 containerd[1560]: time="2025-05-27T03:24:04.617185178Z" level=info msg="CreateContainer within sandbox \"8de8d631426da3d149cb076c0e0725a41c3018fcd8af9972f84377638af2c79e\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"03409ea003ada58d3a703b36bc2fc64ad8aa306c2b26978c26c58a234a807518\""
May 27 03:24:04.618406 containerd[1560]: time="2025-05-27T03:24:04.618153188Z" level=info msg="StartContainer for \"03409ea003ada58d3a703b36bc2fc64ad8aa306c2b26978c26c58a234a807518\""
May 27 03:24:04.619903 containerd[1560]: time="2025-05-27T03:24:04.619278174Z" level=info msg="connecting to shim 03409ea003ada58d3a703b36bc2fc64ad8aa306c2b26978c26c58a234a807518" address="unix:///run/containerd/s/74e43b4eccaf0a15d34e89e42ffd69f65ad8d465a8e94cec3172182594799752" protocol=ttrpc version=3
May 27 03:24:04.651593 systemd[1]: Started cri-containerd-03409ea003ada58d3a703b36bc2fc64ad8aa306c2b26978c26c58a234a807518.scope - libcontainer container 03409ea003ada58d3a703b36bc2fc64ad8aa306c2b26978c26c58a234a807518.
May 27 03:24:04.680174 containerd[1560]: time="2025-05-27T03:24:04.680133557Z" level=info msg="StartContainer for \"03409ea003ada58d3a703b36bc2fc64ad8aa306c2b26978c26c58a234a807518\" returns successfully"
May 27 03:24:07.455610 systemd[1]: cri-containerd-03409ea003ada58d3a703b36bc2fc64ad8aa306c2b26978c26c58a234a807518.scope: Deactivated successfully.
May 27 03:24:07.495374 containerd[1560]: time="2025-05-27T03:24:07.495293574Z" level=info msg="TaskExit event in podsandbox handler container_id:\"03409ea003ada58d3a703b36bc2fc64ad8aa306c2b26978c26c58a234a807518\" id:\"03409ea003ada58d3a703b36bc2fc64ad8aa306c2b26978c26c58a234a807518\" pid:3235 exit_status:1 exited_at:{seconds:1748316247 nanos:460854678}"
May 27 03:24:07.495374 containerd[1560]: time="2025-05-27T03:24:07.495425431Z" level=info msg="received exit event container_id:\"03409ea003ada58d3a703b36bc2fc64ad8aa306c2b26978c26c58a234a807518\" id:\"03409ea003ada58d3a703b36bc2fc64ad8aa306c2b26978c26c58a234a807518\" pid:3235 exit_status:1 exited_at:{seconds:1748316247 nanos:460854678}"
May 27 03:24:07.525859 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-03409ea003ada58d3a703b36bc2fc64ad8aa306c2b26978c26c58a234a807518-rootfs.mount: Deactivated successfully.
May 27 03:24:08.078853 kubelet[2917]: I0527 03:24:08.078806 2917 scope.go:117] "RemoveContainer" containerID="03409ea003ada58d3a703b36bc2fc64ad8aa306c2b26978c26c58a234a807518"
May 27 03:24:08.084818 containerd[1560]: time="2025-05-27T03:24:08.084729418Z" level=info msg="CreateContainer within sandbox \"8de8d631426da3d149cb076c0e0725a41c3018fcd8af9972f84377638af2c79e\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
May 27 03:24:08.101991 containerd[1560]: time="2025-05-27T03:24:08.101944913Z" level=info msg="Container ee6cfaeb5878b0d01ca44d402aa7ee5bb813a128765c37e0540c88fc7087f270: CDI devices from CRI Config.CDIDevices: []"
May 27 03:24:08.122886 containerd[1560]: time="2025-05-27T03:24:08.122841569Z" level=info msg="CreateContainer within sandbox \"8de8d631426da3d149cb076c0e0725a41c3018fcd8af9972f84377638af2c79e\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"ee6cfaeb5878b0d01ca44d402aa7ee5bb813a128765c37e0540c88fc7087f270\""
May 27 03:24:08.125810 containerd[1560]: time="2025-05-27T03:24:08.125602547Z" level=info msg="StartContainer for \"ee6cfaeb5878b0d01ca44d402aa7ee5bb813a128765c37e0540c88fc7087f270\""
May 27 03:24:08.129351 containerd[1560]: time="2025-05-27T03:24:08.129082080Z" level=info msg="connecting to shim ee6cfaeb5878b0d01ca44d402aa7ee5bb813a128765c37e0540c88fc7087f270" address="unix:///run/containerd/s/74e43b4eccaf0a15d34e89e42ffd69f65ad8d465a8e94cec3172182594799752" protocol=ttrpc version=3
May 27 03:24:08.169464 systemd[1]: Started cri-containerd-ee6cfaeb5878b0d01ca44d402aa7ee5bb813a128765c37e0540c88fc7087f270.scope - libcontainer container ee6cfaeb5878b0d01ca44d402aa7ee5bb813a128765c37e0540c88fc7087f270.
May 27 03:24:08.201987 containerd[1560]: time="2025-05-27T03:24:08.201935329Z" level=info msg="StartContainer for \"ee6cfaeb5878b0d01ca44d402aa7ee5bb813a128765c37e0540c88fc7087f270\" returns successfully"
May 27 03:24:09.099156 kubelet[2917]: I0527 03:24:09.097852 2917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7c5755cdcb-vg4z5" podStartSLOduration=5.573410236 podStartE2EDuration="9.097828178s" podCreationTimestamp="2025-05-27 03:24:00 +0000 UTC" firstStartedPulling="2025-05-27 03:24:01.0701869 +0000 UTC m=+7.181987538" lastFinishedPulling="2025-05-27 03:24:04.594604832 +0000 UTC m=+10.706405480" observedRunningTime="2025-05-27 03:24:05.084531419 +0000 UTC m=+11.196332098" watchObservedRunningTime="2025-05-27 03:24:09.097828178 +0000 UTC m=+15.209628846"
May 27 03:24:10.767020 sudo[1993]: pam_unix(sudo:session): session closed for user root
May 27 03:24:10.925072 sshd[1992]: Connection closed by 139.178.89.65 port 45672
May 27 03:24:10.926883 sshd-session[1990]: pam_unix(sshd:session): session closed for user core
May 27 03:24:10.933205 systemd[1]: sshd@7-157.180.65.55:22-139.178.89.65:45672.service: Deactivated successfully.
May 27 03:24:10.937800 systemd[1]: session-7.scope: Deactivated successfully.
May 27 03:24:10.938140 systemd[1]: session-7.scope: Consumed 5.193s CPU time, 155.2M memory peak.
May 27 03:24:10.942852 systemd-logind[1551]: Session 7 logged out. Waiting for processes to exit.
May 27 03:24:10.945300 systemd-logind[1551]: Removed session 7.
May 27 03:24:14.937352 kubelet[2917]: W0527 03:24:14.936994 2917 reflector.go:561] object-"calico-system"/"typha-certs": failed to list *v1.Secret: secrets "typha-certs" is forbidden: User "system:node:ci-4344-0-0-e-876c439243" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4344-0-0-e-876c439243' and this object
May 27 03:24:14.937352 kubelet[2917]: E0527 03:24:14.937035 2917 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"typha-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"typha-certs\" is forbidden: User \"system:node:ci-4344-0-0-e-876c439243\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4344-0-0-e-876c439243' and this object" logger="UnhandledError"
May 27 03:24:14.939376 kubelet[2917]: W0527 03:24:14.939354 2917 reflector.go:561] object-"calico-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4344-0-0-e-876c439243" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4344-0-0-e-876c439243' and this object
May 27 03:24:14.939492 kubelet[2917]: E0527 03:24:14.939383 2917 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ci-4344-0-0-e-876c439243\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4344-0-0-e-876c439243' and this object" logger="UnhandledError"
May 27 03:24:14.939492 kubelet[2917]: W0527 03:24:14.939421 2917 reflector.go:561] object-"calico-system"/"tigera-ca-bundle": failed to list *v1.ConfigMap: configmaps "tigera-ca-bundle" is forbidden: User "system:node:ci-4344-0-0-e-876c439243" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4344-0-0-e-876c439243' and this object
May 27 03:24:14.939492 kubelet[2917]: E0527 03:24:14.939430 2917 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"tigera-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"tigera-ca-bundle\" is forbidden: User \"system:node:ci-4344-0-0-e-876c439243\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4344-0-0-e-876c439243' and this object" logger="UnhandledError"
May 27 03:24:14.944121 systemd[1]: Created slice kubepods-besteffort-pod6df3c307_ec5e_4113_aced_784ad0ac5721.slice - libcontainer container kubepods-besteffort-pod6df3c307_ec5e_4113_aced_784ad0ac5721.slice.
May 27 03:24:15.047582 kubelet[2917]: I0527 03:24:15.047404 2917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfhfk\" (UniqueName: \"kubernetes.io/projected/6df3c307-ec5e-4113-aced-784ad0ac5721-kube-api-access-mfhfk\") pod \"calico-typha-79bbbcfd88-xz89b\" (UID: \"6df3c307-ec5e-4113-aced-784ad0ac5721\") " pod="calico-system/calico-typha-79bbbcfd88-xz89b"
May 27 03:24:15.048089 kubelet[2917]: I0527 03:24:15.047704 2917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6df3c307-ec5e-4113-aced-784ad0ac5721-tigera-ca-bundle\") pod \"calico-typha-79bbbcfd88-xz89b\" (UID: \"6df3c307-ec5e-4113-aced-784ad0ac5721\") " pod="calico-system/calico-typha-79bbbcfd88-xz89b"
May 27 03:24:15.048089 kubelet[2917]: I0527 03:24:15.047891 2917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/6df3c307-ec5e-4113-aced-784ad0ac5721-typha-certs\") pod \"calico-typha-79bbbcfd88-xz89b\" (UID: \"6df3c307-ec5e-4113-aced-784ad0ac5721\") " pod="calico-system/calico-typha-79bbbcfd88-xz89b"
May 27 03:24:15.338986 systemd[1]: Created slice kubepods-besteffort-pod818f3dc2_0a6e_425e_a689_3336f7bcaa4c.slice - libcontainer container kubepods-besteffort-pod818f3dc2_0a6e_425e_a689_3336f7bcaa4c.slice.
May 27 03:24:15.350149 kubelet[2917]: I0527 03:24:15.350118 2917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/818f3dc2-0a6e-425e-a689-3336f7bcaa4c-node-certs\") pod \"calico-node-8n9q7\" (UID: \"818f3dc2-0a6e-425e-a689-3336f7bcaa4c\") " pod="calico-system/calico-node-8n9q7"
May 27 03:24:15.350849 kubelet[2917]: I0527 03:24:15.350347 2917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/818f3dc2-0a6e-425e-a689-3336f7bcaa4c-policysync\") pod \"calico-node-8n9q7\" (UID: \"818f3dc2-0a6e-425e-a689-3336f7bcaa4c\") " pod="calico-system/calico-node-8n9q7"
May 27 03:24:15.350849 kubelet[2917]: I0527 03:24:15.350381 2917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/818f3dc2-0a6e-425e-a689-3336f7bcaa4c-var-lib-calico\") pod \"calico-node-8n9q7\" (UID: \"818f3dc2-0a6e-425e-a689-3336f7bcaa4c\") " pod="calico-system/calico-node-8n9q7"
May 27 03:24:15.350849 kubelet[2917]: I0527 03:24:15.350400 2917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/818f3dc2-0a6e-425e-a689-3336f7bcaa4c-cni-bin-dir\") pod \"calico-node-8n9q7\" (UID: \"818f3dc2-0a6e-425e-a689-3336f7bcaa4c\") " pod="calico-system/calico-node-8n9q7"
May 27 03:24:15.350849 kubelet[2917]: I0527 03:24:15.350418 2917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/818f3dc2-0a6e-425e-a689-3336f7bcaa4c-cni-net-dir\") pod \"calico-node-8n9q7\" (UID: \"818f3dc2-0a6e-425e-a689-3336f7bcaa4c\") " pod="calico-system/calico-node-8n9q7"
May 27 03:24:15.350849 kubelet[2917]: I0527 03:24:15.350473 2917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/818f3dc2-0a6e-425e-a689-3336f7bcaa4c-tigera-ca-bundle\") pod \"calico-node-8n9q7\" (UID: \"818f3dc2-0a6e-425e-a689-3336f7bcaa4c\") " pod="calico-system/calico-node-8n9q7"
May 27 03:24:15.350986 kubelet[2917]: I0527 03:24:15.350501 2917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/818f3dc2-0a6e-425e-a689-3336f7bcaa4c-cni-log-dir\") pod \"calico-node-8n9q7\" (UID: \"818f3dc2-0a6e-425e-a689-3336f7bcaa4c\") " pod="calico-system/calico-node-8n9q7"
May 27 03:24:15.350986 kubelet[2917]: I0527 03:24:15.350519 2917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/818f3dc2-0a6e-425e-a689-3336f7bcaa4c-var-run-calico\") pod \"calico-node-8n9q7\" (UID: \"818f3dc2-0a6e-425e-a689-3336f7bcaa4c\") " pod="calico-system/calico-node-8n9q7"
May 27 03:24:15.350986 kubelet[2917]: I0527 03:24:15.350540 2917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/818f3dc2-0a6e-425e-a689-3336f7bcaa4c-lib-modules\") pod \"calico-node-8n9q7\" (UID: \"818f3dc2-0a6e-425e-a689-3336f7bcaa4c\") " pod="calico-system/calico-node-8n9q7"
May 27 03:24:15.350986 kubelet[2917]: I0527 03:24:15.350553 2917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/818f3dc2-0a6e-425e-a689-3336f7bcaa4c-xtables-lock\") pod \"calico-node-8n9q7\" (UID: \"818f3dc2-0a6e-425e-a689-3336f7bcaa4c\") " pod="calico-system/calico-node-8n9q7"
May 27 03:24:15.350986 kubelet[2917]: I0527 03:24:15.350573 2917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/818f3dc2-0a6e-425e-a689-3336f7bcaa4c-flexvol-driver-host\") pod \"calico-node-8n9q7\" (UID: \"818f3dc2-0a6e-425e-a689-3336f7bcaa4c\") " pod="calico-system/calico-node-8n9q7"
May 27 03:24:15.351090 kubelet[2917]: I0527 03:24:15.350585 2917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ct46\" (UniqueName: \"kubernetes.io/projected/818f3dc2-0a6e-425e-a689-3336f7bcaa4c-kube-api-access-7ct46\") pod \"calico-node-8n9q7\" (UID: \"818f3dc2-0a6e-425e-a689-3336f7bcaa4c\") " pod="calico-system/calico-node-8n9q7"
May 27 03:24:15.469505 kubelet[2917]: E0527 03:24:15.469298 2917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 27 03:24:15.469505 kubelet[2917]: W0527 03:24:15.469371 2917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 27 03:24:15.469505 kubelet[2917]: E0527 03:24:15.469405 2917 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 27 03:24:15.553346 kubelet[2917]: E0527 03:24:15.553254 2917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 27 03:24:15.553756 kubelet[2917]: W0527 03:24:15.553294 2917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 27 03:24:15.553756 kubelet[2917]: E0527 03:24:15.553605 2917 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 27 03:24:15.554352 kubelet[2917]: E0527 03:24:15.554022 2917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 27 03:24:15.554352 kubelet[2917]: W0527 03:24:15.554045 2917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 27 03:24:15.554352 kubelet[2917]: E0527 03:24:15.554068 2917 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 27 03:24:15.555607 kubelet[2917]: E0527 03:24:15.555414 2917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 27 03:24:15.555607 kubelet[2917]: W0527 03:24:15.555442 2917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 27 03:24:15.555607 kubelet[2917]: E0527 03:24:15.555492 2917 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 27 03:24:15.556207 kubelet[2917]: E0527 03:24:15.556064 2917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 27 03:24:15.556207 kubelet[2917]: W0527 03:24:15.556089 2917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 27 03:24:15.556207 kubelet[2917]: E0527 03:24:15.556113 2917 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 27 03:24:15.556971 kubelet[2917]: E0527 03:24:15.556893 2917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 27 03:24:15.556971 kubelet[2917]: W0527 03:24:15.556914 2917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 27 03:24:15.556971 kubelet[2917]: E0527 03:24:15.556931 2917 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 27 03:24:15.579434 kubelet[2917]: E0527 03:24:15.578899 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2khsc" podUID="16daa161-3275-4d49-9e4c-ba4748828624"
May 27 03:24:15.650685 kubelet[2917]: E0527 03:24:15.650576 2917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 27 03:24:15.650685 kubelet[2917]: W0527 03:24:15.650609 2917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 27 03:24:15.650685 kubelet[2917]: E0527 03:24:15.650634 2917 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 27 03:24:15.651077 kubelet[2917]: E0527 03:24:15.651033 2917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 27 03:24:15.651077 kubelet[2917]: W0527 03:24:15.651050 2917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 27 03:24:15.651077 kubelet[2917]: E0527 03:24:15.651064 2917 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 27 03:24:15.651539 kubelet[2917]: E0527 03:24:15.651516 2917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 27 03:24:15.651539 kubelet[2917]: W0527 03:24:15.651537 2917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 27 03:24:15.651539 kubelet[2917]: E0527 03:24:15.651551 2917 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 27 03:24:15.652570 kubelet[2917]: E0527 03:24:15.652526 2917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 27 03:24:15.652570 kubelet[2917]: W0527 03:24:15.652549 2917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 27 03:24:15.652570 kubelet[2917]: E0527 03:24:15.652564 2917 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 27 03:24:15.652764 kubelet[2917]: E0527 03:24:15.652746 2917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 27 03:24:15.652764 kubelet[2917]: W0527 03:24:15.652762 2917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 27 03:24:15.652764 kubelet[2917]: E0527 03:24:15.652775 2917 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 27 03:24:15.652934 kubelet[2917]: E0527 03:24:15.652916 2917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 27 03:24:15.652934 kubelet[2917]: W0527 03:24:15.652932 2917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 27 03:24:15.653001 kubelet[2917]: E0527 03:24:15.652942 2917 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 27 03:24:15.653093 kubelet[2917]: E0527 03:24:15.653076 2917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 27 03:24:15.653161 kubelet[2917]: W0527 03:24:15.653093 2917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 27 03:24:15.653161 kubelet[2917]: E0527 03:24:15.653102 2917 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 27 03:24:15.653366 kubelet[2917]: E0527 03:24:15.653237 2917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 27 03:24:15.653366 kubelet[2917]: W0527 03:24:15.653245 2917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 27 03:24:15.653366 kubelet[2917]: E0527 03:24:15.653254 2917 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 27 03:24:15.653561 kubelet[2917]: E0527 03:24:15.653467 2917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 27 03:24:15.653561 kubelet[2917]: W0527 03:24:15.653480 2917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 27 03:24:15.653561 kubelet[2917]: E0527 03:24:15.653490 2917 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 27 03:24:15.653802 kubelet[2917]: E0527 03:24:15.653781 2917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 27 03:24:15.653802 kubelet[2917]: W0527 03:24:15.653800 2917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 27 03:24:15.653986 kubelet[2917]: E0527 03:24:15.653811 2917 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 27 03:24:15.654790 kubelet[2917]: E0527 03:24:15.654613 2917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 27 03:24:15.654790 kubelet[2917]: W0527 03:24:15.654628 2917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 27 03:24:15.654790 kubelet[2917]: E0527 03:24:15.654641 2917 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 27 03:24:15.655233 kubelet[2917]: E0527 03:24:15.655145 2917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 27 03:24:15.655233 kubelet[2917]: W0527 03:24:15.655158 2917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 27 03:24:15.655233 kubelet[2917]: E0527 03:24:15.655172 2917 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 27 03:24:15.655764 kubelet[2917]: E0527 03:24:15.655690 2917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 27 03:24:15.655764 kubelet[2917]: W0527 03:24:15.655703 2917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 27 03:24:15.655764 kubelet[2917]: E0527 03:24:15.655715 2917 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 27 03:24:15.656200 kubelet[2917]: E0527 03:24:15.656150 2917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 27 03:24:15.656339 kubelet[2917]: W0527 03:24:15.656291 2917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 27 03:24:15.657132 kubelet[2917]: E0527 03:24:15.657080 2917 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 27 03:24:15.657413 kubelet[2917]: E0527 03:24:15.657350 2917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 27 03:24:15.657413 kubelet[2917]: W0527 03:24:15.657363 2917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 27 03:24:15.657413 kubelet[2917]: E0527 03:24:15.657375 2917 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 27 03:24:15.657735 kubelet[2917]: E0527 03:24:15.657675 2917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 27 03:24:15.657735 kubelet[2917]: W0527 03:24:15.657700 2917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 27 03:24:15.657735 kubelet[2917]: E0527 03:24:15.657712 2917 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 27 03:24:15.658133 kubelet[2917]: E0527 03:24:15.658073 2917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 27 03:24:15.658133 kubelet[2917]: W0527 03:24:15.658086 2917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 27 03:24:15.658133 kubelet[2917]: E0527 03:24:15.658097 2917 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 27 03:24:15.658538 kubelet[2917]: E0527 03:24:15.658413 2917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 27 03:24:15.658538 kubelet[2917]: W0527 03:24:15.658442 2917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 27 03:24:15.658538 kubelet[2917]: E0527 03:24:15.658471 2917 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:24:15.659005 kubelet[2917]: E0527 03:24:15.658979 2917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:24:15.659181 kubelet[2917]: W0527 03:24:15.659057 2917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:24:15.659181 kubelet[2917]: E0527 03:24:15.659086 2917 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:24:15.659672 kubelet[2917]: E0527 03:24:15.659611 2917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:24:15.659672 kubelet[2917]: W0527 03:24:15.659641 2917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:24:15.659672 kubelet[2917]: E0527 03:24:15.659654 2917 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:24:15.660292 kubelet[2917]: E0527 03:24:15.660268 2917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:24:15.660563 kubelet[2917]: W0527 03:24:15.660464 2917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:24:15.660563 kubelet[2917]: E0527 03:24:15.660484 2917 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:24:15.660931 kubelet[2917]: E0527 03:24:15.660871 2917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:24:15.661166 kubelet[2917]: W0527 03:24:15.660893 2917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:24:15.661166 kubelet[2917]: E0527 03:24:15.661113 2917 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:24:15.661552 kubelet[2917]: E0527 03:24:15.661510 2917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:24:15.661552 kubelet[2917]: W0527 03:24:15.661523 2917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:24:15.661552 kubelet[2917]: E0527 03:24:15.661534 2917 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:24:15.661876 kubelet[2917]: I0527 03:24:15.661816 2917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/16daa161-3275-4d49-9e4c-ba4748828624-varrun\") pod \"csi-node-driver-2khsc\" (UID: \"16daa161-3275-4d49-9e4c-ba4748828624\") " pod="calico-system/csi-node-driver-2khsc" May 27 03:24:15.662265 kubelet[2917]: E0527 03:24:15.662228 2917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:24:15.662265 kubelet[2917]: W0527 03:24:15.662246 2917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:24:15.662678 kubelet[2917]: E0527 03:24:15.662377 2917 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:24:15.662961 kubelet[2917]: E0527 03:24:15.662927 2917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:24:15.663193 kubelet[2917]: W0527 03:24:15.663060 2917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:24:15.663193 kubelet[2917]: E0527 03:24:15.663128 2917 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:24:15.663193 kubelet[2917]: I0527 03:24:15.663151 2917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/16daa161-3275-4d49-9e4c-ba4748828624-kubelet-dir\") pod \"csi-node-driver-2khsc\" (UID: \"16daa161-3275-4d49-9e4c-ba4748828624\") " pod="calico-system/csi-node-driver-2khsc" May 27 03:24:15.663712 kubelet[2917]: E0527 03:24:15.663649 2917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:24:15.663712 kubelet[2917]: W0527 03:24:15.663663 2917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:24:15.664335 kubelet[2917]: E0527 03:24:15.663827 2917 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:24:15.664605 kubelet[2917]: E0527 03:24:15.664575 2917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:24:15.664605 kubelet[2917]: W0527 03:24:15.664590 2917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:24:15.664765 kubelet[2917]: E0527 03:24:15.664708 2917 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:24:15.664995 kubelet[2917]: E0527 03:24:15.664969 2917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:24:15.664995 kubelet[2917]: W0527 03:24:15.664981 2917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:24:15.665215 kubelet[2917]: E0527 03:24:15.665127 2917 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:24:15.665469 kubelet[2917]: E0527 03:24:15.665427 2917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:24:15.665469 kubelet[2917]: W0527 03:24:15.665440 2917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:24:15.665845 kubelet[2917]: E0527 03:24:15.665761 2917 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:24:15.665845 kubelet[2917]: I0527 03:24:15.665793 2917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/16daa161-3275-4d49-9e4c-ba4748828624-socket-dir\") pod \"csi-node-driver-2khsc\" (UID: \"16daa161-3275-4d49-9e4c-ba4748828624\") " pod="calico-system/csi-node-driver-2khsc" May 27 03:24:15.666175 kubelet[2917]: E0527 03:24:15.666047 2917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:24:15.666175 kubelet[2917]: W0527 03:24:15.666058 2917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:24:15.666175 kubelet[2917]: E0527 03:24:15.666074 2917 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:24:15.666498 kubelet[2917]: E0527 03:24:15.666422 2917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:24:15.666745 kubelet[2917]: W0527 03:24:15.666579 2917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:24:15.666745 kubelet[2917]: E0527 03:24:15.666614 2917 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:24:15.667056 kubelet[2917]: E0527 03:24:15.666992 2917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:24:15.667056 kubelet[2917]: W0527 03:24:15.667005 2917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:24:15.667369 kubelet[2917]: E0527 03:24:15.667173 2917 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:24:15.667369 kubelet[2917]: I0527 03:24:15.667200 2917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/16daa161-3275-4d49-9e4c-ba4748828624-registration-dir\") pod \"csi-node-driver-2khsc\" (UID: \"16daa161-3275-4d49-9e4c-ba4748828624\") " pod="calico-system/csi-node-driver-2khsc" May 27 03:24:15.667689 kubelet[2917]: E0527 03:24:15.667637 2917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:24:15.667918 kubelet[2917]: W0527 03:24:15.667771 2917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:24:15.667918 kubelet[2917]: E0527 03:24:15.667858 2917 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:24:15.668316 kubelet[2917]: E0527 03:24:15.668273 2917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:24:15.668514 kubelet[2917]: W0527 03:24:15.668372 2917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:24:15.668693 kubelet[2917]: E0527 03:24:15.668573 2917 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:24:15.668866 kubelet[2917]: E0527 03:24:15.668858 2917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:24:15.668912 kubelet[2917]: W0527 03:24:15.668905 2917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:24:15.669093 kubelet[2917]: E0527 03:24:15.669037 2917 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:24:15.669093 kubelet[2917]: I0527 03:24:15.669056 2917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbvfp\" (UniqueName: \"kubernetes.io/projected/16daa161-3275-4d49-9e4c-ba4748828624-kube-api-access-pbvfp\") pod \"csi-node-driver-2khsc\" (UID: \"16daa161-3275-4d49-9e4c-ba4748828624\") " pod="calico-system/csi-node-driver-2khsc" May 27 03:24:15.669406 kubelet[2917]: E0527 03:24:15.669397 2917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:24:15.669580 kubelet[2917]: W0527 03:24:15.669559 2917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:24:15.669659 kubelet[2917]: E0527 03:24:15.669646 2917 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:24:15.669917 kubelet[2917]: E0527 03:24:15.669898 2917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:24:15.669917 kubelet[2917]: W0527 03:24:15.669907 2917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:24:15.670218 kubelet[2917]: E0527 03:24:15.670149 2917 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:24:15.670389 kubelet[2917]: E0527 03:24:15.670294 2917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:24:15.670475 kubelet[2917]: W0527 03:24:15.670465 2917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:24:15.670555 kubelet[2917]: E0527 03:24:15.670525 2917 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:24:15.670830 kubelet[2917]: E0527 03:24:15.670804 2917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:24:15.670830 kubelet[2917]: W0527 03:24:15.670812 2917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:24:15.670830 kubelet[2917]: E0527 03:24:15.670820 2917 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:24:15.671208 kubelet[2917]: E0527 03:24:15.671178 2917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:24:15.671208 kubelet[2917]: W0527 03:24:15.671187 2917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:24:15.671208 kubelet[2917]: E0527 03:24:15.671195 2917 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:24:15.770371 kubelet[2917]: E0527 03:24:15.770297 2917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:24:15.770371 kubelet[2917]: W0527 03:24:15.770364 2917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:24:15.770650 kubelet[2917]: E0527 03:24:15.770393 2917 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:24:15.770699 kubelet[2917]: E0527 03:24:15.770661 2917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:24:15.770752 kubelet[2917]: W0527 03:24:15.770724 2917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:24:15.770790 kubelet[2917]: E0527 03:24:15.770764 2917 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:24:15.771084 kubelet[2917]: E0527 03:24:15.771043 2917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:24:15.771084 kubelet[2917]: W0527 03:24:15.771067 2917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:24:15.771174 kubelet[2917]: E0527 03:24:15.771118 2917 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:24:15.771505 kubelet[2917]: E0527 03:24:15.771480 2917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:24:15.771505 kubelet[2917]: W0527 03:24:15.771500 2917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:24:15.771638 kubelet[2917]: E0527 03:24:15.771523 2917 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:24:15.771787 kubelet[2917]: E0527 03:24:15.771743 2917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:24:15.771787 kubelet[2917]: W0527 03:24:15.771785 2917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:24:15.771880 kubelet[2917]: E0527 03:24:15.771800 2917 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:24:15.772015 kubelet[2917]: E0527 03:24:15.771989 2917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:24:15.772072 kubelet[2917]: W0527 03:24:15.772012 2917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:24:15.772072 kubelet[2917]: E0527 03:24:15.772060 2917 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:24:15.772417 kubelet[2917]: E0527 03:24:15.772389 2917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:24:15.772417 kubelet[2917]: W0527 03:24:15.772408 2917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:24:15.772549 kubelet[2917]: E0527 03:24:15.772430 2917 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:24:15.772848 kubelet[2917]: E0527 03:24:15.772791 2917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:24:15.772848 kubelet[2917]: W0527 03:24:15.772842 2917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:24:15.773215 kubelet[2917]: E0527 03:24:15.773052 2917 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:24:15.773292 kubelet[2917]: E0527 03:24:15.773222 2917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:24:15.773292 kubelet[2917]: W0527 03:24:15.773244 2917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:24:15.773409 kubelet[2917]: E0527 03:24:15.773368 2917 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:24:15.773762 kubelet[2917]: E0527 03:24:15.773729 2917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:24:15.773762 kubelet[2917]: W0527 03:24:15.773753 2917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:24:15.773894 kubelet[2917]: E0527 03:24:15.773865 2917 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:24:15.774159 kubelet[2917]: E0527 03:24:15.774127 2917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:24:15.774221 kubelet[2917]: W0527 03:24:15.774206 2917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:24:15.774895 kubelet[2917]: E0527 03:24:15.774857 2917 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:24:15.775091 kubelet[2917]: E0527 03:24:15.775076 2917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:24:15.775274 kubelet[2917]: W0527 03:24:15.775200 2917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:24:15.775274 kubelet[2917]: E0527 03:24:15.775228 2917 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:24:15.775771 kubelet[2917]: E0527 03:24:15.775754 2917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:24:15.775936 kubelet[2917]: W0527 03:24:15.775848 2917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:24:15.776197 kubelet[2917]: E0527 03:24:15.775996 2917 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:24:15.776197 kubelet[2917]: E0527 03:24:15.776110 2917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:24:15.776197 kubelet[2917]: W0527 03:24:15.776129 2917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:24:15.776592 kubelet[2917]: E0527 03:24:15.776257 2917 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:24:15.776592 kubelet[2917]: E0527 03:24:15.776519 2917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:24:15.776592 kubelet[2917]: W0527 03:24:15.776533 2917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:24:15.776754 kubelet[2917]: E0527 03:24:15.776613 2917 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:24:15.777240 kubelet[2917]: E0527 03:24:15.776906 2917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:24:15.777240 kubelet[2917]: W0527 03:24:15.776926 2917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:24:15.777240 kubelet[2917]: E0527 03:24:15.777068 2917 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:24:15.777516 kubelet[2917]: E0527 03:24:15.777339 2917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:24:15.777516 kubelet[2917]: W0527 03:24:15.777353 2917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:24:15.777516 kubelet[2917]: E0527 03:24:15.777400 2917 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:24:15.777833 kubelet[2917]: E0527 03:24:15.777722 2917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:24:15.777833 kubelet[2917]: W0527 03:24:15.777740 2917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:24:15.777940 kubelet[2917]: E0527 03:24:15.777863 2917 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:24:15.778288 kubelet[2917]: E0527 03:24:15.778087 2917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:24:15.778288 kubelet[2917]: W0527 03:24:15.778103 2917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:24:15.778288 kubelet[2917]: E0527 03:24:15.778240 2917 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:24:15.778945 kubelet[2917]: E0527 03:24:15.778915 2917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:24:15.779101 kubelet[2917]: W0527 03:24:15.778946 2917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:24:15.779178 kubelet[2917]: E0527 03:24:15.779117 2917 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:24:15.779571 kubelet[2917]: E0527 03:24:15.779523 2917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:24:15.779691 kubelet[2917]: W0527 03:24:15.779588 2917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:24:15.779740 kubelet[2917]: E0527 03:24:15.779684 2917 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:24:15.780081 kubelet[2917]: E0527 03:24:15.780037 2917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:24:15.780081 kubelet[2917]: W0527 03:24:15.780076 2917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:24:15.780247 kubelet[2917]: E0527 03:24:15.780161 2917 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:24:15.780623 kubelet[2917]: E0527 03:24:15.780491 2917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:24:15.780623 kubelet[2917]: W0527 03:24:15.780508 2917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:24:15.780744 kubelet[2917]: E0527 03:24:15.780647 2917 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:24:15.781035 kubelet[2917]: E0527 03:24:15.780970 2917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:24:15.781035 kubelet[2917]: W0527 03:24:15.780991 2917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:24:15.781202 kubelet[2917]: E0527 03:24:15.781132 2917 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:24:15.781552 kubelet[2917]: E0527 03:24:15.781523 2917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:24:15.781639 kubelet[2917]: W0527 03:24:15.781563 2917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:24:15.781711 kubelet[2917]: E0527 03:24:15.781679 2917 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:24:15.781930 kubelet[2917]: E0527 03:24:15.781897 2917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:24:15.781930 kubelet[2917]: W0527 03:24:15.781925 2917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:24:15.782090 kubelet[2917]: E0527 03:24:15.782018 2917 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:24:15.782484 kubelet[2917]: E0527 03:24:15.782418 2917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:24:15.782484 kubelet[2917]: W0527 03:24:15.782438 2917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:24:15.782484 kubelet[2917]: E0527 03:24:15.782518 2917 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:24:15.783296 kubelet[2917]: E0527 03:24:15.783225 2917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:24:15.783296 kubelet[2917]: W0527 03:24:15.783247 2917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:24:15.783296 kubelet[2917]: E0527 03:24:15.783279 2917 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:24:15.784183 kubelet[2917]: E0527 03:24:15.783934 2917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:24:15.784183 kubelet[2917]: W0527 03:24:15.783951 2917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:24:15.784183 kubelet[2917]: E0527 03:24:15.783964 2917 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:24:15.784645 kubelet[2917]: E0527 03:24:15.784518 2917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:24:15.784759 kubelet[2917]: W0527 03:24:15.784738 2917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:24:15.784983 kubelet[2917]: E0527 03:24:15.784962 2917 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:24:15.840948 kubelet[2917]: E0527 03:24:15.840912 2917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:24:15.841159 kubelet[2917]: W0527 03:24:15.841115 2917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:24:15.841159 kubelet[2917]: E0527 03:24:15.841150 2917 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:24:15.842717 kubelet[2917]: E0527 03:24:15.842685 2917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:24:15.842717 kubelet[2917]: W0527 03:24:15.842715 2917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:24:15.842717 kubelet[2917]: E0527 03:24:15.842734 2917 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:24:15.881361 kubelet[2917]: E0527 03:24:15.881270 2917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:24:15.881932 kubelet[2917]: W0527 03:24:15.881474 2917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:24:15.881932 kubelet[2917]: E0527 03:24:15.881506 2917 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:24:15.882576 kubelet[2917]: E0527 03:24:15.882557 2917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:24:15.882776 kubelet[2917]: W0527 03:24:15.882640 2917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:24:15.882776 kubelet[2917]: E0527 03:24:15.882660 2917 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:24:15.883274 kubelet[2917]: E0527 03:24:15.883235 2917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:24:15.883274 kubelet[2917]: W0527 03:24:15.883264 2917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:24:15.883426 kubelet[2917]: E0527 03:24:15.883284 2917 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:24:15.883643 kubelet[2917]: E0527 03:24:15.883623 2917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:24:15.883977 kubelet[2917]: W0527 03:24:15.883819 2917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:24:15.883977 kubelet[2917]: E0527 03:24:15.883848 2917 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:24:15.985397 kubelet[2917]: E0527 03:24:15.985276 2917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:24:15.985397 kubelet[2917]: W0527 03:24:15.985346 2917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:24:15.985397 kubelet[2917]: E0527 03:24:15.985375 2917 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:24:15.986095 kubelet[2917]: E0527 03:24:15.986084 2917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:24:15.986278 kubelet[2917]: W0527 03:24:15.986182 2917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:24:15.986278 kubelet[2917]: E0527 03:24:15.986200 2917 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:24:15.986568 kubelet[2917]: E0527 03:24:15.986510 2917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:24:15.986568 kubelet[2917]: W0527 03:24:15.986520 2917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:24:15.986568 kubelet[2917]: E0527 03:24:15.986528 2917 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:24:15.986764 kubelet[2917]: E0527 03:24:15.986726 2917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:24:15.986764 kubelet[2917]: W0527 03:24:15.986734 2917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:24:15.986764 kubelet[2917]: E0527 03:24:15.986742 2917 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:24:16.087640 kubelet[2917]: E0527 03:24:16.087588 2917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:24:16.087640 kubelet[2917]: W0527 03:24:16.087622 2917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:24:16.087875 kubelet[2917]: E0527 03:24:16.087652 2917 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:24:16.087923 kubelet[2917]: E0527 03:24:16.087872 2917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:24:16.087923 kubelet[2917]: W0527 03:24:16.087887 2917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:24:16.087923 kubelet[2917]: E0527 03:24:16.087901 2917 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:24:16.088118 kubelet[2917]: E0527 03:24:16.088080 2917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:24:16.088118 kubelet[2917]: W0527 03:24:16.088097 2917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:24:16.088118 kubelet[2917]: E0527 03:24:16.088110 2917 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:24:16.088361 kubelet[2917]: E0527 03:24:16.088344 2917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:24:16.088361 kubelet[2917]: W0527 03:24:16.088360 2917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:24:16.088504 kubelet[2917]: E0527 03:24:16.088374 2917 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:24:16.149046 kubelet[2917]: E0527 03:24:16.148957 2917 secret.go:189] Couldn't get secret calico-system/typha-certs: failed to sync secret cache: timed out waiting for the condition May 27 03:24:16.149234 kubelet[2917]: E0527 03:24:16.149150 2917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6df3c307-ec5e-4113-aced-784ad0ac5721-typha-certs podName:6df3c307-ec5e-4113-aced-784ad0ac5721 nodeName:}" failed. No retries permitted until 2025-05-27 03:24:16.649095881 +0000 UTC m=+22.760896539 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "typha-certs" (UniqueName: "kubernetes.io/secret/6df3c307-ec5e-4113-aced-784ad0ac5721-typha-certs") pod "calico-typha-79bbbcfd88-xz89b" (UID: "6df3c307-ec5e-4113-aced-784ad0ac5721") : failed to sync secret cache: timed out waiting for the condition May 27 03:24:16.156538 kubelet[2917]: E0527 03:24:16.156508 2917 projected.go:288] Couldn't get configMap calico-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition May 27 03:24:16.156617 kubelet[2917]: E0527 03:24:16.156543 2917 projected.go:194] Error preparing data for projected volume kube-api-access-mfhfk for pod calico-system/calico-typha-79bbbcfd88-xz89b: failed to sync configmap cache: timed out waiting for the condition May 27 03:24:16.156617 kubelet[2917]: E0527 03:24:16.156612 2917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6df3c307-ec5e-4113-aced-784ad0ac5721-kube-api-access-mfhfk podName:6df3c307-ec5e-4113-aced-784ad0ac5721 nodeName:}" failed. No retries permitted until 2025-05-27 03:24:16.656590272 +0000 UTC m=+22.768390920 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-mfhfk" (UniqueName: "kubernetes.io/projected/6df3c307-ec5e-4113-aced-784ad0ac5721-kube-api-access-mfhfk") pod "calico-typha-79bbbcfd88-xz89b" (UID: "6df3c307-ec5e-4113-aced-784ad0ac5721") : failed to sync configmap cache: timed out waiting for the condition May 27 03:24:16.191293 kubelet[2917]: E0527 03:24:16.190369 2917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:24:16.191293 kubelet[2917]: W0527 03:24:16.190523 2917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:24:16.191293 kubelet[2917]: E0527 03:24:16.190556 2917 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:24:16.191293 kubelet[2917]: E0527 03:24:16.191104 2917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:24:16.191293 kubelet[2917]: W0527 03:24:16.191119 2917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:24:16.191293 kubelet[2917]: E0527 03:24:16.191173 2917 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:24:16.191752 kubelet[2917]: E0527 03:24:16.191713 2917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:24:16.191909 kubelet[2917]: W0527 03:24:16.191866 2917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:24:16.192015 kubelet[2917]: E0527 03:24:16.191961 2917 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:24:16.192674 kubelet[2917]: E0527 03:24:16.192621 2917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:24:16.192739 kubelet[2917]: W0527 03:24:16.192643 2917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:24:16.192774 kubelet[2917]: E0527 03:24:16.192740 2917 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:24:16.282084 kubelet[2917]: E0527 03:24:16.281987 2917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:24:16.282084 kubelet[2917]: W0527 03:24:16.282011 2917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:24:16.282324 kubelet[2917]: E0527 03:24:16.282221 2917 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:24:16.283077 kubelet[2917]: E0527 03:24:16.282544 2917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:24:16.283160 kubelet[2917]: W0527 03:24:16.283121 2917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:24:16.283160 kubelet[2917]: E0527 03:24:16.283135 2917 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:24:16.294087 kubelet[2917]: E0527 03:24:16.294010 2917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:24:16.294087 kubelet[2917]: W0527 03:24:16.294025 2917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:24:16.294087 kubelet[2917]: E0527 03:24:16.294040 2917 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:24:16.294378 kubelet[2917]: E0527 03:24:16.294201 2917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:24:16.294378 kubelet[2917]: W0527 03:24:16.294209 2917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:24:16.294378 kubelet[2917]: E0527 03:24:16.294217 2917 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:24:16.395354 kubelet[2917]: E0527 03:24:16.395235 2917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:24:16.395354 kubelet[2917]: W0527 03:24:16.395342 2917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:24:16.395543 kubelet[2917]: E0527 03:24:16.395363 2917 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:24:16.395760 kubelet[2917]: E0527 03:24:16.395647 2917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:24:16.395760 kubelet[2917]: W0527 03:24:16.395659 2917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:24:16.395760 kubelet[2917]: E0527 03:24:16.395667 2917 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:24:16.498369 kubelet[2917]: E0527 03:24:16.497695 2917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:24:16.498369 kubelet[2917]: W0527 03:24:16.497733 2917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:24:16.498369 kubelet[2917]: E0527 03:24:16.497794 2917 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:24:16.498710 kubelet[2917]: E0527 03:24:16.498396 2917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:24:16.498710 kubelet[2917]: W0527 03:24:16.498431 2917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:24:16.498710 kubelet[2917]: E0527 03:24:16.498483 2917 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:24:16.545542 containerd[1560]: time="2025-05-27T03:24:16.544981018Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-8n9q7,Uid:818f3dc2-0a6e-425e-a689-3336f7bcaa4c,Namespace:calico-system,Attempt:0,}" May 27 03:24:16.599139 kubelet[2917]: E0527 03:24:16.599072 2917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:24:16.599139 kubelet[2917]: W0527 03:24:16.599091 2917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:24:16.599139 kubelet[2917]: E0527 03:24:16.599110 2917 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:24:16.599515 kubelet[2917]: E0527 03:24:16.599457 2917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:24:16.599515 kubelet[2917]: W0527 03:24:16.599480 2917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:24:16.599515 kubelet[2917]: E0527 03:24:16.599488 2917 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:24:16.601572 containerd[1560]: time="2025-05-27T03:24:16.601480064Z" level=info msg="connecting to shim faf89b9b5b7318cffc662fa48acb453fb03c93ad21627a8e41c74bb81b1daca3" address="unix:///run/containerd/s/ee3179b252fb41b4edb23bc227064ad499f7bbd493fc8cb1e1ff28b64a7f5433" namespace=k8s.io protocol=ttrpc version=3 May 27 03:24:16.629502 systemd[1]: Started cri-containerd-faf89b9b5b7318cffc662fa48acb453fb03c93ad21627a8e41c74bb81b1daca3.scope - libcontainer container faf89b9b5b7318cffc662fa48acb453fb03c93ad21627a8e41c74bb81b1daca3. May 27 03:24:16.663587 containerd[1560]: time="2025-05-27T03:24:16.663536452Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-8n9q7,Uid:818f3dc2-0a6e-425e-a689-3336f7bcaa4c,Namespace:calico-system,Attempt:0,} returns sandbox id \"faf89b9b5b7318cffc662fa48acb453fb03c93ad21627a8e41c74bb81b1daca3\"" May 27 03:24:16.668349 containerd[1560]: time="2025-05-27T03:24:16.668321397Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\"" May 27 03:24:16.700268 kubelet[2917]: E0527 03:24:16.700214 2917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:24:16.700268 kubelet[2917]: W0527 03:24:16.700250 2917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:24:16.700463 kubelet[2917]: E0527 03:24:16.700283 2917 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:24:16.700674 kubelet[2917]: E0527 03:24:16.700645 2917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:24:16.700719 kubelet[2917]: W0527 03:24:16.700666 2917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:24:16.700719 kubelet[2917]: E0527 03:24:16.700710 2917 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:24:16.701093 kubelet[2917]: E0527 03:24:16.700983 2917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:24:16.701093 kubelet[2917]: W0527 03:24:16.701008 2917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:24:16.701093 kubelet[2917]: E0527 03:24:16.701040 2917 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:24:16.710649 kubelet[2917]: E0527 03:24:16.710586 2917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:24:16.710649 kubelet[2917]: W0527 03:24:16.710593 2917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:24:16.710649 kubelet[2917]: E0527 03:24:16.710600 2917 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:24:16.750590 containerd[1560]: time="2025-05-27T03:24:16.750465416Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-79bbbcfd88-xz89b,Uid:6df3c307-ec5e-4113-aced-784ad0ac5721,Namespace:calico-system,Attempt:0,}" May 27 03:24:16.780700 containerd[1560]: time="2025-05-27T03:24:16.780645025Z" level=info msg="connecting to shim ad3f0ab2b012898f691697013f7d63cd0411709abc2974212bb20843dd373b7f" address="unix:///run/containerd/s/0c01a3d4e46d674ed1c07795a2adf2dd4dc0bf837921e38463faa7f21860bf56" namespace=k8s.io protocol=ttrpc version=3 May 27 03:24:16.807490 systemd[1]: Started cri-containerd-ad3f0ab2b012898f691697013f7d63cd0411709abc2974212bb20843dd373b7f.scope - libcontainer container ad3f0ab2b012898f691697013f7d63cd0411709abc2974212bb20843dd373b7f. 
May 27 03:24:16.854679 containerd[1560]: time="2025-05-27T03:24:16.854609491Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-79bbbcfd88-xz89b,Uid:6df3c307-ec5e-4113-aced-784ad0ac5721,Namespace:calico-system,Attempt:0,} returns sandbox id \"ad3f0ab2b012898f691697013f7d63cd0411709abc2974212bb20843dd373b7f\"" May 27 03:24:17.019335 kubelet[2917]: E0527 03:24:17.019233 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2khsc" podUID="16daa161-3275-4d49-9e4c-ba4748828624" May 27 03:24:18.734572 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1517961076.mount: Deactivated successfully. May 27 03:24:18.843842 containerd[1560]: time="2025-05-27T03:24:18.843729249Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:24:18.844843 containerd[1560]: time="2025-05-27T03:24:18.844807680Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0: active requests=0, bytes read=5934460" May 27 03:24:18.845974 containerd[1560]: time="2025-05-27T03:24:18.845927729Z" level=info msg="ImageCreate event name:\"sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:24:18.848451 containerd[1560]: time="2025-05-27T03:24:18.848377719Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:ce76dd87f11d3fd0054c35ad2e0e9f833748d007f77a9bfe859d0ddcb66fcb2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:24:18.848818 containerd[1560]: time="2025-05-27T03:24:18.848796514Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" with 
image id \"sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:ce76dd87f11d3fd0054c35ad2e0e9f833748d007f77a9bfe859d0ddcb66fcb2c\", size \"5934282\" in 2.180273461s" May 27 03:24:18.848886 containerd[1560]: time="2025-05-27T03:24:18.848875763Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" returns image reference \"sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676\"" May 27 03:24:18.850152 containerd[1560]: time="2025-05-27T03:24:18.850117670Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.0\"" May 27 03:24:18.855040 containerd[1560]: time="2025-05-27T03:24:18.855001561Z" level=info msg="CreateContainer within sandbox \"faf89b9b5b7318cffc662fa48acb453fb03c93ad21627a8e41c74bb81b1daca3\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 27 03:24:18.865957 containerd[1560]: time="2025-05-27T03:24:18.865331727Z" level=info msg="Container 13aeff8eee5ce2b661fe147b1da5843a052ce94da216617f511ae5ad6b6f6374: CDI devices from CRI Config.CDIDevices: []" May 27 03:24:18.878364 containerd[1560]: time="2025-05-27T03:24:18.878290629Z" level=info msg="CreateContainer within sandbox \"faf89b9b5b7318cffc662fa48acb453fb03c93ad21627a8e41c74bb81b1daca3\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"13aeff8eee5ce2b661fe147b1da5843a052ce94da216617f511ae5ad6b6f6374\"" May 27 03:24:18.878830 containerd[1560]: time="2025-05-27T03:24:18.878807377Z" level=info msg="StartContainer for \"13aeff8eee5ce2b661fe147b1da5843a052ce94da216617f511ae5ad6b6f6374\"" May 27 03:24:18.880383 containerd[1560]: time="2025-05-27T03:24:18.880353584Z" level=info msg="connecting to shim 13aeff8eee5ce2b661fe147b1da5843a052ce94da216617f511ae5ad6b6f6374" 
address="unix:///run/containerd/s/ee3179b252fb41b4edb23bc227064ad499f7bbd493fc8cb1e1ff28b64a7f5433" protocol=ttrpc version=3 May 27 03:24:18.902458 systemd[1]: Started cri-containerd-13aeff8eee5ce2b661fe147b1da5843a052ce94da216617f511ae5ad6b6f6374.scope - libcontainer container 13aeff8eee5ce2b661fe147b1da5843a052ce94da216617f511ae5ad6b6f6374. May 27 03:24:18.951187 containerd[1560]: time="2025-05-27T03:24:18.950527292Z" level=info msg="StartContainer for \"13aeff8eee5ce2b661fe147b1da5843a052ce94da216617f511ae5ad6b6f6374\" returns successfully" May 27 03:24:18.961459 systemd[1]: cri-containerd-13aeff8eee5ce2b661fe147b1da5843a052ce94da216617f511ae5ad6b6f6374.scope: Deactivated successfully. May 27 03:24:18.965054 containerd[1560]: time="2025-05-27T03:24:18.965018204Z" level=info msg="TaskExit event in podsandbox handler container_id:\"13aeff8eee5ce2b661fe147b1da5843a052ce94da216617f511ae5ad6b6f6374\" id:\"13aeff8eee5ce2b661fe147b1da5843a052ce94da216617f511ae5ad6b6f6374\" pid:3599 exited_at:{seconds:1748316258 nanos:963730832}" May 27 03:24:18.965726 containerd[1560]: time="2025-05-27T03:24:18.965678792Z" level=info msg="received exit event container_id:\"13aeff8eee5ce2b661fe147b1da5843a052ce94da216617f511ae5ad6b6f6374\" id:\"13aeff8eee5ce2b661fe147b1da5843a052ce94da216617f511ae5ad6b6f6374\" pid:3599 exited_at:{seconds:1748316258 nanos:963730832}" May 27 03:24:19.019884 kubelet[2917]: E0527 03:24:19.019597 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2khsc" podUID="16daa161-3275-4d49-9e4c-ba4748828624" May 27 03:24:19.624190 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-13aeff8eee5ce2b661fe147b1da5843a052ce94da216617f511ae5ad6b6f6374-rootfs.mount: Deactivated successfully. 
May 27 03:24:21.021359 kubelet[2917]: E0527 03:24:21.021180 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2khsc" podUID="16daa161-3275-4d49-9e4c-ba4748828624" May 27 03:24:21.848987 containerd[1560]: time="2025-05-27T03:24:21.848933808Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:24:21.849838 containerd[1560]: time="2025-05-27T03:24:21.849767851Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.0: active requests=0, bytes read=33665828" May 27 03:24:21.850571 containerd[1560]: time="2025-05-27T03:24:21.850549948Z" level=info msg="ImageCreate event name:\"sha256:71be0570e8645ac646675719e0da6ac33a05810991b31aecc303e7add70933be\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:24:21.852358 containerd[1560]: time="2025-05-27T03:24:21.852100684Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d282f6c773c4631b9dc8379eb093c54ca34c7728d55d6509cb45da5e1f5baf8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:24:21.852882 containerd[1560]: time="2025-05-27T03:24:21.852542492Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.0\" with image id \"sha256:71be0570e8645ac646675719e0da6ac33a05810991b31aecc303e7add70933be\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d282f6c773c4631b9dc8379eb093c54ca34c7728d55d6509cb45da5e1f5baf8f\", size \"35158523\" in 3.002208347s" May 27 03:24:21.852882 containerd[1560]: time="2025-05-27T03:24:21.852566468Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.0\" returns image reference 
\"sha256:71be0570e8645ac646675719e0da6ac33a05810991b31aecc303e7add70933be\"" May 27 03:24:21.854270 containerd[1560]: time="2025-05-27T03:24:21.854002198Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.0\"" May 27 03:24:21.867837 containerd[1560]: time="2025-05-27T03:24:21.867796061Z" level=info msg="CreateContainer within sandbox \"ad3f0ab2b012898f691697013f7d63cd0411709abc2974212bb20843dd373b7f\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 27 03:24:21.877286 containerd[1560]: time="2025-05-27T03:24:21.875410351Z" level=info msg="Container 8cf523767de7405a589c613d491df19eea82087fb5c928ec8b56fade9cd23067: CDI devices from CRI Config.CDIDevices: []" May 27 03:24:21.884868 containerd[1560]: time="2025-05-27T03:24:21.884814236Z" level=info msg="CreateContainer within sandbox \"ad3f0ab2b012898f691697013f7d63cd0411709abc2974212bb20843dd373b7f\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"8cf523767de7405a589c613d491df19eea82087fb5c928ec8b56fade9cd23067\"" May 27 03:24:21.885560 containerd[1560]: time="2025-05-27T03:24:21.885532733Z" level=info msg="StartContainer for \"8cf523767de7405a589c613d491df19eea82087fb5c928ec8b56fade9cd23067\"" May 27 03:24:21.890707 containerd[1560]: time="2025-05-27T03:24:21.890651807Z" level=info msg="connecting to shim 8cf523767de7405a589c613d491df19eea82087fb5c928ec8b56fade9cd23067" address="unix:///run/containerd/s/0c01a3d4e46d674ed1c07795a2adf2dd4dc0bf837921e38463faa7f21860bf56" protocol=ttrpc version=3 May 27 03:24:21.919713 systemd[1]: Started cri-containerd-8cf523767de7405a589c613d491df19eea82087fb5c928ec8b56fade9cd23067.scope - libcontainer container 8cf523767de7405a589c613d491df19eea82087fb5c928ec8b56fade9cd23067. 
May 27 03:24:21.975727 containerd[1560]: time="2025-05-27T03:24:21.975067410Z" level=info msg="StartContainer for \"8cf523767de7405a589c613d491df19eea82087fb5c928ec8b56fade9cd23067\" returns successfully" May 27 03:24:23.019162 kubelet[2917]: E0527 03:24:23.019043 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2khsc" podUID="16daa161-3275-4d49-9e4c-ba4748828624" May 27 03:24:23.130782 kubelet[2917]: I0527 03:24:23.130724 2917 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 27 03:24:25.018959 kubelet[2917]: E0527 03:24:25.018891 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2khsc" podUID="16daa161-3275-4d49-9e4c-ba4748828624" May 27 03:24:26.140931 containerd[1560]: time="2025-05-27T03:24:26.140846654Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:24:26.142143 containerd[1560]: time="2025-05-27T03:24:26.142102299Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.0: active requests=0, bytes read=70300568" May 27 03:24:26.143103 containerd[1560]: time="2025-05-27T03:24:26.143063231Z" level=info msg="ImageCreate event name:\"sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:24:26.145055 containerd[1560]: time="2025-05-27T03:24:26.144993550Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:3dd06656abdc03fbd51782d5f6fe4d70e6825a1c0c5bce2a165bbd2ff9e0f7df\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:24:26.145560 containerd[1560]: time="2025-05-27T03:24:26.145530396Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.0\" with image id \"sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:3dd06656abdc03fbd51782d5f6fe4d70e6825a1c0c5bce2a165bbd2ff9e0f7df\", size \"71793271\" in 4.291504634s" May 27 03:24:26.145560 containerd[1560]: time="2025-05-27T03:24:26.145555634Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.0\" returns image reference \"sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185\"" May 27 03:24:26.147834 containerd[1560]: time="2025-05-27T03:24:26.147800813Z" level=info msg="CreateContainer within sandbox \"faf89b9b5b7318cffc662fa48acb453fb03c93ad21627a8e41c74bb81b1daca3\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 27 03:24:26.157011 containerd[1560]: time="2025-05-27T03:24:26.155531598Z" level=info msg="Container 5bffa319100b67e5afff808a9e0b07d21dec7590be8afa2f79e3632350c9080e: CDI devices from CRI Config.CDIDevices: []" May 27 03:24:26.176346 containerd[1560]: time="2025-05-27T03:24:26.176279187Z" level=info msg="CreateContainer within sandbox \"faf89b9b5b7318cffc662fa48acb453fb03c93ad21627a8e41c74bb81b1daca3\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"5bffa319100b67e5afff808a9e0b07d21dec7590be8afa2f79e3632350c9080e\"" May 27 03:24:26.176969 containerd[1560]: time="2025-05-27T03:24:26.176949103Z" level=info msg="StartContainer for \"5bffa319100b67e5afff808a9e0b07d21dec7590be8afa2f79e3632350c9080e\"" May 27 03:24:26.178589 containerd[1560]: time="2025-05-27T03:24:26.178555174Z" level=info msg="connecting to shim 5bffa319100b67e5afff808a9e0b07d21dec7590be8afa2f79e3632350c9080e" 
address="unix:///run/containerd/s/ee3179b252fb41b4edb23bc227064ad499f7bbd493fc8cb1e1ff28b64a7f5433" protocol=ttrpc version=3 May 27 03:24:26.203517 systemd[1]: Started cri-containerd-5bffa319100b67e5afff808a9e0b07d21dec7590be8afa2f79e3632350c9080e.scope - libcontainer container 5bffa319100b67e5afff808a9e0b07d21dec7590be8afa2f79e3632350c9080e. May 27 03:24:26.245927 containerd[1560]: time="2025-05-27T03:24:26.245840046Z" level=info msg="StartContainer for \"5bffa319100b67e5afff808a9e0b07d21dec7590be8afa2f79e3632350c9080e\" returns successfully" May 27 03:24:26.306112 kubelet[2917]: I0527 03:24:26.306087 2917 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 27 03:24:26.331554 kubelet[2917]: I0527 03:24:26.331496 2917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-79bbbcfd88-xz89b" podStartSLOduration=7.334650376 podStartE2EDuration="12.331478349s" podCreationTimestamp="2025-05-27 03:24:14 +0000 UTC" firstStartedPulling="2025-05-27 03:24:16.856399213 +0000 UTC m=+22.968199861" lastFinishedPulling="2025-05-27 03:24:21.853227186 +0000 UTC m=+27.965027834" observedRunningTime="2025-05-27 03:24:22.153187948 +0000 UTC m=+28.264988587" watchObservedRunningTime="2025-05-27 03:24:26.331478349 +0000 UTC m=+32.443278988" May 27 03:24:26.714527 systemd[1]: cri-containerd-5bffa319100b67e5afff808a9e0b07d21dec7590be8afa2f79e3632350c9080e.scope: Deactivated successfully. May 27 03:24:26.715534 systemd[1]: cri-containerd-5bffa319100b67e5afff808a9e0b07d21dec7590be8afa2f79e3632350c9080e.scope: Consumed 442ms CPU time, 167M memory peak, 7.4M read from disk, 170.9M written to disk. 
May 27 03:24:26.720608 containerd[1560]: time="2025-05-27T03:24:26.719111101Z" level=info msg="received exit event container_id:\"5bffa319100b67e5afff808a9e0b07d21dec7590be8afa2f79e3632350c9080e\" id:\"5bffa319100b67e5afff808a9e0b07d21dec7590be8afa2f79e3632350c9080e\" pid:3699 exited_at:{seconds:1748316266 nanos:717898868}" May 27 03:24:26.722743 containerd[1560]: time="2025-05-27T03:24:26.721055537Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5bffa319100b67e5afff808a9e0b07d21dec7590be8afa2f79e3632350c9080e\" id:\"5bffa319100b67e5afff808a9e0b07d21dec7590be8afa2f79e3632350c9080e\" pid:3699 exited_at:{seconds:1748316266 nanos:717898868}" May 27 03:24:26.751281 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5bffa319100b67e5afff808a9e0b07d21dec7590be8afa2f79e3632350c9080e-rootfs.mount: Deactivated successfully. May 27 03:24:26.783995 kubelet[2917]: I0527 03:24:26.783946 2917 kubelet_node_status.go:488] "Fast updating node status as it just became ready" May 27 03:24:26.864355 systemd[1]: Created slice kubepods-burstable-podad1f604e_228a_46e6_8d84_3e756063a5a6.slice - libcontainer container kubepods-burstable-podad1f604e_228a_46e6_8d84_3e756063a5a6.slice. May 27 03:24:26.872344 systemd[1]: Created slice kubepods-besteffort-podca6512dc_3c30_4df1_a23d_c1010c560e07.slice - libcontainer container kubepods-besteffort-podca6512dc_3c30_4df1_a23d_c1010c560e07.slice. May 27 03:24:26.879639 systemd[1]: Created slice kubepods-burstable-pod1f041155_bcd1_48dc_8d60_b341452c38cc.slice - libcontainer container kubepods-burstable-pod1f041155_bcd1_48dc_8d60_b341452c38cc.slice. 
May 27 03:24:26.884173 kubelet[2917]: I0527 03:24:26.884126 2917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9n6m\" (UniqueName: \"kubernetes.io/projected/e21e2574-2bc3-4743-b7b3-52873168749e-kube-api-access-l9n6m\") pod \"whisker-65dcc46459-7glcx\" (UID: \"e21e2574-2bc3-4743-b7b3-52873168749e\") " pod="calico-system/whisker-65dcc46459-7glcx" May 27 03:24:26.884517 kubelet[2917]: I0527 03:24:26.884185 2917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9c59cb9e-2eb7-4e20-b889-53cd0bb9e4ed-goldmane-ca-bundle\") pod \"goldmane-8f77d7b6c-xwqrr\" (UID: \"9c59cb9e-2eb7-4e20-b889-53cd0bb9e4ed\") " pod="calico-system/goldmane-8f77d7b6c-xwqrr" May 27 03:24:26.884517 kubelet[2917]: I0527 03:24:26.884214 2917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/5533be32-126b-4dc3-8312-b3c9f524d817-calico-apiserver-certs\") pod \"calico-apiserver-5d4cbf9b6-76kdv\" (UID: \"5533be32-126b-4dc3-8312-b3c9f524d817\") " pod="calico-apiserver/calico-apiserver-5d4cbf9b6-76kdv" May 27 03:24:26.884517 kubelet[2917]: I0527 03:24:26.884238 2917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e21e2574-2bc3-4743-b7b3-52873168749e-whisker-backend-key-pair\") pod \"whisker-65dcc46459-7glcx\" (UID: \"e21e2574-2bc3-4743-b7b3-52873168749e\") " pod="calico-system/whisker-65dcc46459-7glcx" May 27 03:24:26.884517 kubelet[2917]: I0527 03:24:26.884259 2917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/84c04d0d-989b-4366-9177-4deee6a5a097-tigera-ca-bundle\") pod 
\"calico-kube-controllers-597cd4d468-l9brq\" (UID: \"84c04d0d-989b-4366-9177-4deee6a5a097\") " pod="calico-system/calico-kube-controllers-597cd4d468-l9brq" May 27 03:24:26.884517 kubelet[2917]: I0527 03:24:26.884280 2917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snhv9\" (UniqueName: \"kubernetes.io/projected/ad1f604e-228a-46e6-8d84-3e756063a5a6-kube-api-access-snhv9\") pod \"coredns-7c65d6cfc9-2w9m9\" (UID: \"ad1f604e-228a-46e6-8d84-3e756063a5a6\") " pod="kube-system/coredns-7c65d6cfc9-2w9m9" May 27 03:24:26.884816 kubelet[2917]: I0527 03:24:26.884342 2917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7drb\" (UniqueName: \"kubernetes.io/projected/9c59cb9e-2eb7-4e20-b889-53cd0bb9e4ed-kube-api-access-f7drb\") pod \"goldmane-8f77d7b6c-xwqrr\" (UID: \"9c59cb9e-2eb7-4e20-b889-53cd0bb9e4ed\") " pod="calico-system/goldmane-8f77d7b6c-xwqrr" May 27 03:24:26.884816 kubelet[2917]: I0527 03:24:26.884367 2917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c59cb9e-2eb7-4e20-b889-53cd0bb9e4ed-config\") pod \"goldmane-8f77d7b6c-xwqrr\" (UID: \"9c59cb9e-2eb7-4e20-b889-53cd0bb9e4ed\") " pod="calico-system/goldmane-8f77d7b6c-xwqrr" May 27 03:24:26.884816 kubelet[2917]: I0527 03:24:26.884387 2917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/9c59cb9e-2eb7-4e20-b889-53cd0bb9e4ed-goldmane-key-pair\") pod \"goldmane-8f77d7b6c-xwqrr\" (UID: \"9c59cb9e-2eb7-4e20-b889-53cd0bb9e4ed\") " pod="calico-system/goldmane-8f77d7b6c-xwqrr" May 27 03:24:26.884816 kubelet[2917]: I0527 03:24:26.884407 2917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttrcx\" (UniqueName: 
\"kubernetes.io/projected/84c04d0d-989b-4366-9177-4deee6a5a097-kube-api-access-ttrcx\") pod \"calico-kube-controllers-597cd4d468-l9brq\" (UID: \"84c04d0d-989b-4366-9177-4deee6a5a097\") " pod="calico-system/calico-kube-controllers-597cd4d468-l9brq" May 27 03:24:26.884816 kubelet[2917]: I0527 03:24:26.884472 2917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8cw4\" (UniqueName: \"kubernetes.io/projected/5533be32-126b-4dc3-8312-b3c9f524d817-kube-api-access-b8cw4\") pod \"calico-apiserver-5d4cbf9b6-76kdv\" (UID: \"5533be32-126b-4dc3-8312-b3c9f524d817\") " pod="calico-apiserver/calico-apiserver-5d4cbf9b6-76kdv" May 27 03:24:26.885137 kubelet[2917]: I0527 03:24:26.884492 2917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ad1f604e-228a-46e6-8d84-3e756063a5a6-config-volume\") pod \"coredns-7c65d6cfc9-2w9m9\" (UID: \"ad1f604e-228a-46e6-8d84-3e756063a5a6\") " pod="kube-system/coredns-7c65d6cfc9-2w9m9" May 27 03:24:26.885137 kubelet[2917]: I0527 03:24:26.884507 2917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4mmj\" (UniqueName: \"kubernetes.io/projected/ca6512dc-3c30-4df1-a23d-c1010c560e07-kube-api-access-f4mmj\") pod \"calico-apiserver-5d4cbf9b6-6l9vf\" (UID: \"ca6512dc-3c30-4df1-a23d-c1010c560e07\") " pod="calico-apiserver/calico-apiserver-5d4cbf9b6-6l9vf" May 27 03:24:26.885137 kubelet[2917]: I0527 03:24:26.884521 2917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1f041155-bcd1-48dc-8d60-b341452c38cc-config-volume\") pod \"coredns-7c65d6cfc9-qn2bv\" (UID: \"1f041155-bcd1-48dc-8d60-b341452c38cc\") " pod="kube-system/coredns-7c65d6cfc9-qn2bv" May 27 03:24:26.885137 kubelet[2917]: I0527 03:24:26.884536 2917 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktccr\" (UniqueName: \"kubernetes.io/projected/1f041155-bcd1-48dc-8d60-b341452c38cc-kube-api-access-ktccr\") pod \"coredns-7c65d6cfc9-qn2bv\" (UID: \"1f041155-bcd1-48dc-8d60-b341452c38cc\") " pod="kube-system/coredns-7c65d6cfc9-qn2bv" May 27 03:24:26.885137 kubelet[2917]: I0527 03:24:26.884690 2917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ca6512dc-3c30-4df1-a23d-c1010c560e07-calico-apiserver-certs\") pod \"calico-apiserver-5d4cbf9b6-6l9vf\" (UID: \"ca6512dc-3c30-4df1-a23d-c1010c560e07\") " pod="calico-apiserver/calico-apiserver-5d4cbf9b6-6l9vf" May 27 03:24:26.885460 kubelet[2917]: I0527 03:24:26.884723 2917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e21e2574-2bc3-4743-b7b3-52873168749e-whisker-ca-bundle\") pod \"whisker-65dcc46459-7glcx\" (UID: \"e21e2574-2bc3-4743-b7b3-52873168749e\") " pod="calico-system/whisker-65dcc46459-7glcx" May 27 03:24:26.892782 systemd[1]: Created slice kubepods-besteffort-pode21e2574_2bc3_4743_b7b3_52873168749e.slice - libcontainer container kubepods-besteffort-pode21e2574_2bc3_4743_b7b3_52873168749e.slice. May 27 03:24:26.911078 systemd[1]: Created slice kubepods-besteffort-pod9c59cb9e_2eb7_4e20_b889_53cd0bb9e4ed.slice - libcontainer container kubepods-besteffort-pod9c59cb9e_2eb7_4e20_b889_53cd0bb9e4ed.slice. May 27 03:24:26.924593 systemd[1]: Created slice kubepods-besteffort-pod84c04d0d_989b_4366_9177_4deee6a5a097.slice - libcontainer container kubepods-besteffort-pod84c04d0d_989b_4366_9177_4deee6a5a097.slice. 
May 27 03:24:26.932163 systemd[1]: Created slice kubepods-besteffort-pod5533be32_126b_4dc3_8312_b3c9f524d817.slice - libcontainer container kubepods-besteffort-pod5533be32_126b_4dc3_8312_b3c9f524d817.slice. May 27 03:24:27.037586 systemd[1]: Created slice kubepods-besteffort-pod16daa161_3275_4d49_9e4c_ba4748828624.slice - libcontainer container kubepods-besteffort-pod16daa161_3275_4d49_9e4c_ba4748828624.slice. May 27 03:24:27.046873 containerd[1560]: time="2025-05-27T03:24:27.046602118Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2khsc,Uid:16daa161-3275-4d49-9e4c-ba4748828624,Namespace:calico-system,Attempt:0,}" May 27 03:24:27.178333 containerd[1560]: time="2025-05-27T03:24:27.177049489Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d4cbf9b6-6l9vf,Uid:ca6512dc-3c30-4df1-a23d-c1010c560e07,Namespace:calico-apiserver,Attempt:0,}" May 27 03:24:27.180918 containerd[1560]: time="2025-05-27T03:24:27.178877407Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-2w9m9,Uid:ad1f604e-228a-46e6-8d84-3e756063a5a6,Namespace:kube-system,Attempt:0,}" May 27 03:24:27.191332 containerd[1560]: time="2025-05-27T03:24:27.189539980Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-qn2bv,Uid:1f041155-bcd1-48dc-8d60-b341452c38cc,Namespace:kube-system,Attempt:0,}" May 27 03:24:27.197620 containerd[1560]: time="2025-05-27T03:24:27.197587558Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.0\"" May 27 03:24:27.202717 containerd[1560]: time="2025-05-27T03:24:27.202439867Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-65dcc46459-7glcx,Uid:e21e2574-2bc3-4743-b7b3-52873168749e,Namespace:calico-system,Attempt:0,}" May 27 03:24:27.222392 containerd[1560]: time="2025-05-27T03:24:27.222360098Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-8f77d7b6c-xwqrr,Uid:9c59cb9e-2eb7-4e20-b889-53cd0bb9e4ed,Namespace:calico-system,Attempt:0,}" May 27 03:24:27.238484 containerd[1560]: time="2025-05-27T03:24:27.237856985Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-597cd4d468-l9brq,Uid:84c04d0d-989b-4366-9177-4deee6a5a097,Namespace:calico-system,Attempt:0,}" May 27 03:24:27.252696 containerd[1560]: time="2025-05-27T03:24:27.252399973Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d4cbf9b6-76kdv,Uid:5533be32-126b-4dc3-8312-b3c9f524d817,Namespace:calico-apiserver,Attempt:0,}" May 27 03:24:27.397455 containerd[1560]: time="2025-05-27T03:24:27.397399610Z" level=error msg="Failed to destroy network for sandbox \"c3d3c58e80e02c584b20ff1b5718e04a102a5810aab020547fbf9a860a8c72c0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:24:27.400325 containerd[1560]: time="2025-05-27T03:24:27.400254864Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2khsc,Uid:16daa161-3275-4d49-9e4c-ba4748828624,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c3d3c58e80e02c584b20ff1b5718e04a102a5810aab020547fbf9a860a8c72c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:24:27.410090 kubelet[2917]: E0527 03:24:27.409905 2917 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c3d3c58e80e02c584b20ff1b5718e04a102a5810aab020547fbf9a860a8c72c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/" May 27 03:24:27.417904 kubelet[2917]: E0527 03:24:27.417860 2917 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c3d3c58e80e02c584b20ff1b5718e04a102a5810aab020547fbf9a860a8c72c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2khsc" May 27 03:24:27.417904 kubelet[2917]: E0527 03:24:27.417903 2917 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c3d3c58e80e02c584b20ff1b5718e04a102a5810aab020547fbf9a860a8c72c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2khsc" May 27 03:24:27.418833 kubelet[2917]: E0527 03:24:27.418784 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-2khsc_calico-system(16daa161-3275-4d49-9e4c-ba4748828624)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-2khsc_calico-system(16daa161-3275-4d49-9e4c-ba4748828624)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c3d3c58e80e02c584b20ff1b5718e04a102a5810aab020547fbf9a860a8c72c0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2khsc" podUID="16daa161-3275-4d49-9e4c-ba4748828624" May 27 03:24:27.462029 containerd[1560]: time="2025-05-27T03:24:27.461811995Z" level=error msg="Failed to destroy network for sandbox 
\"a2901aae216aed04aa3162a5930f1032ee94906d0b2e430891915c4c5f0466f6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:24:27.465961 containerd[1560]: time="2025-05-27T03:24:27.465469763Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-qn2bv,Uid:1f041155-bcd1-48dc-8d60-b341452c38cc,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a2901aae216aed04aa3162a5930f1032ee94906d0b2e430891915c4c5f0466f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:24:27.466343 kubelet[2917]: E0527 03:24:27.465736 2917 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a2901aae216aed04aa3162a5930f1032ee94906d0b2e430891915c4c5f0466f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:24:27.466343 kubelet[2917]: E0527 03:24:27.465840 2917 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a2901aae216aed04aa3162a5930f1032ee94906d0b2e430891915c4c5f0466f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-qn2bv" May 27 03:24:27.466343 kubelet[2917]: E0527 03:24:27.465878 2917 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"a2901aae216aed04aa3162a5930f1032ee94906d0b2e430891915c4c5f0466f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-qn2bv" May 27 03:24:27.466455 kubelet[2917]: E0527 03:24:27.465920 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-qn2bv_kube-system(1f041155-bcd1-48dc-8d60-b341452c38cc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-qn2bv_kube-system(1f041155-bcd1-48dc-8d60-b341452c38cc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a2901aae216aed04aa3162a5930f1032ee94906d0b2e430891915c4c5f0466f6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-qn2bv" podUID="1f041155-bcd1-48dc-8d60-b341452c38cc" May 27 03:24:27.485535 containerd[1560]: time="2025-05-27T03:24:27.485485703Z" level=error msg="Failed to destroy network for sandbox \"d80a521f38e18fac4a401e87269c8f768cd87363ac95a7bfa2dfc8662608444e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:24:27.487494 containerd[1560]: time="2025-05-27T03:24:27.487448504Z" level=error msg="Failed to destroy network for sandbox \"add2acc6c3d264decd0830773b5f22ed5858ccd8e96c8313b9f73d3925b3a132\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:24:27.487806 containerd[1560]: time="2025-05-27T03:24:27.487638701Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-597cd4d468-l9brq,Uid:84c04d0d-989b-4366-9177-4deee6a5a097,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d80a521f38e18fac4a401e87269c8f768cd87363ac95a7bfa2dfc8662608444e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:24:27.488146 kubelet[2917]: E0527 03:24:27.488080 2917 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d80a521f38e18fac4a401e87269c8f768cd87363ac95a7bfa2dfc8662608444e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:24:27.488271 kubelet[2917]: E0527 03:24:27.488258 2917 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d80a521f38e18fac4a401e87269c8f768cd87363ac95a7bfa2dfc8662608444e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-597cd4d468-l9brq" May 27 03:24:27.488802 kubelet[2917]: E0527 03:24:27.488788 2917 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d80a521f38e18fac4a401e87269c8f768cd87363ac95a7bfa2dfc8662608444e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-597cd4d468-l9brq" May 27 03:24:27.489185 kubelet[2917]: E0527 
03:24:27.489129 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-597cd4d468-l9brq_calico-system(84c04d0d-989b-4366-9177-4deee6a5a097)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-597cd4d468-l9brq_calico-system(84c04d0d-989b-4366-9177-4deee6a5a097)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d80a521f38e18fac4a401e87269c8f768cd87363ac95a7bfa2dfc8662608444e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-597cd4d468-l9brq" podUID="84c04d0d-989b-4366-9177-4deee6a5a097" May 27 03:24:27.491563 containerd[1560]: time="2025-05-27T03:24:27.491454705Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-2w9m9,Uid:ad1f604e-228a-46e6-8d84-3e756063a5a6,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"add2acc6c3d264decd0830773b5f22ed5858ccd8e96c8313b9f73d3925b3a132\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:24:27.492087 kubelet[2917]: E0527 03:24:27.492068 2917 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"add2acc6c3d264decd0830773b5f22ed5858ccd8e96c8313b9f73d3925b3a132\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:24:27.492192 kubelet[2917]: E0527 03:24:27.492178 2917 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to 
setup network for sandbox \"add2acc6c3d264decd0830773b5f22ed5858ccd8e96c8313b9f73d3925b3a132\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-2w9m9" May 27 03:24:27.492275 kubelet[2917]: E0527 03:24:27.492262 2917 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"add2acc6c3d264decd0830773b5f22ed5858ccd8e96c8313b9f73d3925b3a132\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-2w9m9" May 27 03:24:27.492534 kubelet[2917]: E0527 03:24:27.492383 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-2w9m9_kube-system(ad1f604e-228a-46e6-8d84-3e756063a5a6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-2w9m9_kube-system(ad1f604e-228a-46e6-8d84-3e756063a5a6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"add2acc6c3d264decd0830773b5f22ed5858ccd8e96c8313b9f73d3925b3a132\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-2w9m9" podUID="ad1f604e-228a-46e6-8d84-3e756063a5a6" May 27 03:24:27.500819 containerd[1560]: time="2025-05-27T03:24:27.500725639Z" level=error msg="Failed to destroy network for sandbox \"f2d0ace5eca3ed328d8817bd15d63e041c6f1d67cab982bd29b1e90dcb77c6c1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 
27 03:24:27.501865 containerd[1560]: time="2025-05-27T03:24:27.501747123Z" level=error msg="Failed to destroy network for sandbox \"84c4965e26c23fe35e8d01f12f48ca138feb6f380c6d0a7fb80b4472f89d8b37\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:24:27.503178 containerd[1560]: time="2025-05-27T03:24:27.503135647Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-65dcc46459-7glcx,Uid:e21e2574-2bc3-4743-b7b3-52873168749e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f2d0ace5eca3ed328d8817bd15d63e041c6f1d67cab982bd29b1e90dcb77c6c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:24:27.503698 kubelet[2917]: E0527 03:24:27.503618 2917 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f2d0ace5eca3ed328d8817bd15d63e041c6f1d67cab982bd29b1e90dcb77c6c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:24:27.503698 kubelet[2917]: E0527 03:24:27.503677 2917 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f2d0ace5eca3ed328d8817bd15d63e041c6f1d67cab982bd29b1e90dcb77c6c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-65dcc46459-7glcx" May 27 03:24:27.503698 kubelet[2917]: E0527 03:24:27.503694 2917 kuberuntime_manager.go:1170] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f2d0ace5eca3ed328d8817bd15d63e041c6f1d67cab982bd29b1e90dcb77c6c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-65dcc46459-7glcx" May 27 03:24:27.503834 kubelet[2917]: E0527 03:24:27.503736 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-65dcc46459-7glcx_calico-system(e21e2574-2bc3-4743-b7b3-52873168749e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-65dcc46459-7glcx_calico-system(e21e2574-2bc3-4743-b7b3-52873168749e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f2d0ace5eca3ed328d8817bd15d63e041c6f1d67cab982bd29b1e90dcb77c6c1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-65dcc46459-7glcx" podUID="e21e2574-2bc3-4743-b7b3-52873168749e" May 27 03:24:27.505577 containerd[1560]: time="2025-05-27T03:24:27.505453534Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d4cbf9b6-76kdv,Uid:5533be32-126b-4dc3-8312-b3c9f524d817,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"84c4965e26c23fe35e8d01f12f48ca138feb6f380c6d0a7fb80b4472f89d8b37\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:24:27.505921 containerd[1560]: time="2025-05-27T03:24:27.505785676Z" level=error msg="Failed to destroy network for sandbox \"f8fbb9dc5ff0ae9744c18b9693a2292b35c424ac6febb9ee704a87b0e5d8a868\"" 
error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:24:27.506441 kubelet[2917]: E0527 03:24:27.505600 2917 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"84c4965e26c23fe35e8d01f12f48ca138feb6f380c6d0a7fb80b4472f89d8b37\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:24:27.506441 kubelet[2917]: E0527 03:24:27.505643 2917 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"84c4965e26c23fe35e8d01f12f48ca138feb6f380c6d0a7fb80b4472f89d8b37\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d4cbf9b6-76kdv" May 27 03:24:27.506441 kubelet[2917]: E0527 03:24:27.505657 2917 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"84c4965e26c23fe35e8d01f12f48ca138feb6f380c6d0a7fb80b4472f89d8b37\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d4cbf9b6-76kdv" May 27 03:24:27.507366 kubelet[2917]: E0527 03:24:27.505681 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5d4cbf9b6-76kdv_calico-apiserver(5533be32-126b-4dc3-8312-b3c9f524d817)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-apiserver-5d4cbf9b6-76kdv_calico-apiserver(5533be32-126b-4dc3-8312-b3c9f524d817)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"84c4965e26c23fe35e8d01f12f48ca138feb6f380c6d0a7fb80b4472f89d8b37\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5d4cbf9b6-76kdv" podUID="5533be32-126b-4dc3-8312-b3c9f524d817" May 27 03:24:27.507366 kubelet[2917]: E0527 03:24:27.507195 2917 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f8fbb9dc5ff0ae9744c18b9693a2292b35c424ac6febb9ee704a87b0e5d8a868\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:24:27.507366 kubelet[2917]: E0527 03:24:27.507221 2917 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f8fbb9dc5ff0ae9744c18b9693a2292b35c424ac6febb9ee704a87b0e5d8a868\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-8f77d7b6c-xwqrr" May 27 03:24:27.507470 containerd[1560]: time="2025-05-27T03:24:27.507043455Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-8f77d7b6c-xwqrr,Uid:9c59cb9e-2eb7-4e20-b889-53cd0bb9e4ed,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f8fbb9dc5ff0ae9744c18b9693a2292b35c424ac6febb9ee704a87b0e5d8a868\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" May 27 03:24:27.507524 kubelet[2917]: E0527 03:24:27.507261 2917 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f8fbb9dc5ff0ae9744c18b9693a2292b35c424ac6febb9ee704a87b0e5d8a868\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-8f77d7b6c-xwqrr" May 27 03:24:27.507524 kubelet[2917]: E0527 03:24:27.507297 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-8f77d7b6c-xwqrr_calico-system(9c59cb9e-2eb7-4e20-b889-53cd0bb9e4ed)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-8f77d7b6c-xwqrr_calico-system(9c59cb9e-2eb7-4e20-b889-53cd0bb9e4ed)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f8fbb9dc5ff0ae9744c18b9693a2292b35c424ac6febb9ee704a87b0e5d8a868\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-8f77d7b6c-xwqrr" podUID="9c59cb9e-2eb7-4e20-b889-53cd0bb9e4ed" May 27 03:24:27.510609 containerd[1560]: time="2025-05-27T03:24:27.510469629Z" level=error msg="Failed to destroy network for sandbox \"02f45fac33bc80d518aceb95022070243d31b498e4eb55c8b29e001710eb217d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:24:27.513544 containerd[1560]: time="2025-05-27T03:24:27.513506944Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d4cbf9b6-6l9vf,Uid:ca6512dc-3c30-4df1-a23d-c1010c560e07,Namespace:calico-apiserver,Attempt:0,} failed, error" 
error="rpc error: code = Unknown desc = failed to setup network for sandbox \"02f45fac33bc80d518aceb95022070243d31b498e4eb55c8b29e001710eb217d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:24:27.513863 kubelet[2917]: E0527 03:24:27.513827 2917 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02f45fac33bc80d518aceb95022070243d31b498e4eb55c8b29e001710eb217d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:24:27.513939 kubelet[2917]: E0527 03:24:27.513874 2917 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02f45fac33bc80d518aceb95022070243d31b498e4eb55c8b29e001710eb217d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d4cbf9b6-6l9vf" May 27 03:24:27.513939 kubelet[2917]: E0527 03:24:27.513894 2917 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02f45fac33bc80d518aceb95022070243d31b498e4eb55c8b29e001710eb217d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d4cbf9b6-6l9vf" May 27 03:24:27.514009 kubelet[2917]: E0527 03:24:27.513955 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-apiserver-5d4cbf9b6-6l9vf_calico-apiserver(ca6512dc-3c30-4df1-a23d-c1010c560e07)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5d4cbf9b6-6l9vf_calico-apiserver(ca6512dc-3c30-4df1-a23d-c1010c560e07)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"02f45fac33bc80d518aceb95022070243d31b498e4eb55c8b29e001710eb217d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5d4cbf9b6-6l9vf" podUID="ca6512dc-3c30-4df1-a23d-c1010c560e07" May 27 03:24:28.160829 systemd[1]: run-netns-cni\x2dcea8118b\x2dfb81\x2decaa\x2d6fa3\x2dfe78edc4fe1b.mount: Deactivated successfully. May 27 03:24:28.160942 systemd[1]: run-netns-cni\x2d4695f75a\x2d1ee9\x2d6798\x2d5ab3\x2dd95fd515b720.mount: Deactivated successfully. May 27 03:24:28.161002 systemd[1]: run-netns-cni\x2d4891bcf2\x2d899e\x2d3e65\x2d9c4c\x2d3f11581b7cc2.mount: Deactivated successfully. May 27 03:24:28.161061 systemd[1]: run-netns-cni\x2dd16293ea\x2d0eee\x2dcfc9\x2d43d4\x2d4b519b4c03be.mount: Deactivated successfully. May 27 03:24:28.161119 systemd[1]: run-netns-cni\x2d7a423ef7\x2deb7c\x2d781a\x2de253\x2d01bca1a05cc3.mount: Deactivated successfully. May 27 03:24:35.080843 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount579674366.mount: Deactivated successfully. 
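Every sandbox failure above has the same root cause: the Calico CNI plugin stats `/var/lib/calico/nodename` before any ADD or DELETE, and that file only exists once the calico/node container has started and written it (which happens at 03:24:35 below, when the `calico-node` image finishes pulling). A minimal sketch of that check and of the manual diagnostic the error message suggests — the path comes from the log, but the helper function name is illustrative, not part of Calico:

```shell
#!/bin/sh
# Reproduce the precondition the Calico CNI plugin checks on every
# sandbox operation: /var/lib/calico/nodename must exist, which requires
# the calico/node container to be running with /var/lib/calico mounted.
# The path is taken from the log; the function name is illustrative.
check_calico_nodename() {
    nodename_file="${1:-/var/lib/calico/nodename}"
    if [ -f "$nodename_file" ]; then
        # File present: calico/node has registered this node's name.
        echo "ok: $(cat "$nodename_file")"
        return 0
    else
        # File absent: every RunPodSandbox call will fail until it appears.
        echo "missing: $nodename_file (is calico/node running with /var/lib/calico mounted?)" >&2
        return 1
    fi
}
```

Until that file appears, every `RunPodSandbox` RPC fails with the `stat /var/lib/calico/nodename: no such file or directory` errors recorded above, and kubelet retries the pods; no action is needed beyond waiting for (or fixing) the calico-node DaemonSet pod.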
May 27 03:24:35.157581 containerd[1560]: time="2025-05-27T03:24:35.148115452Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:24:35.158829 containerd[1560]: time="2025-05-27T03:24:35.149859965Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.0: active requests=0, bytes read=156396372" May 27 03:24:35.174339 containerd[1560]: time="2025-05-27T03:24:35.173501222Z" level=info msg="ImageCreate event name:\"sha256:d12dae9bc0999225efe30fd5618bcf2195709d54ed2840234f5006aab5f7d721\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:24:35.175822 containerd[1560]: time="2025-05-27T03:24:35.175771762Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:7cb61ea47ca0a8e6d0526a42da4f1e399b37ccd13339d0776d272465cb7ee012\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:24:35.177898 containerd[1560]: time="2025-05-27T03:24:35.177834571Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.0\" with image id \"sha256:d12dae9bc0999225efe30fd5618bcf2195709d54ed2840234f5006aab5f7d721\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:7cb61ea47ca0a8e6d0526a42da4f1e399b37ccd13339d0776d272465cb7ee012\", size \"156396234\" in 7.978348983s" May 27 03:24:35.177898 containerd[1560]: time="2025-05-27T03:24:35.177862243Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.0\" returns image reference \"sha256:d12dae9bc0999225efe30fd5618bcf2195709d54ed2840234f5006aab5f7d721\"" May 27 03:24:35.198963 containerd[1560]: time="2025-05-27T03:24:35.198911228Z" level=info msg="CreateContainer within sandbox \"faf89b9b5b7318cffc662fa48acb453fb03c93ad21627a8e41c74bb81b1daca3\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 27 03:24:35.238149 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3339949681.mount: 
Deactivated successfully. May 27 03:24:35.238572 containerd[1560]: time="2025-05-27T03:24:35.238539356Z" level=info msg="Container ca116aa036eb736891f311278a152a7e5a51b4c41f55c788135f885583d90f6b: CDI devices from CRI Config.CDIDevices: []" May 27 03:24:35.285089 containerd[1560]: time="2025-05-27T03:24:35.285036850Z" level=info msg="CreateContainer within sandbox \"faf89b9b5b7318cffc662fa48acb453fb03c93ad21627a8e41c74bb81b1daca3\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"ca116aa036eb736891f311278a152a7e5a51b4c41f55c788135f885583d90f6b\"" May 27 03:24:35.285680 containerd[1560]: time="2025-05-27T03:24:35.285606900Z" level=info msg="StartContainer for \"ca116aa036eb736891f311278a152a7e5a51b4c41f55c788135f885583d90f6b\"" May 27 03:24:35.302784 containerd[1560]: time="2025-05-27T03:24:35.302747964Z" level=info msg="connecting to shim ca116aa036eb736891f311278a152a7e5a51b4c41f55c788135f885583d90f6b" address="unix:///run/containerd/s/ee3179b252fb41b4edb23bc227064ad499f7bbd493fc8cb1e1ff28b64a7f5433" protocol=ttrpc version=3 May 27 03:24:35.372079 systemd[1]: Started cri-containerd-ca116aa036eb736891f311278a152a7e5a51b4c41f55c788135f885583d90f6b.scope - libcontainer container ca116aa036eb736891f311278a152a7e5a51b4c41f55c788135f885583d90f6b. May 27 03:24:35.432981 containerd[1560]: time="2025-05-27T03:24:35.432861882Z" level=info msg="StartContainer for \"ca116aa036eb736891f311278a152a7e5a51b4c41f55c788135f885583d90f6b\" returns successfully" May 27 03:24:35.529645 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 27 03:24:35.530387 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
May 27 03:24:35.859883 kubelet[2917]: I0527 03:24:35.859441 2917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e21e2574-2bc3-4743-b7b3-52873168749e-whisker-backend-key-pair\") pod \"e21e2574-2bc3-4743-b7b3-52873168749e\" (UID: \"e21e2574-2bc3-4743-b7b3-52873168749e\") " May 27 03:24:35.859883 kubelet[2917]: I0527 03:24:35.859499 2917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e21e2574-2bc3-4743-b7b3-52873168749e-whisker-ca-bundle\") pod \"e21e2574-2bc3-4743-b7b3-52873168749e\" (UID: \"e21e2574-2bc3-4743-b7b3-52873168749e\") " May 27 03:24:35.859883 kubelet[2917]: I0527 03:24:35.859516 2917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9n6m\" (UniqueName: \"kubernetes.io/projected/e21e2574-2bc3-4743-b7b3-52873168749e-kube-api-access-l9n6m\") pod \"e21e2574-2bc3-4743-b7b3-52873168749e\" (UID: \"e21e2574-2bc3-4743-b7b3-52873168749e\") " May 27 03:24:35.864506 kubelet[2917]: I0527 03:24:35.864346 2917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e21e2574-2bc3-4743-b7b3-52873168749e-kube-api-access-l9n6m" (OuterVolumeSpecName: "kube-api-access-l9n6m") pod "e21e2574-2bc3-4743-b7b3-52873168749e" (UID: "e21e2574-2bc3-4743-b7b3-52873168749e"). InnerVolumeSpecName "kube-api-access-l9n6m". PluginName "kubernetes.io/projected", VolumeGidValue "" May 27 03:24:35.864933 kubelet[2917]: I0527 03:24:35.864909 2917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e21e2574-2bc3-4743-b7b3-52873168749e-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "e21e2574-2bc3-4743-b7b3-52873168749e" (UID: "e21e2574-2bc3-4743-b7b3-52873168749e"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" May 27 03:24:35.865274 kubelet[2917]: I0527 03:24:35.865218 2917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e21e2574-2bc3-4743-b7b3-52873168749e-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "e21e2574-2bc3-4743-b7b3-52873168749e" (UID: "e21e2574-2bc3-4743-b7b3-52873168749e"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" May 27 03:24:35.960707 kubelet[2917]: I0527 03:24:35.960625 2917 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e21e2574-2bc3-4743-b7b3-52873168749e-whisker-ca-bundle\") on node \"ci-4344-0-0-e-876c439243\" DevicePath \"\"" May 27 03:24:35.960707 kubelet[2917]: I0527 03:24:35.960666 2917 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l9n6m\" (UniqueName: \"kubernetes.io/projected/e21e2574-2bc3-4743-b7b3-52873168749e-kube-api-access-l9n6m\") on node \"ci-4344-0-0-e-876c439243\" DevicePath \"\"" May 27 03:24:35.960707 kubelet[2917]: I0527 03:24:35.960679 2917 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e21e2574-2bc3-4743-b7b3-52873168749e-whisker-backend-key-pair\") on node \"ci-4344-0-0-e-876c439243\" DevicePath \"\"" May 27 03:24:36.030298 systemd[1]: Removed slice kubepods-besteffort-pode21e2574_2bc3_4743_b7b3_52873168749e.slice - libcontainer container kubepods-besteffort-pode21e2574_2bc3_4743_b7b3_52873168749e.slice. May 27 03:24:36.083939 systemd[1]: var-lib-kubelet-pods-e21e2574\x2d2bc3\x2d4743\x2db7b3\x2d52873168749e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dl9n6m.mount: Deactivated successfully. 
May 27 03:24:36.084578 systemd[1]: var-lib-kubelet-pods-e21e2574\x2d2bc3\x2d4743\x2db7b3\x2d52873168749e-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. May 27 03:24:36.278857 kubelet[2917]: I0527 03:24:36.274846 2917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-8n9q7" podStartSLOduration=2.760891843 podStartE2EDuration="21.274790166s" podCreationTimestamp="2025-05-27 03:24:15 +0000 UTC" firstStartedPulling="2025-05-27 03:24:16.664897723 +0000 UTC m=+22.776698360" lastFinishedPulling="2025-05-27 03:24:35.178796045 +0000 UTC m=+41.290596683" observedRunningTime="2025-05-27 03:24:36.270363863 +0000 UTC m=+42.382164571" watchObservedRunningTime="2025-05-27 03:24:36.274790166 +0000 UTC m=+42.386590834" May 27 03:24:36.362111 systemd[1]: Created slice kubepods-besteffort-pod20923581_35ae_477b_83e9_35d75acd3c66.slice - libcontainer container kubepods-besteffort-pod20923581_35ae_477b_83e9_35d75acd3c66.slice. 
May 27 03:24:36.464652 kubelet[2917]: I0527 03:24:36.464488 2917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/20923581-35ae-477b-83e9-35d75acd3c66-whisker-ca-bundle\") pod \"whisker-555bcbc6ff-596vx\" (UID: \"20923581-35ae-477b-83e9-35d75acd3c66\") " pod="calico-system/whisker-555bcbc6ff-596vx" May 27 03:24:36.464652 kubelet[2917]: I0527 03:24:36.464552 2917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/20923581-35ae-477b-83e9-35d75acd3c66-whisker-backend-key-pair\") pod \"whisker-555bcbc6ff-596vx\" (UID: \"20923581-35ae-477b-83e9-35d75acd3c66\") " pod="calico-system/whisker-555bcbc6ff-596vx" May 27 03:24:36.464652 kubelet[2917]: I0527 03:24:36.464575 2917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jzjr\" (UniqueName: \"kubernetes.io/projected/20923581-35ae-477b-83e9-35d75acd3c66-kube-api-access-6jzjr\") pod \"whisker-555bcbc6ff-596vx\" (UID: \"20923581-35ae-477b-83e9-35d75acd3c66\") " pod="calico-system/whisker-555bcbc6ff-596vx" May 27 03:24:36.669099 containerd[1560]: time="2025-05-27T03:24:36.669020862Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-555bcbc6ff-596vx,Uid:20923581-35ae-477b-83e9-35d75acd3c66,Namespace:calico-system,Attempt:0,}" May 27 03:24:37.027125 systemd-networkd[1472]: cali14d8542a8d2: Link UP May 27 03:24:37.028457 systemd-networkd[1472]: cali14d8542a8d2: Gained carrier May 27 03:24:37.063205 containerd[1560]: 2025-05-27 03:24:36.694 [INFO][4037] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 27 03:24:37.063205 containerd[1560]: 2025-05-27 03:24:36.728 [INFO][4037] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{ci--4344--0--0--e--876c439243-k8s-whisker--555bcbc6ff--596vx-eth0 whisker-555bcbc6ff- calico-system 20923581-35ae-477b-83e9-35d75acd3c66 880 0 2025-05-27 03:24:36 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:555bcbc6ff projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4344-0-0-e-876c439243 whisker-555bcbc6ff-596vx eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali14d8542a8d2 [] [] }} ContainerID="35994e0ab62610f3a358787a7a33ffd0af91d62b991202d7690c7630ccf42d95" Namespace="calico-system" Pod="whisker-555bcbc6ff-596vx" WorkloadEndpoint="ci--4344--0--0--e--876c439243-k8s-whisker--555bcbc6ff--596vx-" May 27 03:24:37.063205 containerd[1560]: 2025-05-27 03:24:36.728 [INFO][4037] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="35994e0ab62610f3a358787a7a33ffd0af91d62b991202d7690c7630ccf42d95" Namespace="calico-system" Pod="whisker-555bcbc6ff-596vx" WorkloadEndpoint="ci--4344--0--0--e--876c439243-k8s-whisker--555bcbc6ff--596vx-eth0" May 27 03:24:37.063205 containerd[1560]: 2025-05-27 03:24:36.937 [INFO][4047] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="35994e0ab62610f3a358787a7a33ffd0af91d62b991202d7690c7630ccf42d95" HandleID="k8s-pod-network.35994e0ab62610f3a358787a7a33ffd0af91d62b991202d7690c7630ccf42d95" Workload="ci--4344--0--0--e--876c439243-k8s-whisker--555bcbc6ff--596vx-eth0" May 27 03:24:37.064399 containerd[1560]: 2025-05-27 03:24:36.940 [INFO][4047] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="35994e0ab62610f3a358787a7a33ffd0af91d62b991202d7690c7630ccf42d95" HandleID="k8s-pod-network.35994e0ab62610f3a358787a7a33ffd0af91d62b991202d7690c7630ccf42d95" Workload="ci--4344--0--0--e--876c439243-k8s-whisker--555bcbc6ff--596vx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d3640), 
Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4344-0-0-e-876c439243", "pod":"whisker-555bcbc6ff-596vx", "timestamp":"2025-05-27 03:24:36.937175986 +0000 UTC"}, Hostname:"ci-4344-0-0-e-876c439243", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 03:24:37.064399 containerd[1560]: 2025-05-27 03:24:36.940 [INFO][4047] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 03:24:37.064399 containerd[1560]: 2025-05-27 03:24:36.940 [INFO][4047] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 27 03:24:37.064399 containerd[1560]: 2025-05-27 03:24:36.940 [INFO][4047] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344-0-0-e-876c439243' May 27 03:24:37.064399 containerd[1560]: 2025-05-27 03:24:36.956 [INFO][4047] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.35994e0ab62610f3a358787a7a33ffd0af91d62b991202d7690c7630ccf42d95" host="ci-4344-0-0-e-876c439243" May 27 03:24:37.064399 containerd[1560]: 2025-05-27 03:24:36.969 [INFO][4047] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344-0-0-e-876c439243" May 27 03:24:37.064399 containerd[1560]: 2025-05-27 03:24:36.976 [INFO][4047] ipam/ipam.go 511: Trying affinity for 192.168.93.64/26 host="ci-4344-0-0-e-876c439243" May 27 03:24:37.064399 containerd[1560]: 2025-05-27 03:24:36.978 [INFO][4047] ipam/ipam.go 158: Attempting to load block cidr=192.168.93.64/26 host="ci-4344-0-0-e-876c439243" May 27 03:24:37.064399 containerd[1560]: 2025-05-27 03:24:36.982 [INFO][4047] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.93.64/26 host="ci-4344-0-0-e-876c439243" May 27 03:24:37.064670 containerd[1560]: 2025-05-27 03:24:36.982 [INFO][4047] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.93.64/26 
handle="k8s-pod-network.35994e0ab62610f3a358787a7a33ffd0af91d62b991202d7690c7630ccf42d95" host="ci-4344-0-0-e-876c439243" May 27 03:24:37.064670 containerd[1560]: 2025-05-27 03:24:36.985 [INFO][4047] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.35994e0ab62610f3a358787a7a33ffd0af91d62b991202d7690c7630ccf42d95 May 27 03:24:37.064670 containerd[1560]: 2025-05-27 03:24:36.991 [INFO][4047] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.93.64/26 handle="k8s-pod-network.35994e0ab62610f3a358787a7a33ffd0af91d62b991202d7690c7630ccf42d95" host="ci-4344-0-0-e-876c439243" May 27 03:24:37.064670 containerd[1560]: 2025-05-27 03:24:37.002 [INFO][4047] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.93.65/26] block=192.168.93.64/26 handle="k8s-pod-network.35994e0ab62610f3a358787a7a33ffd0af91d62b991202d7690c7630ccf42d95" host="ci-4344-0-0-e-876c439243" May 27 03:24:37.064670 containerd[1560]: 2025-05-27 03:24:37.002 [INFO][4047] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.93.65/26] handle="k8s-pod-network.35994e0ab62610f3a358787a7a33ffd0af91d62b991202d7690c7630ccf42d95" host="ci-4344-0-0-e-876c439243" May 27 03:24:37.064670 containerd[1560]: 2025-05-27 03:24:37.002 [INFO][4047] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 27 03:24:37.064670 containerd[1560]: 2025-05-27 03:24:37.002 [INFO][4047] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.93.65/26] IPv6=[] ContainerID="35994e0ab62610f3a358787a7a33ffd0af91d62b991202d7690c7630ccf42d95" HandleID="k8s-pod-network.35994e0ab62610f3a358787a7a33ffd0af91d62b991202d7690c7630ccf42d95" Workload="ci--4344--0--0--e--876c439243-k8s-whisker--555bcbc6ff--596vx-eth0" May 27 03:24:37.064800 containerd[1560]: 2025-05-27 03:24:37.006 [INFO][4037] cni-plugin/k8s.go 418: Populated endpoint ContainerID="35994e0ab62610f3a358787a7a33ffd0af91d62b991202d7690c7630ccf42d95" Namespace="calico-system" Pod="whisker-555bcbc6ff-596vx" WorkloadEndpoint="ci--4344--0--0--e--876c439243-k8s-whisker--555bcbc6ff--596vx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344--0--0--e--876c439243-k8s-whisker--555bcbc6ff--596vx-eth0", GenerateName:"whisker-555bcbc6ff-", Namespace:"calico-system", SelfLink:"", UID:"20923581-35ae-477b-83e9-35d75acd3c66", ResourceVersion:"880", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 24, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"555bcbc6ff", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344-0-0-e-876c439243", ContainerID:"", Pod:"whisker-555bcbc6ff-596vx", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.93.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.whisker"}, InterfaceName:"cali14d8542a8d2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:24:37.064800 containerd[1560]: 2025-05-27 03:24:37.007 [INFO][4037] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.93.65/32] ContainerID="35994e0ab62610f3a358787a7a33ffd0af91d62b991202d7690c7630ccf42d95" Namespace="calico-system" Pod="whisker-555bcbc6ff-596vx" WorkloadEndpoint="ci--4344--0--0--e--876c439243-k8s-whisker--555bcbc6ff--596vx-eth0" May 27 03:24:37.064872 containerd[1560]: 2025-05-27 03:24:37.007 [INFO][4037] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali14d8542a8d2 ContainerID="35994e0ab62610f3a358787a7a33ffd0af91d62b991202d7690c7630ccf42d95" Namespace="calico-system" Pod="whisker-555bcbc6ff-596vx" WorkloadEndpoint="ci--4344--0--0--e--876c439243-k8s-whisker--555bcbc6ff--596vx-eth0" May 27 03:24:37.064872 containerd[1560]: 2025-05-27 03:24:37.029 [INFO][4037] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="35994e0ab62610f3a358787a7a33ffd0af91d62b991202d7690c7630ccf42d95" Namespace="calico-system" Pod="whisker-555bcbc6ff-596vx" WorkloadEndpoint="ci--4344--0--0--e--876c439243-k8s-whisker--555bcbc6ff--596vx-eth0" May 27 03:24:37.065777 containerd[1560]: 2025-05-27 03:24:37.029 [INFO][4037] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="35994e0ab62610f3a358787a7a33ffd0af91d62b991202d7690c7630ccf42d95" Namespace="calico-system" Pod="whisker-555bcbc6ff-596vx" WorkloadEndpoint="ci--4344--0--0--e--876c439243-k8s-whisker--555bcbc6ff--596vx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344--0--0--e--876c439243-k8s-whisker--555bcbc6ff--596vx-eth0", GenerateName:"whisker-555bcbc6ff-", Namespace:"calico-system", SelfLink:"", 
UID:"20923581-35ae-477b-83e9-35d75acd3c66", ResourceVersion:"880", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 24, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"555bcbc6ff", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344-0-0-e-876c439243", ContainerID:"35994e0ab62610f3a358787a7a33ffd0af91d62b991202d7690c7630ccf42d95", Pod:"whisker-555bcbc6ff-596vx", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.93.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali14d8542a8d2", MAC:"56:b9:c0:65:cb:29", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:24:37.065837 containerd[1560]: 2025-05-27 03:24:37.048 [INFO][4037] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="35994e0ab62610f3a358787a7a33ffd0af91d62b991202d7690c7630ccf42d95" Namespace="calico-system" Pod="whisker-555bcbc6ff-596vx" WorkloadEndpoint="ci--4344--0--0--e--876c439243-k8s-whisker--555bcbc6ff--596vx-eth0" May 27 03:24:37.358971 containerd[1560]: time="2025-05-27T03:24:37.358723380Z" level=info msg="connecting to shim 35994e0ab62610f3a358787a7a33ffd0af91d62b991202d7690c7630ccf42d95" address="unix:///run/containerd/s/216ee31afbd3d68146354a282eed5f145ba636f49bfd7ac5372b5c9fa1492832" namespace=k8s.io protocol=ttrpc version=3 May 27 03:24:37.410167 systemd[1]: Started 
cri-containerd-35994e0ab62610f3a358787a7a33ffd0af91d62b991202d7690c7630ccf42d95.scope - libcontainer container 35994e0ab62610f3a358787a7a33ffd0af91d62b991202d7690c7630ccf42d95. May 27 03:24:37.558214 containerd[1560]: time="2025-05-27T03:24:37.558146461Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-555bcbc6ff-596vx,Uid:20923581-35ae-477b-83e9-35d75acd3c66,Namespace:calico-system,Attempt:0,} returns sandbox id \"35994e0ab62610f3a358787a7a33ffd0af91d62b991202d7690c7630ccf42d95\"" May 27 03:24:37.561181 containerd[1560]: time="2025-05-27T03:24:37.561154745Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 27 03:24:37.638456 containerd[1560]: time="2025-05-27T03:24:37.636803166Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ca116aa036eb736891f311278a152a7e5a51b4c41f55c788135f885583d90f6b\" id:\"00df23b44ebd66b4a76e1e0914aa75cf34afe0d16c27010f791357db2e8d6294\" pid:4207 exit_status:1 exited_at:{seconds:1748316277 nanos:636503523}" May 27 03:24:37.924655 containerd[1560]: time="2025-05-27T03:24:37.924585765Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 03:24:37.929074 containerd[1560]: time="2025-05-27T03:24:37.925952068Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 27 03:24:37.929366 containerd[1560]: time="2025-05-27T03:24:37.926811690Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 27 03:24:37.929882 kubelet[2917]: E0527 03:24:37.929600 2917 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 27 03:24:37.933553 kubelet[2917]: E0527 03:24:37.930269 2917 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 27 03:24:37.937823 systemd-networkd[1472]: vxlan.calico: Link UP May 27 03:24:37.937831 systemd-networkd[1472]: vxlan.calico: Gained carrier May 27 03:24:37.942988 kubelet[2917]: E0527 03:24:37.941553 2917 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:bee51492bca3428982f094867f4c4710,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6jzjr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-555bcbc6ff-596vx_calico-system(20923581-35ae-477b-83e9-35d75acd3c66): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 03:24:37.952136 containerd[1560]: 
time="2025-05-27T03:24:37.951750969Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 27 03:24:38.020782 containerd[1560]: time="2025-05-27T03:24:38.020740789Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-597cd4d468-l9brq,Uid:84c04d0d-989b-4366-9177-4deee6a5a097,Namespace:calico-system,Attempt:0,}" May 27 03:24:38.024349 kubelet[2917]: I0527 03:24:38.024278 2917 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e21e2574-2bc3-4743-b7b3-52873168749e" path="/var/lib/kubelet/pods/e21e2574-2bc3-4743-b7b3-52873168749e/volumes" May 27 03:24:38.162323 systemd-networkd[1472]: cali1c479009d06: Link UP May 27 03:24:38.163560 systemd-networkd[1472]: cali1c479009d06: Gained carrier May 27 03:24:38.183079 containerd[1560]: 2025-05-27 03:24:38.083 [INFO][4292] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344--0--0--e--876c439243-k8s-calico--kube--controllers--597cd4d468--l9brq-eth0 calico-kube-controllers-597cd4d468- calico-system 84c04d0d-989b-4366-9177-4deee6a5a097 809 0 2025-05-27 03:24:15 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:597cd4d468 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4344-0-0-e-876c439243 calico-kube-controllers-597cd4d468-l9brq eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali1c479009d06 [] [] }} ContainerID="bf600c6bf6c300fd986e2592eaaea620bc22e474bb01b4ce18013673794d700b" Namespace="calico-system" Pod="calico-kube-controllers-597cd4d468-l9brq" WorkloadEndpoint="ci--4344--0--0--e--876c439243-k8s-calico--kube--controllers--597cd4d468--l9brq-" May 27 03:24:38.183079 containerd[1560]: 2025-05-27 03:24:38.083 [INFO][4292] cni-plugin/k8s.go 74: Extracted identifiers for 
CmdAddK8s ContainerID="bf600c6bf6c300fd986e2592eaaea620bc22e474bb01b4ce18013673794d700b" Namespace="calico-system" Pod="calico-kube-controllers-597cd4d468-l9brq" WorkloadEndpoint="ci--4344--0--0--e--876c439243-k8s-calico--kube--controllers--597cd4d468--l9brq-eth0" May 27 03:24:38.183079 containerd[1560]: 2025-05-27 03:24:38.114 [INFO][4305] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bf600c6bf6c300fd986e2592eaaea620bc22e474bb01b4ce18013673794d700b" HandleID="k8s-pod-network.bf600c6bf6c300fd986e2592eaaea620bc22e474bb01b4ce18013673794d700b" Workload="ci--4344--0--0--e--876c439243-k8s-calico--kube--controllers--597cd4d468--l9brq-eth0" May 27 03:24:38.183259 containerd[1560]: 2025-05-27 03:24:38.114 [INFO][4305] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="bf600c6bf6c300fd986e2592eaaea620bc22e474bb01b4ce18013673794d700b" HandleID="k8s-pod-network.bf600c6bf6c300fd986e2592eaaea620bc22e474bb01b4ce18013673794d700b" Workload="ci--4344--0--0--e--876c439243-k8s-calico--kube--controllers--597cd4d468--l9brq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000233d10), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4344-0-0-e-876c439243", "pod":"calico-kube-controllers-597cd4d468-l9brq", "timestamp":"2025-05-27 03:24:38.114404227 +0000 UTC"}, Hostname:"ci-4344-0-0-e-876c439243", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 03:24:38.183259 containerd[1560]: 2025-05-27 03:24:38.114 [INFO][4305] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 03:24:38.183259 containerd[1560]: 2025-05-27 03:24:38.114 [INFO][4305] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 27 03:24:38.183259 containerd[1560]: 2025-05-27 03:24:38.114 [INFO][4305] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344-0-0-e-876c439243' May 27 03:24:38.183259 containerd[1560]: 2025-05-27 03:24:38.121 [INFO][4305] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.bf600c6bf6c300fd986e2592eaaea620bc22e474bb01b4ce18013673794d700b" host="ci-4344-0-0-e-876c439243" May 27 03:24:38.183259 containerd[1560]: 2025-05-27 03:24:38.127 [INFO][4305] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344-0-0-e-876c439243" May 27 03:24:38.183259 containerd[1560]: 2025-05-27 03:24:38.131 [INFO][4305] ipam/ipam.go 511: Trying affinity for 192.168.93.64/26 host="ci-4344-0-0-e-876c439243" May 27 03:24:38.183259 containerd[1560]: 2025-05-27 03:24:38.134 [INFO][4305] ipam/ipam.go 158: Attempting to load block cidr=192.168.93.64/26 host="ci-4344-0-0-e-876c439243" May 27 03:24:38.183259 containerd[1560]: 2025-05-27 03:24:38.137 [INFO][4305] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.93.64/26 host="ci-4344-0-0-e-876c439243" May 27 03:24:38.184182 containerd[1560]: 2025-05-27 03:24:38.137 [INFO][4305] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.93.64/26 handle="k8s-pod-network.bf600c6bf6c300fd986e2592eaaea620bc22e474bb01b4ce18013673794d700b" host="ci-4344-0-0-e-876c439243" May 27 03:24:38.184182 containerd[1560]: 2025-05-27 03:24:38.138 [INFO][4305] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.bf600c6bf6c300fd986e2592eaaea620bc22e474bb01b4ce18013673794d700b May 27 03:24:38.184182 containerd[1560]: 2025-05-27 03:24:38.143 [INFO][4305] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.93.64/26 handle="k8s-pod-network.bf600c6bf6c300fd986e2592eaaea620bc22e474bb01b4ce18013673794d700b" host="ci-4344-0-0-e-876c439243" May 27 03:24:38.184182 containerd[1560]: 2025-05-27 03:24:38.149 [INFO][4305] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.93.66/26] block=192.168.93.64/26 handle="k8s-pod-network.bf600c6bf6c300fd986e2592eaaea620bc22e474bb01b4ce18013673794d700b" host="ci-4344-0-0-e-876c439243" May 27 03:24:38.184182 containerd[1560]: 2025-05-27 03:24:38.150 [INFO][4305] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.93.66/26] handle="k8s-pod-network.bf600c6bf6c300fd986e2592eaaea620bc22e474bb01b4ce18013673794d700b" host="ci-4344-0-0-e-876c439243" May 27 03:24:38.184182 containerd[1560]: 2025-05-27 03:24:38.150 [INFO][4305] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 27 03:24:38.184182 containerd[1560]: 2025-05-27 03:24:38.150 [INFO][4305] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.93.66/26] IPv6=[] ContainerID="bf600c6bf6c300fd986e2592eaaea620bc22e474bb01b4ce18013673794d700b" HandleID="k8s-pod-network.bf600c6bf6c300fd986e2592eaaea620bc22e474bb01b4ce18013673794d700b" Workload="ci--4344--0--0--e--876c439243-k8s-calico--kube--controllers--597cd4d468--l9brq-eth0" May 27 03:24:38.186541 containerd[1560]: 2025-05-27 03:24:38.156 [INFO][4292] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bf600c6bf6c300fd986e2592eaaea620bc22e474bb01b4ce18013673794d700b" Namespace="calico-system" Pod="calico-kube-controllers-597cd4d468-l9brq" WorkloadEndpoint="ci--4344--0--0--e--876c439243-k8s-calico--kube--controllers--597cd4d468--l9brq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344--0--0--e--876c439243-k8s-calico--kube--controllers--597cd4d468--l9brq-eth0", GenerateName:"calico-kube-controllers-597cd4d468-", Namespace:"calico-system", SelfLink:"", UID:"84c04d0d-989b-4366-9177-4deee6a5a097", ResourceVersion:"809", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 24, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"597cd4d468", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344-0-0-e-876c439243", ContainerID:"", Pod:"calico-kube-controllers-597cd4d468-l9brq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.93.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1c479009d06", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:24:38.186631 containerd[1560]: 2025-05-27 03:24:38.156 [INFO][4292] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.93.66/32] ContainerID="bf600c6bf6c300fd986e2592eaaea620bc22e474bb01b4ce18013673794d700b" Namespace="calico-system" Pod="calico-kube-controllers-597cd4d468-l9brq" WorkloadEndpoint="ci--4344--0--0--e--876c439243-k8s-calico--kube--controllers--597cd4d468--l9brq-eth0" May 27 03:24:38.186631 containerd[1560]: 2025-05-27 03:24:38.156 [INFO][4292] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1c479009d06 ContainerID="bf600c6bf6c300fd986e2592eaaea620bc22e474bb01b4ce18013673794d700b" Namespace="calico-system" Pod="calico-kube-controllers-597cd4d468-l9brq" WorkloadEndpoint="ci--4344--0--0--e--876c439243-k8s-calico--kube--controllers--597cd4d468--l9brq-eth0" May 27 03:24:38.186631 containerd[1560]: 2025-05-27 03:24:38.164 [INFO][4292] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="bf600c6bf6c300fd986e2592eaaea620bc22e474bb01b4ce18013673794d700b" Namespace="calico-system" Pod="calico-kube-controllers-597cd4d468-l9brq" WorkloadEndpoint="ci--4344--0--0--e--876c439243-k8s-calico--kube--controllers--597cd4d468--l9brq-eth0" May 27 03:24:38.186700 containerd[1560]: 2025-05-27 03:24:38.165 [INFO][4292] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="bf600c6bf6c300fd986e2592eaaea620bc22e474bb01b4ce18013673794d700b" Namespace="calico-system" Pod="calico-kube-controllers-597cd4d468-l9brq" WorkloadEndpoint="ci--4344--0--0--e--876c439243-k8s-calico--kube--controllers--597cd4d468--l9brq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344--0--0--e--876c439243-k8s-calico--kube--controllers--597cd4d468--l9brq-eth0", GenerateName:"calico-kube-controllers-597cd4d468-", Namespace:"calico-system", SelfLink:"", UID:"84c04d0d-989b-4366-9177-4deee6a5a097", ResourceVersion:"809", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 24, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"597cd4d468", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344-0-0-e-876c439243", ContainerID:"bf600c6bf6c300fd986e2592eaaea620bc22e474bb01b4ce18013673794d700b", Pod:"calico-kube-controllers-597cd4d468-l9brq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.93.66/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1c479009d06", MAC:"b2:25:69:ba:d0:59", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:24:38.186750 containerd[1560]: 2025-05-27 03:24:38.178 [INFO][4292] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bf600c6bf6c300fd986e2592eaaea620bc22e474bb01b4ce18013673794d700b" Namespace="calico-system" Pod="calico-kube-controllers-597cd4d468-l9brq" WorkloadEndpoint="ci--4344--0--0--e--876c439243-k8s-calico--kube--controllers--597cd4d468--l9brq-eth0" May 27 03:24:38.217980 containerd[1560]: time="2025-05-27T03:24:38.217924836Z" level=info msg="connecting to shim bf600c6bf6c300fd986e2592eaaea620bc22e474bb01b4ce18013673794d700b" address="unix:///run/containerd/s/5e18b9e20c41e6fd0433a7505ade30adc5d70abc7131ddb8ab89d390fa3357f0" namespace=k8s.io protocol=ttrpc version=3 May 27 03:24:38.252077 systemd[1]: Started cri-containerd-bf600c6bf6c300fd986e2592eaaea620bc22e474bb01b4ce18013673794d700b.scope - libcontainer container bf600c6bf6c300fd986e2592eaaea620bc22e474bb01b4ce18013673794d700b. 
May 27 03:24:38.294806 containerd[1560]: time="2025-05-27T03:24:38.294754581Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 03:24:38.296100 containerd[1560]: time="2025-05-27T03:24:38.295933994Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 27 03:24:38.296100 containerd[1560]: time="2025-05-27T03:24:38.296041075Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 27 03:24:38.296495 kubelet[2917]: E0527 03:24:38.296230 2917 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 27 03:24:38.296495 kubelet[2917]: E0527 03:24:38.296286 2917 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to 
fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 27 03:24:38.297324 kubelet[2917]: E0527 03:24:38.297185 2917 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6jzjr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,Env
From:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-555bcbc6ff-596vx_calico-system(20923581-35ae-477b-83e9-35d75acd3c66): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 03:24:38.298427 kubelet[2917]: E0527 03:24:38.298379 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-555bcbc6ff-596vx" podUID="20923581-35ae-477b-83e9-35d75acd3c66" May 27 03:24:38.334576 containerd[1560]: time="2025-05-27T03:24:38.334531449Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-597cd4d468-l9brq,Uid:84c04d0d-989b-4366-9177-4deee6a5a097,Namespace:calico-system,Attempt:0,} returns sandbox id \"bf600c6bf6c300fd986e2592eaaea620bc22e474bb01b4ce18013673794d700b\"" May 27 03:24:38.339000 containerd[1560]: time="2025-05-27T03:24:38.337733006Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\"" May 27 03:24:38.391253 containerd[1560]: time="2025-05-27T03:24:38.391185039Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ca116aa036eb736891f311278a152a7e5a51b4c41f55c788135f885583d90f6b\" id:\"9afd3093b4281c35cea99040dcd69e597c3e4c2602394868838f664eff5d5a92\" pid:4386 exit_status:1 exited_at:{seconds:1748316278 nanos:390740014}" May 27 03:24:39.021148 containerd[1560]: time="2025-05-27T03:24:39.021089919Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d4cbf9b6-76kdv,Uid:5533be32-126b-4dc3-8312-b3c9f524d817,Namespace:calico-apiserver,Attempt:0,}" May 27 03:24:39.081531 systemd-networkd[1472]: cali14d8542a8d2: Gained IPv6LL May 27 03:24:39.151803 systemd-networkd[1472]: cali3151643a486: Link UP May 27 03:24:39.152695 systemd-networkd[1472]: cali3151643a486: Gained carrier May 27 03:24:39.171729 containerd[1560]: 2025-05-27 03:24:39.072 [INFO][4428] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344--0--0--e--876c439243-k8s-calico--apiserver--5d4cbf9b6--76kdv-eth0 calico-apiserver-5d4cbf9b6- calico-apiserver 5533be32-126b-4dc3-8312-b3c9f524d817 813 0 2025-05-27 03:24:12 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5d4cbf9b6 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4344-0-0-e-876c439243 calico-apiserver-5d4cbf9b6-76kdv eth0 calico-apiserver [] [] [kns.calico-apiserver 
ksa.calico-apiserver.calico-apiserver] cali3151643a486 [] [] }} ContainerID="e2c47979994994b4d9188165ba51d83b2673c5503f35996453e95a63a45b02ee" Namespace="calico-apiserver" Pod="calico-apiserver-5d4cbf9b6-76kdv" WorkloadEndpoint="ci--4344--0--0--e--876c439243-k8s-calico--apiserver--5d4cbf9b6--76kdv-" May 27 03:24:39.171729 containerd[1560]: 2025-05-27 03:24:39.073 [INFO][4428] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e2c47979994994b4d9188165ba51d83b2673c5503f35996453e95a63a45b02ee" Namespace="calico-apiserver" Pod="calico-apiserver-5d4cbf9b6-76kdv" WorkloadEndpoint="ci--4344--0--0--e--876c439243-k8s-calico--apiserver--5d4cbf9b6--76kdv-eth0" May 27 03:24:39.171729 containerd[1560]: 2025-05-27 03:24:39.111 [INFO][4440] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e2c47979994994b4d9188165ba51d83b2673c5503f35996453e95a63a45b02ee" HandleID="k8s-pod-network.e2c47979994994b4d9188165ba51d83b2673c5503f35996453e95a63a45b02ee" Workload="ci--4344--0--0--e--876c439243-k8s-calico--apiserver--5d4cbf9b6--76kdv-eth0" May 27 03:24:39.171940 containerd[1560]: 2025-05-27 03:24:39.111 [INFO][4440] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e2c47979994994b4d9188165ba51d83b2673c5503f35996453e95a63a45b02ee" HandleID="k8s-pod-network.e2c47979994994b4d9188165ba51d83b2673c5503f35996453e95a63a45b02ee" Workload="ci--4344--0--0--e--876c439243-k8s-calico--apiserver--5d4cbf9b6--76kdv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d9020), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4344-0-0-e-876c439243", "pod":"calico-apiserver-5d4cbf9b6-76kdv", "timestamp":"2025-05-27 03:24:39.111642617 +0000 UTC"}, Hostname:"ci-4344-0-0-e-876c439243", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 03:24:39.171940 
containerd[1560]: 2025-05-27 03:24:39.111 [INFO][4440] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 03:24:39.171940 containerd[1560]: 2025-05-27 03:24:39.112 [INFO][4440] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 27 03:24:39.171940 containerd[1560]: 2025-05-27 03:24:39.112 [INFO][4440] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344-0-0-e-876c439243' May 27 03:24:39.171940 containerd[1560]: 2025-05-27 03:24:39.118 [INFO][4440] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e2c47979994994b4d9188165ba51d83b2673c5503f35996453e95a63a45b02ee" host="ci-4344-0-0-e-876c439243" May 27 03:24:39.171940 containerd[1560]: 2025-05-27 03:24:39.123 [INFO][4440] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344-0-0-e-876c439243" May 27 03:24:39.171940 containerd[1560]: 2025-05-27 03:24:39.128 [INFO][4440] ipam/ipam.go 511: Trying affinity for 192.168.93.64/26 host="ci-4344-0-0-e-876c439243" May 27 03:24:39.171940 containerd[1560]: 2025-05-27 03:24:39.130 [INFO][4440] ipam/ipam.go 158: Attempting to load block cidr=192.168.93.64/26 host="ci-4344-0-0-e-876c439243" May 27 03:24:39.171940 containerd[1560]: 2025-05-27 03:24:39.132 [INFO][4440] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.93.64/26 host="ci-4344-0-0-e-876c439243" May 27 03:24:39.172170 containerd[1560]: 2025-05-27 03:24:39.133 [INFO][4440] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.93.64/26 handle="k8s-pod-network.e2c47979994994b4d9188165ba51d83b2673c5503f35996453e95a63a45b02ee" host="ci-4344-0-0-e-876c439243" May 27 03:24:39.172170 containerd[1560]: 2025-05-27 03:24:39.135 [INFO][4440] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e2c47979994994b4d9188165ba51d83b2673c5503f35996453e95a63a45b02ee May 27 03:24:39.172170 containerd[1560]: 2025-05-27 03:24:39.140 [INFO][4440] ipam/ipam.go 1243: Writing block in order to claim IPs 
block=192.168.93.64/26 handle="k8s-pod-network.e2c47979994994b4d9188165ba51d83b2673c5503f35996453e95a63a45b02ee" host="ci-4344-0-0-e-876c439243" May 27 03:24:39.172170 containerd[1560]: 2025-05-27 03:24:39.146 [INFO][4440] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.93.67/26] block=192.168.93.64/26 handle="k8s-pod-network.e2c47979994994b4d9188165ba51d83b2673c5503f35996453e95a63a45b02ee" host="ci-4344-0-0-e-876c439243" May 27 03:24:39.172170 containerd[1560]: 2025-05-27 03:24:39.147 [INFO][4440] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.93.67/26] handle="k8s-pod-network.e2c47979994994b4d9188165ba51d83b2673c5503f35996453e95a63a45b02ee" host="ci-4344-0-0-e-876c439243" May 27 03:24:39.172170 containerd[1560]: 2025-05-27 03:24:39.147 [INFO][4440] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 27 03:24:39.172170 containerd[1560]: 2025-05-27 03:24:39.147 [INFO][4440] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.93.67/26] IPv6=[] ContainerID="e2c47979994994b4d9188165ba51d83b2673c5503f35996453e95a63a45b02ee" HandleID="k8s-pod-network.e2c47979994994b4d9188165ba51d83b2673c5503f35996453e95a63a45b02ee" Workload="ci--4344--0--0--e--876c439243-k8s-calico--apiserver--5d4cbf9b6--76kdv-eth0" May 27 03:24:39.172299 containerd[1560]: 2025-05-27 03:24:39.149 [INFO][4428] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e2c47979994994b4d9188165ba51d83b2673c5503f35996453e95a63a45b02ee" Namespace="calico-apiserver" Pod="calico-apiserver-5d4cbf9b6-76kdv" WorkloadEndpoint="ci--4344--0--0--e--876c439243-k8s-calico--apiserver--5d4cbf9b6--76kdv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344--0--0--e--876c439243-k8s-calico--apiserver--5d4cbf9b6--76kdv-eth0", GenerateName:"calico-apiserver-5d4cbf9b6-", Namespace:"calico-apiserver", SelfLink:"", UID:"5533be32-126b-4dc3-8312-b3c9f524d817", ResourceVersion:"813", 
Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 24, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d4cbf9b6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344-0-0-e-876c439243", ContainerID:"", Pod:"calico-apiserver-5d4cbf9b6-76kdv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.93.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3151643a486", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:24:39.172763 containerd[1560]: 2025-05-27 03:24:39.149 [INFO][4428] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.93.67/32] ContainerID="e2c47979994994b4d9188165ba51d83b2673c5503f35996453e95a63a45b02ee" Namespace="calico-apiserver" Pod="calico-apiserver-5d4cbf9b6-76kdv" WorkloadEndpoint="ci--4344--0--0--e--876c439243-k8s-calico--apiserver--5d4cbf9b6--76kdv-eth0" May 27 03:24:39.172763 containerd[1560]: 2025-05-27 03:24:39.149 [INFO][4428] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3151643a486 ContainerID="e2c47979994994b4d9188165ba51d83b2673c5503f35996453e95a63a45b02ee" Namespace="calico-apiserver" Pod="calico-apiserver-5d4cbf9b6-76kdv" WorkloadEndpoint="ci--4344--0--0--e--876c439243-k8s-calico--apiserver--5d4cbf9b6--76kdv-eth0" May 27 03:24:39.172763 containerd[1560]: 2025-05-27 03:24:39.152 
[INFO][4428] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e2c47979994994b4d9188165ba51d83b2673c5503f35996453e95a63a45b02ee" Namespace="calico-apiserver" Pod="calico-apiserver-5d4cbf9b6-76kdv" WorkloadEndpoint="ci--4344--0--0--e--876c439243-k8s-calico--apiserver--5d4cbf9b6--76kdv-eth0" May 27 03:24:39.172833 containerd[1560]: 2025-05-27 03:24:39.152 [INFO][4428] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e2c47979994994b4d9188165ba51d83b2673c5503f35996453e95a63a45b02ee" Namespace="calico-apiserver" Pod="calico-apiserver-5d4cbf9b6-76kdv" WorkloadEndpoint="ci--4344--0--0--e--876c439243-k8s-calico--apiserver--5d4cbf9b6--76kdv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344--0--0--e--876c439243-k8s-calico--apiserver--5d4cbf9b6--76kdv-eth0", GenerateName:"calico-apiserver-5d4cbf9b6-", Namespace:"calico-apiserver", SelfLink:"", UID:"5533be32-126b-4dc3-8312-b3c9f524d817", ResourceVersion:"813", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 24, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d4cbf9b6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344-0-0-e-876c439243", ContainerID:"e2c47979994994b4d9188165ba51d83b2673c5503f35996453e95a63a45b02ee", Pod:"calico-apiserver-5d4cbf9b6-76kdv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.93.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3151643a486", MAC:"12:d0:4a:da:8c:67", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:24:39.172886 containerd[1560]: 2025-05-27 03:24:39.164 [INFO][4428] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e2c47979994994b4d9188165ba51d83b2673c5503f35996453e95a63a45b02ee" Namespace="calico-apiserver" Pod="calico-apiserver-5d4cbf9b6-76kdv" WorkloadEndpoint="ci--4344--0--0--e--876c439243-k8s-calico--apiserver--5d4cbf9b6--76kdv-eth0" May 27 03:24:39.195058 containerd[1560]: time="2025-05-27T03:24:39.194608821Z" level=info msg="connecting to shim e2c47979994994b4d9188165ba51d83b2673c5503f35996453e95a63a45b02ee" address="unix:///run/containerd/s/61bc140460ee3dfe8fe09ae30e59f1396ecd014d7c4e0ce4c9c0dd3ea09bcbf2" namespace=k8s.io protocol=ttrpc version=3 May 27 03:24:39.219499 systemd[1]: Started cri-containerd-e2c47979994994b4d9188165ba51d83b2673c5503f35996453e95a63a45b02ee.scope - libcontainer container e2c47979994994b4d9188165ba51d83b2673c5503f35996453e95a63a45b02ee. 
May 27 03:24:39.264239 kubelet[2917]: E0527 03:24:39.264199 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-555bcbc6ff-596vx" podUID="20923581-35ae-477b-83e9-35d75acd3c66" May 27 03:24:39.274671 containerd[1560]: time="2025-05-27T03:24:39.274570078Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d4cbf9b6-76kdv,Uid:5533be32-126b-4dc3-8312-b3c9f524d817,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"e2c47979994994b4d9188165ba51d83b2673c5503f35996453e95a63a45b02ee\"" May 27 03:24:39.529615 systemd-networkd[1472]: vxlan.calico: Gained IPv6LL May 27 03:24:39.593632 systemd-networkd[1472]: cali1c479009d06: Gained IPv6LL May 27 03:24:40.021373 containerd[1560]: time="2025-05-27T03:24:40.020690147Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2khsc,Uid:16daa161-3275-4d49-9e4c-ba4748828624,Namespace:calico-system,Attempt:0,}" May 27 03:24:40.022053 containerd[1560]: time="2025-05-27T03:24:40.022015104Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d4cbf9b6-6l9vf,Uid:ca6512dc-3c30-4df1-a23d-c1010c560e07,Namespace:calico-apiserver,Attempt:0,}" May 27 03:24:40.201010 systemd-networkd[1472]: cali32fbb3f85c7: Link UP May 27 03:24:40.201987 systemd-networkd[1472]: cali32fbb3f85c7: Gained carrier May 27 03:24:40.219618 containerd[1560]: 2025-05-27 03:24:40.121 [INFO][4509] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344--0--0--e--876c439243-k8s-calico--apiserver--5d4cbf9b6--6l9vf-eth0 calico-apiserver-5d4cbf9b6- calico-apiserver ca6512dc-3c30-4df1-a23d-c1010c560e07 810 0 
2025-05-27 03:24:12 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5d4cbf9b6 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4344-0-0-e-876c439243 calico-apiserver-5d4cbf9b6-6l9vf eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali32fbb3f85c7 [] [] }} ContainerID="08e05df50c99994a57a3f1d3ca68d0faeac4047e6e21792ba52dd1f7c337fe63" Namespace="calico-apiserver" Pod="calico-apiserver-5d4cbf9b6-6l9vf" WorkloadEndpoint="ci--4344--0--0--e--876c439243-k8s-calico--apiserver--5d4cbf9b6--6l9vf-" May 27 03:24:40.219618 containerd[1560]: 2025-05-27 03:24:40.121 [INFO][4509] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="08e05df50c99994a57a3f1d3ca68d0faeac4047e6e21792ba52dd1f7c337fe63" Namespace="calico-apiserver" Pod="calico-apiserver-5d4cbf9b6-6l9vf" WorkloadEndpoint="ci--4344--0--0--e--876c439243-k8s-calico--apiserver--5d4cbf9b6--6l9vf-eth0" May 27 03:24:40.219618 containerd[1560]: 2025-05-27 03:24:40.162 [INFO][4528] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="08e05df50c99994a57a3f1d3ca68d0faeac4047e6e21792ba52dd1f7c337fe63" HandleID="k8s-pod-network.08e05df50c99994a57a3f1d3ca68d0faeac4047e6e21792ba52dd1f7c337fe63" Workload="ci--4344--0--0--e--876c439243-k8s-calico--apiserver--5d4cbf9b6--6l9vf-eth0" May 27 03:24:40.219845 containerd[1560]: 2025-05-27 03:24:40.162 [INFO][4528] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="08e05df50c99994a57a3f1d3ca68d0faeac4047e6e21792ba52dd1f7c337fe63" HandleID="k8s-pod-network.08e05df50c99994a57a3f1d3ca68d0faeac4047e6e21792ba52dd1f7c337fe63" Workload="ci--4344--0--0--e--876c439243-k8s-calico--apiserver--5d4cbf9b6--6l9vf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d9020), Attrs:map[string]string{"namespace":"calico-apiserver", 
"node":"ci-4344-0-0-e-876c439243", "pod":"calico-apiserver-5d4cbf9b6-6l9vf", "timestamp":"2025-05-27 03:24:40.162547046 +0000 UTC"}, Hostname:"ci-4344-0-0-e-876c439243", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 03:24:40.219845 containerd[1560]: 2025-05-27 03:24:40.162 [INFO][4528] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 03:24:40.219845 containerd[1560]: 2025-05-27 03:24:40.163 [INFO][4528] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 27 03:24:40.219845 containerd[1560]: 2025-05-27 03:24:40.163 [INFO][4528] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344-0-0-e-876c439243' May 27 03:24:40.219845 containerd[1560]: 2025-05-27 03:24:40.168 [INFO][4528] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.08e05df50c99994a57a3f1d3ca68d0faeac4047e6e21792ba52dd1f7c337fe63" host="ci-4344-0-0-e-876c439243" May 27 03:24:40.219845 containerd[1560]: 2025-05-27 03:24:40.173 [INFO][4528] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344-0-0-e-876c439243" May 27 03:24:40.219845 containerd[1560]: 2025-05-27 03:24:40.177 [INFO][4528] ipam/ipam.go 511: Trying affinity for 192.168.93.64/26 host="ci-4344-0-0-e-876c439243" May 27 03:24:40.219845 containerd[1560]: 2025-05-27 03:24:40.178 [INFO][4528] ipam/ipam.go 158: Attempting to load block cidr=192.168.93.64/26 host="ci-4344-0-0-e-876c439243" May 27 03:24:40.219845 containerd[1560]: 2025-05-27 03:24:40.182 [INFO][4528] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.93.64/26 host="ci-4344-0-0-e-876c439243" May 27 03:24:40.220041 containerd[1560]: 2025-05-27 03:24:40.182 [INFO][4528] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.93.64/26 
handle="k8s-pod-network.08e05df50c99994a57a3f1d3ca68d0faeac4047e6e21792ba52dd1f7c337fe63" host="ci-4344-0-0-e-876c439243" May 27 03:24:40.220041 containerd[1560]: 2025-05-27 03:24:40.183 [INFO][4528] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.08e05df50c99994a57a3f1d3ca68d0faeac4047e6e21792ba52dd1f7c337fe63 May 27 03:24:40.220041 containerd[1560]: 2025-05-27 03:24:40.187 [INFO][4528] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.93.64/26 handle="k8s-pod-network.08e05df50c99994a57a3f1d3ca68d0faeac4047e6e21792ba52dd1f7c337fe63" host="ci-4344-0-0-e-876c439243" May 27 03:24:40.220041 containerd[1560]: 2025-05-27 03:24:40.192 [INFO][4528] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.93.68/26] block=192.168.93.64/26 handle="k8s-pod-network.08e05df50c99994a57a3f1d3ca68d0faeac4047e6e21792ba52dd1f7c337fe63" host="ci-4344-0-0-e-876c439243" May 27 03:24:40.220041 containerd[1560]: 2025-05-27 03:24:40.192 [INFO][4528] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.93.68/26] handle="k8s-pod-network.08e05df50c99994a57a3f1d3ca68d0faeac4047e6e21792ba52dd1f7c337fe63" host="ci-4344-0-0-e-876c439243" May 27 03:24:40.220041 containerd[1560]: 2025-05-27 03:24:40.192 [INFO][4528] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 27 03:24:40.220041 containerd[1560]: 2025-05-27 03:24:40.192 [INFO][4528] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.93.68/26] IPv6=[] ContainerID="08e05df50c99994a57a3f1d3ca68d0faeac4047e6e21792ba52dd1f7c337fe63" HandleID="k8s-pod-network.08e05df50c99994a57a3f1d3ca68d0faeac4047e6e21792ba52dd1f7c337fe63" Workload="ci--4344--0--0--e--876c439243-k8s-calico--apiserver--5d4cbf9b6--6l9vf-eth0" May 27 03:24:40.221525 containerd[1560]: 2025-05-27 03:24:40.194 [INFO][4509] cni-plugin/k8s.go 418: Populated endpoint ContainerID="08e05df50c99994a57a3f1d3ca68d0faeac4047e6e21792ba52dd1f7c337fe63" Namespace="calico-apiserver" Pod="calico-apiserver-5d4cbf9b6-6l9vf" WorkloadEndpoint="ci--4344--0--0--e--876c439243-k8s-calico--apiserver--5d4cbf9b6--6l9vf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344--0--0--e--876c439243-k8s-calico--apiserver--5d4cbf9b6--6l9vf-eth0", GenerateName:"calico-apiserver-5d4cbf9b6-", Namespace:"calico-apiserver", SelfLink:"", UID:"ca6512dc-3c30-4df1-a23d-c1010c560e07", ResourceVersion:"810", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 24, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d4cbf9b6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344-0-0-e-876c439243", ContainerID:"", Pod:"calico-apiserver-5d4cbf9b6-6l9vf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.93.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali32fbb3f85c7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:24:40.221589 containerd[1560]: 2025-05-27 03:24:40.194 [INFO][4509] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.93.68/32] ContainerID="08e05df50c99994a57a3f1d3ca68d0faeac4047e6e21792ba52dd1f7c337fe63" Namespace="calico-apiserver" Pod="calico-apiserver-5d4cbf9b6-6l9vf" WorkloadEndpoint="ci--4344--0--0--e--876c439243-k8s-calico--apiserver--5d4cbf9b6--6l9vf-eth0" May 27 03:24:40.221589 containerd[1560]: 2025-05-27 03:24:40.194 [INFO][4509] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali32fbb3f85c7 ContainerID="08e05df50c99994a57a3f1d3ca68d0faeac4047e6e21792ba52dd1f7c337fe63" Namespace="calico-apiserver" Pod="calico-apiserver-5d4cbf9b6-6l9vf" WorkloadEndpoint="ci--4344--0--0--e--876c439243-k8s-calico--apiserver--5d4cbf9b6--6l9vf-eth0" May 27 03:24:40.221589 containerd[1560]: 2025-05-27 03:24:40.203 [INFO][4509] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="08e05df50c99994a57a3f1d3ca68d0faeac4047e6e21792ba52dd1f7c337fe63" Namespace="calico-apiserver" Pod="calico-apiserver-5d4cbf9b6-6l9vf" WorkloadEndpoint="ci--4344--0--0--e--876c439243-k8s-calico--apiserver--5d4cbf9b6--6l9vf-eth0" May 27 03:24:40.221646 containerd[1560]: 2025-05-27 03:24:40.204 [INFO][4509] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="08e05df50c99994a57a3f1d3ca68d0faeac4047e6e21792ba52dd1f7c337fe63" Namespace="calico-apiserver" Pod="calico-apiserver-5d4cbf9b6-6l9vf" WorkloadEndpoint="ci--4344--0--0--e--876c439243-k8s-calico--apiserver--5d4cbf9b6--6l9vf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344--0--0--e--876c439243-k8s-calico--apiserver--5d4cbf9b6--6l9vf-eth0", GenerateName:"calico-apiserver-5d4cbf9b6-", Namespace:"calico-apiserver", SelfLink:"", UID:"ca6512dc-3c30-4df1-a23d-c1010c560e07", ResourceVersion:"810", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 24, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d4cbf9b6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344-0-0-e-876c439243", ContainerID:"08e05df50c99994a57a3f1d3ca68d0faeac4047e6e21792ba52dd1f7c337fe63", Pod:"calico-apiserver-5d4cbf9b6-6l9vf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.93.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali32fbb3f85c7", MAC:"7a:41:93:e8:5c:77", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:24:40.221695 containerd[1560]: 2025-05-27 03:24:40.217 [INFO][4509] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="08e05df50c99994a57a3f1d3ca68d0faeac4047e6e21792ba52dd1f7c337fe63" Namespace="calico-apiserver" Pod="calico-apiserver-5d4cbf9b6-6l9vf" WorkloadEndpoint="ci--4344--0--0--e--876c439243-k8s-calico--apiserver--5d4cbf9b6--6l9vf-eth0" May 27 03:24:40.254688 containerd[1560]: time="2025-05-27T03:24:40.254637397Z" level=info 
msg="connecting to shim 08e05df50c99994a57a3f1d3ca68d0faeac4047e6e21792ba52dd1f7c337fe63" address="unix:///run/containerd/s/27a5ec83cfb57f1ab140de5804ebd1540a9b9bead115dc2d7ef20128a0a0bd9e" namespace=k8s.io protocol=ttrpc version=3 May 27 03:24:40.286538 systemd[1]: Started cri-containerd-08e05df50c99994a57a3f1d3ca68d0faeac4047e6e21792ba52dd1f7c337fe63.scope - libcontainer container 08e05df50c99994a57a3f1d3ca68d0faeac4047e6e21792ba52dd1f7c337fe63. May 27 03:24:40.316966 systemd-networkd[1472]: cali5b64d5a32a0: Link UP May 27 03:24:40.317933 systemd-networkd[1472]: cali5b64d5a32a0: Gained carrier May 27 03:24:40.340498 containerd[1560]: 2025-05-27 03:24:40.121 [INFO][4504] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344--0--0--e--876c439243-k8s-csi--node--driver--2khsc-eth0 csi-node-driver- calico-system 16daa161-3275-4d49-9e4c-ba4748828624 685 0 2025-05-27 03:24:15 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:68bf44dd5 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4344-0-0-e-876c439243 csi-node-driver-2khsc eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali5b64d5a32a0 [] [] }} ContainerID="c93ea7c16990791bf40083bdf097574eef712b5c937b00194110218c7ad799a9" Namespace="calico-system" Pod="csi-node-driver-2khsc" WorkloadEndpoint="ci--4344--0--0--e--876c439243-k8s-csi--node--driver--2khsc-" May 27 03:24:40.340498 containerd[1560]: 2025-05-27 03:24:40.121 [INFO][4504] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c93ea7c16990791bf40083bdf097574eef712b5c937b00194110218c7ad799a9" Namespace="calico-system" Pod="csi-node-driver-2khsc" WorkloadEndpoint="ci--4344--0--0--e--876c439243-k8s-csi--node--driver--2khsc-eth0" May 27 03:24:40.340498 
containerd[1560]: 2025-05-27 03:24:40.167 [INFO][4533] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c93ea7c16990791bf40083bdf097574eef712b5c937b00194110218c7ad799a9" HandleID="k8s-pod-network.c93ea7c16990791bf40083bdf097574eef712b5c937b00194110218c7ad799a9" Workload="ci--4344--0--0--e--876c439243-k8s-csi--node--driver--2khsc-eth0" May 27 03:24:40.340746 containerd[1560]: 2025-05-27 03:24:40.167 [INFO][4533] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c93ea7c16990791bf40083bdf097574eef712b5c937b00194110218c7ad799a9" HandleID="k8s-pod-network.c93ea7c16990791bf40083bdf097574eef712b5c937b00194110218c7ad799a9" Workload="ci--4344--0--0--e--876c439243-k8s-csi--node--driver--2khsc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000233700), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4344-0-0-e-876c439243", "pod":"csi-node-driver-2khsc", "timestamp":"2025-05-27 03:24:40.167317186 +0000 UTC"}, Hostname:"ci-4344-0-0-e-876c439243", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 03:24:40.340746 containerd[1560]: 2025-05-27 03:24:40.167 [INFO][4533] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 03:24:40.340746 containerd[1560]: 2025-05-27 03:24:40.192 [INFO][4533] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 27 03:24:40.340746 containerd[1560]: 2025-05-27 03:24:40.193 [INFO][4533] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344-0-0-e-876c439243' May 27 03:24:40.340746 containerd[1560]: 2025-05-27 03:24:40.270 [INFO][4533] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c93ea7c16990791bf40083bdf097574eef712b5c937b00194110218c7ad799a9" host="ci-4344-0-0-e-876c439243" May 27 03:24:40.340746 containerd[1560]: 2025-05-27 03:24:40.277 [INFO][4533] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344-0-0-e-876c439243" May 27 03:24:40.340746 containerd[1560]: 2025-05-27 03:24:40.283 [INFO][4533] ipam/ipam.go 511: Trying affinity for 192.168.93.64/26 host="ci-4344-0-0-e-876c439243" May 27 03:24:40.340746 containerd[1560]: 2025-05-27 03:24:40.286 [INFO][4533] ipam/ipam.go 158: Attempting to load block cidr=192.168.93.64/26 host="ci-4344-0-0-e-876c439243" May 27 03:24:40.340746 containerd[1560]: 2025-05-27 03:24:40.289 [INFO][4533] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.93.64/26 host="ci-4344-0-0-e-876c439243" May 27 03:24:40.341014 containerd[1560]: 2025-05-27 03:24:40.290 [INFO][4533] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.93.64/26 handle="k8s-pod-network.c93ea7c16990791bf40083bdf097574eef712b5c937b00194110218c7ad799a9" host="ci-4344-0-0-e-876c439243" May 27 03:24:40.341014 containerd[1560]: 2025-05-27 03:24:40.291 [INFO][4533] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c93ea7c16990791bf40083bdf097574eef712b5c937b00194110218c7ad799a9 May 27 03:24:40.341014 containerd[1560]: 2025-05-27 03:24:40.301 [INFO][4533] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.93.64/26 handle="k8s-pod-network.c93ea7c16990791bf40083bdf097574eef712b5c937b00194110218c7ad799a9" host="ci-4344-0-0-e-876c439243" May 27 03:24:40.341014 containerd[1560]: 2025-05-27 03:24:40.310 [INFO][4533] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.93.69/26] block=192.168.93.64/26 handle="k8s-pod-network.c93ea7c16990791bf40083bdf097574eef712b5c937b00194110218c7ad799a9" host="ci-4344-0-0-e-876c439243" May 27 03:24:40.341014 containerd[1560]: 2025-05-27 03:24:40.310 [INFO][4533] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.93.69/26] handle="k8s-pod-network.c93ea7c16990791bf40083bdf097574eef712b5c937b00194110218c7ad799a9" host="ci-4344-0-0-e-876c439243" May 27 03:24:40.341014 containerd[1560]: 2025-05-27 03:24:40.310 [INFO][4533] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 27 03:24:40.341014 containerd[1560]: 2025-05-27 03:24:40.310 [INFO][4533] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.93.69/26] IPv6=[] ContainerID="c93ea7c16990791bf40083bdf097574eef712b5c937b00194110218c7ad799a9" HandleID="k8s-pod-network.c93ea7c16990791bf40083bdf097574eef712b5c937b00194110218c7ad799a9" Workload="ci--4344--0--0--e--876c439243-k8s-csi--node--driver--2khsc-eth0" May 27 03:24:40.341198 containerd[1560]: 2025-05-27 03:24:40.313 [INFO][4504] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c93ea7c16990791bf40083bdf097574eef712b5c937b00194110218c7ad799a9" Namespace="calico-system" Pod="csi-node-driver-2khsc" WorkloadEndpoint="ci--4344--0--0--e--876c439243-k8s-csi--node--driver--2khsc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344--0--0--e--876c439243-k8s-csi--node--driver--2khsc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"16daa161-3275-4d49-9e4c-ba4748828624", ResourceVersion:"685", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 24, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"68bf44dd5", "k8s-app":"csi-node-driver", 
"name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344-0-0-e-876c439243", ContainerID:"", Pod:"csi-node-driver-2khsc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.93.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5b64d5a32a0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:24:40.341275 containerd[1560]: 2025-05-27 03:24:40.313 [INFO][4504] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.93.69/32] ContainerID="c93ea7c16990791bf40083bdf097574eef712b5c937b00194110218c7ad799a9" Namespace="calico-system" Pod="csi-node-driver-2khsc" WorkloadEndpoint="ci--4344--0--0--e--876c439243-k8s-csi--node--driver--2khsc-eth0" May 27 03:24:40.341275 containerd[1560]: 2025-05-27 03:24:40.314 [INFO][4504] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5b64d5a32a0 ContainerID="c93ea7c16990791bf40083bdf097574eef712b5c937b00194110218c7ad799a9" Namespace="calico-system" Pod="csi-node-driver-2khsc" WorkloadEndpoint="ci--4344--0--0--e--876c439243-k8s-csi--node--driver--2khsc-eth0" May 27 03:24:40.341275 containerd[1560]: 2025-05-27 03:24:40.315 [INFO][4504] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c93ea7c16990791bf40083bdf097574eef712b5c937b00194110218c7ad799a9" Namespace="calico-system" Pod="csi-node-driver-2khsc" WorkloadEndpoint="ci--4344--0--0--e--876c439243-k8s-csi--node--driver--2khsc-eth0" May 27 03:24:40.341397 
containerd[1560]: 2025-05-27 03:24:40.316 [INFO][4504] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c93ea7c16990791bf40083bdf097574eef712b5c937b00194110218c7ad799a9" Namespace="calico-system" Pod="csi-node-driver-2khsc" WorkloadEndpoint="ci--4344--0--0--e--876c439243-k8s-csi--node--driver--2khsc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344--0--0--e--876c439243-k8s-csi--node--driver--2khsc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"16daa161-3275-4d49-9e4c-ba4748828624", ResourceVersion:"685", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 24, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"68bf44dd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344-0-0-e-876c439243", ContainerID:"c93ea7c16990791bf40083bdf097574eef712b5c937b00194110218c7ad799a9", Pod:"csi-node-driver-2khsc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.93.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5b64d5a32a0", MAC:"96:9d:be:63:70:39", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:24:40.341472 containerd[1560]: 
2025-05-27 03:24:40.330 [INFO][4504] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c93ea7c16990791bf40083bdf097574eef712b5c937b00194110218c7ad799a9" Namespace="calico-system" Pod="csi-node-driver-2khsc" WorkloadEndpoint="ci--4344--0--0--e--876c439243-k8s-csi--node--driver--2khsc-eth0" May 27 03:24:40.379509 containerd[1560]: time="2025-05-27T03:24:40.379283983Z" level=info msg="connecting to shim c93ea7c16990791bf40083bdf097574eef712b5c937b00194110218c7ad799a9" address="unix:///run/containerd/s/de714204548721429c6a6604b32b42ccba45ffce5e156987f9a49c0c3c1a42e9" namespace=k8s.io protocol=ttrpc version=3 May 27 03:24:40.380440 containerd[1560]: time="2025-05-27T03:24:40.380035243Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d4cbf9b6-6l9vf,Uid:ca6512dc-3c30-4df1-a23d-c1010c560e07,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"08e05df50c99994a57a3f1d3ca68d0faeac4047e6e21792ba52dd1f7c337fe63\"" May 27 03:24:40.403485 systemd[1]: Started cri-containerd-c93ea7c16990791bf40083bdf097574eef712b5c937b00194110218c7ad799a9.scope - libcontainer container c93ea7c16990791bf40083bdf097574eef712b5c937b00194110218c7ad799a9. 
May 27 03:24:40.425956 systemd-networkd[1472]: cali3151643a486: Gained IPv6LL May 27 03:24:40.438015 containerd[1560]: time="2025-05-27T03:24:40.437981118Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2khsc,Uid:16daa161-3275-4d49-9e4c-ba4748828624,Namespace:calico-system,Attempt:0,} returns sandbox id \"c93ea7c16990791bf40083bdf097574eef712b5c937b00194110218c7ad799a9\"" May 27 03:24:41.020386 containerd[1560]: time="2025-05-27T03:24:41.020147259Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-8f77d7b6c-xwqrr,Uid:9c59cb9e-2eb7-4e20-b889-53cd0bb9e4ed,Namespace:calico-system,Attempt:0,}" May 27 03:24:41.020386 containerd[1560]: time="2025-05-27T03:24:41.020210237Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-qn2bv,Uid:1f041155-bcd1-48dc-8d60-b341452c38cc,Namespace:kube-system,Attempt:0,}" May 27 03:24:41.172572 systemd-networkd[1472]: cali86d10550381: Link UP May 27 03:24:41.175128 systemd-networkd[1472]: cali86d10550381: Gained carrier May 27 03:24:41.203099 containerd[1560]: 2025-05-27 03:24:41.075 [INFO][4655] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344--0--0--e--876c439243-k8s-coredns--7c65d6cfc9--qn2bv-eth0 coredns-7c65d6cfc9- kube-system 1f041155-bcd1-48dc-8d60-b341452c38cc 811 0 2025-05-27 03:24:00 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4344-0-0-e-876c439243 coredns-7c65d6cfc9-qn2bv eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali86d10550381 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="4c52555b352e777aa279f0596dccf56c8dc75ccfd97f66b5486fecae7b784fa7" Namespace="kube-system" Pod="coredns-7c65d6cfc9-qn2bv" WorkloadEndpoint="ci--4344--0--0--e--876c439243-k8s-coredns--7c65d6cfc9--qn2bv-" May 27 
03:24:41.203099 containerd[1560]: 2025-05-27 03:24:41.075 [INFO][4655] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4c52555b352e777aa279f0596dccf56c8dc75ccfd97f66b5486fecae7b784fa7" Namespace="kube-system" Pod="coredns-7c65d6cfc9-qn2bv" WorkloadEndpoint="ci--4344--0--0--e--876c439243-k8s-coredns--7c65d6cfc9--qn2bv-eth0" May 27 03:24:41.203099 containerd[1560]: 2025-05-27 03:24:41.115 [INFO][4675] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4c52555b352e777aa279f0596dccf56c8dc75ccfd97f66b5486fecae7b784fa7" HandleID="k8s-pod-network.4c52555b352e777aa279f0596dccf56c8dc75ccfd97f66b5486fecae7b784fa7" Workload="ci--4344--0--0--e--876c439243-k8s-coredns--7c65d6cfc9--qn2bv-eth0" May 27 03:24:41.203534 containerd[1560]: 2025-05-27 03:24:41.115 [INFO][4675] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4c52555b352e777aa279f0596dccf56c8dc75ccfd97f66b5486fecae7b784fa7" HandleID="k8s-pod-network.4c52555b352e777aa279f0596dccf56c8dc75ccfd97f66b5486fecae7b784fa7" Workload="ci--4344--0--0--e--876c439243-k8s-coredns--7c65d6cfc9--qn2bv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000233850), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4344-0-0-e-876c439243", "pod":"coredns-7c65d6cfc9-qn2bv", "timestamp":"2025-05-27 03:24:41.115346691 +0000 UTC"}, Hostname:"ci-4344-0-0-e-876c439243", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 03:24:41.203534 containerd[1560]: 2025-05-27 03:24:41.116 [INFO][4675] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 03:24:41.203534 containerd[1560]: 2025-05-27 03:24:41.116 [INFO][4675] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 27 03:24:41.203534 containerd[1560]: 2025-05-27 03:24:41.116 [INFO][4675] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344-0-0-e-876c439243' May 27 03:24:41.203534 containerd[1560]: 2025-05-27 03:24:41.127 [INFO][4675] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4c52555b352e777aa279f0596dccf56c8dc75ccfd97f66b5486fecae7b784fa7" host="ci-4344-0-0-e-876c439243" May 27 03:24:41.203534 containerd[1560]: 2025-05-27 03:24:41.133 [INFO][4675] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344-0-0-e-876c439243" May 27 03:24:41.203534 containerd[1560]: 2025-05-27 03:24:41.139 [INFO][4675] ipam/ipam.go 511: Trying affinity for 192.168.93.64/26 host="ci-4344-0-0-e-876c439243" May 27 03:24:41.203534 containerd[1560]: 2025-05-27 03:24:41.141 [INFO][4675] ipam/ipam.go 158: Attempting to load block cidr=192.168.93.64/26 host="ci-4344-0-0-e-876c439243" May 27 03:24:41.203534 containerd[1560]: 2025-05-27 03:24:41.144 [INFO][4675] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.93.64/26 host="ci-4344-0-0-e-876c439243" May 27 03:24:41.203710 containerd[1560]: 2025-05-27 03:24:41.144 [INFO][4675] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.93.64/26 handle="k8s-pod-network.4c52555b352e777aa279f0596dccf56c8dc75ccfd97f66b5486fecae7b784fa7" host="ci-4344-0-0-e-876c439243" May 27 03:24:41.203710 containerd[1560]: 2025-05-27 03:24:41.146 [INFO][4675] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4c52555b352e777aa279f0596dccf56c8dc75ccfd97f66b5486fecae7b784fa7 May 27 03:24:41.203710 containerd[1560]: 2025-05-27 03:24:41.150 [INFO][4675] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.93.64/26 handle="k8s-pod-network.4c52555b352e777aa279f0596dccf56c8dc75ccfd97f66b5486fecae7b784fa7" host="ci-4344-0-0-e-876c439243" May 27 03:24:41.203710 containerd[1560]: 2025-05-27 03:24:41.159 [INFO][4675] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.93.70/26] block=192.168.93.64/26 handle="k8s-pod-network.4c52555b352e777aa279f0596dccf56c8dc75ccfd97f66b5486fecae7b784fa7" host="ci-4344-0-0-e-876c439243" May 27 03:24:41.203710 containerd[1560]: 2025-05-27 03:24:41.159 [INFO][4675] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.93.70/26] handle="k8s-pod-network.4c52555b352e777aa279f0596dccf56c8dc75ccfd97f66b5486fecae7b784fa7" host="ci-4344-0-0-e-876c439243" May 27 03:24:41.203710 containerd[1560]: 2025-05-27 03:24:41.159 [INFO][4675] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 27 03:24:41.203710 containerd[1560]: 2025-05-27 03:24:41.159 [INFO][4675] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.93.70/26] IPv6=[] ContainerID="4c52555b352e777aa279f0596dccf56c8dc75ccfd97f66b5486fecae7b784fa7" HandleID="k8s-pod-network.4c52555b352e777aa279f0596dccf56c8dc75ccfd97f66b5486fecae7b784fa7" Workload="ci--4344--0--0--e--876c439243-k8s-coredns--7c65d6cfc9--qn2bv-eth0" May 27 03:24:41.203837 containerd[1560]: 2025-05-27 03:24:41.163 [INFO][4655] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4c52555b352e777aa279f0596dccf56c8dc75ccfd97f66b5486fecae7b784fa7" Namespace="kube-system" Pod="coredns-7c65d6cfc9-qn2bv" WorkloadEndpoint="ci--4344--0--0--e--876c439243-k8s-coredns--7c65d6cfc9--qn2bv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344--0--0--e--876c439243-k8s-coredns--7c65d6cfc9--qn2bv-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"1f041155-bcd1-48dc-8d60-b341452c38cc", ResourceVersion:"811", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 24, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344-0-0-e-876c439243", ContainerID:"", Pod:"coredns-7c65d6cfc9-qn2bv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.93.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali86d10550381", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:24:41.203837 containerd[1560]: 2025-05-27 03:24:41.163 [INFO][4655] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.93.70/32] ContainerID="4c52555b352e777aa279f0596dccf56c8dc75ccfd97f66b5486fecae7b784fa7" Namespace="kube-system" Pod="coredns-7c65d6cfc9-qn2bv" WorkloadEndpoint="ci--4344--0--0--e--876c439243-k8s-coredns--7c65d6cfc9--qn2bv-eth0" May 27 03:24:41.203837 containerd[1560]: 2025-05-27 03:24:41.163 [INFO][4655] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali86d10550381 ContainerID="4c52555b352e777aa279f0596dccf56c8dc75ccfd97f66b5486fecae7b784fa7" Namespace="kube-system" Pod="coredns-7c65d6cfc9-qn2bv" WorkloadEndpoint="ci--4344--0--0--e--876c439243-k8s-coredns--7c65d6cfc9--qn2bv-eth0" May 27 03:24:41.203837 containerd[1560]: 2025-05-27 03:24:41.177 [INFO][4655] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4c52555b352e777aa279f0596dccf56c8dc75ccfd97f66b5486fecae7b784fa7" Namespace="kube-system" Pod="coredns-7c65d6cfc9-qn2bv" WorkloadEndpoint="ci--4344--0--0--e--876c439243-k8s-coredns--7c65d6cfc9--qn2bv-eth0" May 27 03:24:41.203837 containerd[1560]: 2025-05-27 03:24:41.179 [INFO][4655] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4c52555b352e777aa279f0596dccf56c8dc75ccfd97f66b5486fecae7b784fa7" Namespace="kube-system" Pod="coredns-7c65d6cfc9-qn2bv" WorkloadEndpoint="ci--4344--0--0--e--876c439243-k8s-coredns--7c65d6cfc9--qn2bv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344--0--0--e--876c439243-k8s-coredns--7c65d6cfc9--qn2bv-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"1f041155-bcd1-48dc-8d60-b341452c38cc", ResourceVersion:"811", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 24, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344-0-0-e-876c439243", ContainerID:"4c52555b352e777aa279f0596dccf56c8dc75ccfd97f66b5486fecae7b784fa7", Pod:"coredns-7c65d6cfc9-qn2bv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.93.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali86d10550381", 
MAC:"22:21:5e:f8:99:e4", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:24:41.203837 containerd[1560]: 2025-05-27 03:24:41.200 [INFO][4655] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4c52555b352e777aa279f0596dccf56c8dc75ccfd97f66b5486fecae7b784fa7" Namespace="kube-system" Pod="coredns-7c65d6cfc9-qn2bv" WorkloadEndpoint="ci--4344--0--0--e--876c439243-k8s-coredns--7c65d6cfc9--qn2bv-eth0" May 27 03:24:41.284484 containerd[1560]: time="2025-05-27T03:24:41.284081044Z" level=info msg="connecting to shim 4c52555b352e777aa279f0596dccf56c8dc75ccfd97f66b5486fecae7b784fa7" address="unix:///run/containerd/s/a111949b1c7527c0a66f230530a92d9f5e449480029c55bb56dcf784db554085" namespace=k8s.io protocol=ttrpc version=3 May 27 03:24:41.317442 systemd-networkd[1472]: cali7570fc0cf31: Link UP May 27 03:24:41.317948 systemd-networkd[1472]: cali7570fc0cf31: Gained carrier May 27 03:24:41.343161 systemd[1]: Started cri-containerd-4c52555b352e777aa279f0596dccf56c8dc75ccfd97f66b5486fecae7b784fa7.scope - libcontainer container 4c52555b352e777aa279f0596dccf56c8dc75ccfd97f66b5486fecae7b784fa7. 
May 27 03:24:41.351640 containerd[1560]: 2025-05-27 03:24:41.076 [INFO][4650] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344--0--0--e--876c439243-k8s-goldmane--8f77d7b6c--xwqrr-eth0 goldmane-8f77d7b6c- calico-system 9c59cb9e-2eb7-4e20-b889-53cd0bb9e4ed 808 0 2025-05-27 03:24:14 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:8f77d7b6c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4344-0-0-e-876c439243 goldmane-8f77d7b6c-xwqrr eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali7570fc0cf31 [] [] }} ContainerID="929a407c98c6f921318078db51fcdb057ed2bf3be9510d94d75e5600fefea35a" Namespace="calico-system" Pod="goldmane-8f77d7b6c-xwqrr" WorkloadEndpoint="ci--4344--0--0--e--876c439243-k8s-goldmane--8f77d7b6c--xwqrr-" May 27 03:24:41.351640 containerd[1560]: 2025-05-27 03:24:41.076 [INFO][4650] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="929a407c98c6f921318078db51fcdb057ed2bf3be9510d94d75e5600fefea35a" Namespace="calico-system" Pod="goldmane-8f77d7b6c-xwqrr" WorkloadEndpoint="ci--4344--0--0--e--876c439243-k8s-goldmane--8f77d7b6c--xwqrr-eth0" May 27 03:24:41.351640 containerd[1560]: 2025-05-27 03:24:41.122 [INFO][4677] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="929a407c98c6f921318078db51fcdb057ed2bf3be9510d94d75e5600fefea35a" HandleID="k8s-pod-network.929a407c98c6f921318078db51fcdb057ed2bf3be9510d94d75e5600fefea35a" Workload="ci--4344--0--0--e--876c439243-k8s-goldmane--8f77d7b6c--xwqrr-eth0" May 27 03:24:41.351640 containerd[1560]: 2025-05-27 03:24:41.122 [INFO][4677] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="929a407c98c6f921318078db51fcdb057ed2bf3be9510d94d75e5600fefea35a" HandleID="k8s-pod-network.929a407c98c6f921318078db51fcdb057ed2bf3be9510d94d75e5600fefea35a" 
Workload="ci--4344--0--0--e--876c439243-k8s-goldmane--8f77d7b6c--xwqrr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000233240), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4344-0-0-e-876c439243", "pod":"goldmane-8f77d7b6c-xwqrr", "timestamp":"2025-05-27 03:24:41.122459527 +0000 UTC"}, Hostname:"ci-4344-0-0-e-876c439243", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 03:24:41.351640 containerd[1560]: 2025-05-27 03:24:41.122 [INFO][4677] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 03:24:41.351640 containerd[1560]: 2025-05-27 03:24:41.159 [INFO][4677] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 27 03:24:41.351640 containerd[1560]: 2025-05-27 03:24:41.159 [INFO][4677] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344-0-0-e-876c439243' May 27 03:24:41.351640 containerd[1560]: 2025-05-27 03:24:41.228 [INFO][4677] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.929a407c98c6f921318078db51fcdb057ed2bf3be9510d94d75e5600fefea35a" host="ci-4344-0-0-e-876c439243" May 27 03:24:41.351640 containerd[1560]: 2025-05-27 03:24:41.243 [INFO][4677] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344-0-0-e-876c439243" May 27 03:24:41.351640 containerd[1560]: 2025-05-27 03:24:41.264 [INFO][4677] ipam/ipam.go 511: Trying affinity for 192.168.93.64/26 host="ci-4344-0-0-e-876c439243" May 27 03:24:41.351640 containerd[1560]: 2025-05-27 03:24:41.271 [INFO][4677] ipam/ipam.go 158: Attempting to load block cidr=192.168.93.64/26 host="ci-4344-0-0-e-876c439243" May 27 03:24:41.351640 containerd[1560]: 2025-05-27 03:24:41.275 [INFO][4677] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.93.64/26 host="ci-4344-0-0-e-876c439243" May 27 03:24:41.351640 
containerd[1560]: 2025-05-27 03:24:41.275 [INFO][4677] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.93.64/26 handle="k8s-pod-network.929a407c98c6f921318078db51fcdb057ed2bf3be9510d94d75e5600fefea35a" host="ci-4344-0-0-e-876c439243" May 27 03:24:41.351640 containerd[1560]: 2025-05-27 03:24:41.278 [INFO][4677] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.929a407c98c6f921318078db51fcdb057ed2bf3be9510d94d75e5600fefea35a May 27 03:24:41.351640 containerd[1560]: 2025-05-27 03:24:41.289 [INFO][4677] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.93.64/26 handle="k8s-pod-network.929a407c98c6f921318078db51fcdb057ed2bf3be9510d94d75e5600fefea35a" host="ci-4344-0-0-e-876c439243" May 27 03:24:41.351640 containerd[1560]: 2025-05-27 03:24:41.302 [INFO][4677] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.93.71/26] block=192.168.93.64/26 handle="k8s-pod-network.929a407c98c6f921318078db51fcdb057ed2bf3be9510d94d75e5600fefea35a" host="ci-4344-0-0-e-876c439243" May 27 03:24:41.351640 containerd[1560]: 2025-05-27 03:24:41.302 [INFO][4677] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.93.71/26] handle="k8s-pod-network.929a407c98c6f921318078db51fcdb057ed2bf3be9510d94d75e5600fefea35a" host="ci-4344-0-0-e-876c439243" May 27 03:24:41.351640 containerd[1560]: 2025-05-27 03:24:41.302 [INFO][4677] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 27 03:24:41.351640 containerd[1560]: 2025-05-27 03:24:41.302 [INFO][4677] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.93.71/26] IPv6=[] ContainerID="929a407c98c6f921318078db51fcdb057ed2bf3be9510d94d75e5600fefea35a" HandleID="k8s-pod-network.929a407c98c6f921318078db51fcdb057ed2bf3be9510d94d75e5600fefea35a" Workload="ci--4344--0--0--e--876c439243-k8s-goldmane--8f77d7b6c--xwqrr-eth0" May 27 03:24:41.352968 containerd[1560]: 2025-05-27 03:24:41.312 [INFO][4650] cni-plugin/k8s.go 418: Populated endpoint ContainerID="929a407c98c6f921318078db51fcdb057ed2bf3be9510d94d75e5600fefea35a" Namespace="calico-system" Pod="goldmane-8f77d7b6c-xwqrr" WorkloadEndpoint="ci--4344--0--0--e--876c439243-k8s-goldmane--8f77d7b6c--xwqrr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344--0--0--e--876c439243-k8s-goldmane--8f77d7b6c--xwqrr-eth0", GenerateName:"goldmane-8f77d7b6c-", Namespace:"calico-system", SelfLink:"", UID:"9c59cb9e-2eb7-4e20-b889-53cd0bb9e4ed", ResourceVersion:"808", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 24, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"8f77d7b6c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344-0-0-e-876c439243", ContainerID:"", Pod:"goldmane-8f77d7b6c-xwqrr", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.93.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.goldmane"}, InterfaceName:"cali7570fc0cf31", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:24:41.352968 containerd[1560]: 2025-05-27 03:24:41.312 [INFO][4650] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.93.71/32] ContainerID="929a407c98c6f921318078db51fcdb057ed2bf3be9510d94d75e5600fefea35a" Namespace="calico-system" Pod="goldmane-8f77d7b6c-xwqrr" WorkloadEndpoint="ci--4344--0--0--e--876c439243-k8s-goldmane--8f77d7b6c--xwqrr-eth0" May 27 03:24:41.352968 containerd[1560]: 2025-05-27 03:24:41.312 [INFO][4650] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7570fc0cf31 ContainerID="929a407c98c6f921318078db51fcdb057ed2bf3be9510d94d75e5600fefea35a" Namespace="calico-system" Pod="goldmane-8f77d7b6c-xwqrr" WorkloadEndpoint="ci--4344--0--0--e--876c439243-k8s-goldmane--8f77d7b6c--xwqrr-eth0" May 27 03:24:41.352968 containerd[1560]: 2025-05-27 03:24:41.319 [INFO][4650] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="929a407c98c6f921318078db51fcdb057ed2bf3be9510d94d75e5600fefea35a" Namespace="calico-system" Pod="goldmane-8f77d7b6c-xwqrr" WorkloadEndpoint="ci--4344--0--0--e--876c439243-k8s-goldmane--8f77d7b6c--xwqrr-eth0" May 27 03:24:41.352968 containerd[1560]: 2025-05-27 03:24:41.323 [INFO][4650] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="929a407c98c6f921318078db51fcdb057ed2bf3be9510d94d75e5600fefea35a" Namespace="calico-system" Pod="goldmane-8f77d7b6c-xwqrr" WorkloadEndpoint="ci--4344--0--0--e--876c439243-k8s-goldmane--8f77d7b6c--xwqrr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344--0--0--e--876c439243-k8s-goldmane--8f77d7b6c--xwqrr-eth0", GenerateName:"goldmane-8f77d7b6c-", Namespace:"calico-system", SelfLink:"", 
UID:"9c59cb9e-2eb7-4e20-b889-53cd0bb9e4ed", ResourceVersion:"808", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 24, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"8f77d7b6c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344-0-0-e-876c439243", ContainerID:"929a407c98c6f921318078db51fcdb057ed2bf3be9510d94d75e5600fefea35a", Pod:"goldmane-8f77d7b6c-xwqrr", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.93.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali7570fc0cf31", MAC:"6e:36:ae:2c:f9:89", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:24:41.352968 containerd[1560]: 2025-05-27 03:24:41.349 [INFO][4650] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="929a407c98c6f921318078db51fcdb057ed2bf3be9510d94d75e5600fefea35a" Namespace="calico-system" Pod="goldmane-8f77d7b6c-xwqrr" WorkloadEndpoint="ci--4344--0--0--e--876c439243-k8s-goldmane--8f77d7b6c--xwqrr-eth0" May 27 03:24:41.392650 containerd[1560]: time="2025-05-27T03:24:41.392589138Z" level=info msg="connecting to shim 929a407c98c6f921318078db51fcdb057ed2bf3be9510d94d75e5600fefea35a" address="unix:///run/containerd/s/8d1f12986fefbcf82029e19d2e8b767dd51be28fab471cce053328a8b0338bc0" namespace=k8s.io protocol=ttrpc version=3 May 27 03:24:41.444264 containerd[1560]: time="2025-05-27T03:24:41.444221973Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-qn2bv,Uid:1f041155-bcd1-48dc-8d60-b341452c38cc,Namespace:kube-system,Attempt:0,} returns sandbox id \"4c52555b352e777aa279f0596dccf56c8dc75ccfd97f66b5486fecae7b784fa7\"" May 27 03:24:41.449615 systemd-networkd[1472]: cali32fbb3f85c7: Gained IPv6LL May 27 03:24:41.452582 systemd[1]: Started cri-containerd-929a407c98c6f921318078db51fcdb057ed2bf3be9510d94d75e5600fefea35a.scope - libcontainer container 929a407c98c6f921318078db51fcdb057ed2bf3be9510d94d75e5600fefea35a. May 27 03:24:41.456429 containerd[1560]: time="2025-05-27T03:24:41.456367291Z" level=info msg="CreateContainer within sandbox \"4c52555b352e777aa279f0596dccf56c8dc75ccfd97f66b5486fecae7b784fa7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 27 03:24:41.506032 containerd[1560]: time="2025-05-27T03:24:41.505999311Z" level=info msg="Container 02dc94b802c0e58c670018403916dbf0478025b84cbf00931d3deaf215782fb7: CDI devices from CRI Config.CDIDevices: []" May 27 03:24:41.520250 containerd[1560]: time="2025-05-27T03:24:41.520213152Z" level=info msg="CreateContainer within sandbox \"4c52555b352e777aa279f0596dccf56c8dc75ccfd97f66b5486fecae7b784fa7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"02dc94b802c0e58c670018403916dbf0478025b84cbf00931d3deaf215782fb7\"" May 27 03:24:41.522145 containerd[1560]: time="2025-05-27T03:24:41.522025362Z" level=info msg="StartContainer for \"02dc94b802c0e58c670018403916dbf0478025b84cbf00931d3deaf215782fb7\"" May 27 03:24:41.523582 containerd[1560]: time="2025-05-27T03:24:41.523462709Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-8f77d7b6c-xwqrr,Uid:9c59cb9e-2eb7-4e20-b889-53cd0bb9e4ed,Namespace:calico-system,Attempt:0,} returns sandbox id \"929a407c98c6f921318078db51fcdb057ed2bf3be9510d94d75e5600fefea35a\"" May 27 03:24:41.526533 containerd[1560]: time="2025-05-27T03:24:41.526462998Z" level=info msg="connecting to shim 
02dc94b802c0e58c670018403916dbf0478025b84cbf00931d3deaf215782fb7" address="unix:///run/containerd/s/a111949b1c7527c0a66f230530a92d9f5e449480029c55bb56dcf784db554085" protocol=ttrpc version=3 May 27 03:24:41.549502 systemd[1]: Started cri-containerd-02dc94b802c0e58c670018403916dbf0478025b84cbf00931d3deaf215782fb7.scope - libcontainer container 02dc94b802c0e58c670018403916dbf0478025b84cbf00931d3deaf215782fb7. May 27 03:24:41.606221 containerd[1560]: time="2025-05-27T03:24:41.606148291Z" level=info msg="StartContainer for \"02dc94b802c0e58c670018403916dbf0478025b84cbf00931d3deaf215782fb7\" returns successfully" May 27 03:24:41.897670 systemd-networkd[1472]: cali5b64d5a32a0: Gained IPv6LL May 27 03:24:42.172237 containerd[1560]: time="2025-05-27T03:24:42.172081490Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:24:42.173405 containerd[1560]: time="2025-05-27T03:24:42.173188326Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.0: active requests=0, bytes read=51178512" May 27 03:24:42.175044 containerd[1560]: time="2025-05-27T03:24:42.175000899Z" level=info msg="ImageCreate event name:\"sha256:094053209304a3d20e6561c18d37ac2dc4c7fbb68c1579d9864c303edebffa50\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:24:42.177653 containerd[1560]: time="2025-05-27T03:24:42.177587271Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:eb5bc5c9e7a71f1d8ea69bbcc8e54b84fb7ec1e32d919c8b148f80b770f20182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:24:42.178145 containerd[1560]: time="2025-05-27T03:24:42.178124609Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" with image id \"sha256:094053209304a3d20e6561c18d37ac2dc4c7fbb68c1579d9864c303edebffa50\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\", repo digest 
\"ghcr.io/flatcar/calico/kube-controllers@sha256:eb5bc5c9e7a71f1d8ea69bbcc8e54b84fb7ec1e32d919c8b148f80b770f20182\", size \"52671183\" in 3.840355095s" May 27 03:24:42.178218 containerd[1560]: time="2025-05-27T03:24:42.178207375Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" returns image reference \"sha256:094053209304a3d20e6561c18d37ac2dc4c7fbb68c1579d9864c303edebffa50\"" May 27 03:24:42.185170 containerd[1560]: time="2025-05-27T03:24:42.185100870Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\"" May 27 03:24:42.215041 containerd[1560]: time="2025-05-27T03:24:42.214957415Z" level=info msg="CreateContainer within sandbox \"bf600c6bf6c300fd986e2592eaaea620bc22e474bb01b4ce18013673794d700b\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 27 03:24:42.223235 containerd[1560]: time="2025-05-27T03:24:42.222692470Z" level=info msg="Container 6987d0fe81036d50ab8ef5c9659e8d4befdda41d53233178124e5de21b1dc1b4: CDI devices from CRI Config.CDIDevices: []" May 27 03:24:42.230368 containerd[1560]: time="2025-05-27T03:24:42.230292752Z" level=info msg="CreateContainer within sandbox \"bf600c6bf6c300fd986e2592eaaea620bc22e474bb01b4ce18013673794d700b\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"6987d0fe81036d50ab8ef5c9659e8d4befdda41d53233178124e5de21b1dc1b4\"" May 27 03:24:42.231670 containerd[1560]: time="2025-05-27T03:24:42.231614913Z" level=info msg="StartContainer for \"6987d0fe81036d50ab8ef5c9659e8d4befdda41d53233178124e5de21b1dc1b4\"" May 27 03:24:42.233819 containerd[1560]: time="2025-05-27T03:24:42.233774545Z" level=info msg="connecting to shim 6987d0fe81036d50ab8ef5c9659e8d4befdda41d53233178124e5de21b1dc1b4" address="unix:///run/containerd/s/5e18b9e20c41e6fd0433a7505ade30adc5d70abc7131ddb8ab89d390fa3357f0" protocol=ttrpc version=3 May 27 03:24:42.323133 systemd[1]: Started 
cri-containerd-6987d0fe81036d50ab8ef5c9659e8d4befdda41d53233178124e5de21b1dc1b4.scope - libcontainer container 6987d0fe81036d50ab8ef5c9659e8d4befdda41d53233178124e5de21b1dc1b4. May 27 03:24:42.341087 kubelet[2917]: I0527 03:24:42.306228 2917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-qn2bv" podStartSLOduration=42.306025883 podStartE2EDuration="42.306025883s" podCreationTimestamp="2025-05-27 03:24:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 03:24:42.304675198 +0000 UTC m=+48.416475836" watchObservedRunningTime="2025-05-27 03:24:42.306025883 +0000 UTC m=+48.417826511" May 27 03:24:42.410718 systemd-networkd[1472]: cali86d10550381: Gained IPv6LL May 27 03:24:42.434005 containerd[1560]: time="2025-05-27T03:24:42.433855612Z" level=info msg="StartContainer for \"6987d0fe81036d50ab8ef5c9659e8d4befdda41d53233178124e5de21b1dc1b4\" returns successfully" May 27 03:24:42.985470 systemd-networkd[1472]: cali7570fc0cf31: Gained IPv6LL May 27 03:24:43.019939 containerd[1560]: time="2025-05-27T03:24:43.019894191Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-2w9m9,Uid:ad1f604e-228a-46e6-8d84-3e756063a5a6,Namespace:kube-system,Attempt:0,}" May 27 03:24:43.177731 systemd-networkd[1472]: cali495df50c3a6: Link UP May 27 03:24:43.178752 systemd-networkd[1472]: cali495df50c3a6: Gained carrier May 27 03:24:43.199627 containerd[1560]: 2025-05-27 03:24:43.072 [INFO][4886] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344--0--0--e--876c439243-k8s-coredns--7c65d6cfc9--2w9m9-eth0 coredns-7c65d6cfc9- kube-system ad1f604e-228a-46e6-8d84-3e756063a5a6 805 0 2025-05-27 03:24:00 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4344-0-0-e-876c439243 coredns-7c65d6cfc9-2w9m9 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali495df50c3a6 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="b4294c0b731ae014ec29673225aec9b00ec6aeac4da42dd5f9403c633217df3e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-2w9m9" WorkloadEndpoint="ci--4344--0--0--e--876c439243-k8s-coredns--7c65d6cfc9--2w9m9-" May 27 03:24:43.199627 containerd[1560]: 2025-05-27 03:24:43.072 [INFO][4886] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b4294c0b731ae014ec29673225aec9b00ec6aeac4da42dd5f9403c633217df3e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-2w9m9" WorkloadEndpoint="ci--4344--0--0--e--876c439243-k8s-coredns--7c65d6cfc9--2w9m9-eth0" May 27 03:24:43.199627 containerd[1560]: 2025-05-27 03:24:43.112 [INFO][4899] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b4294c0b731ae014ec29673225aec9b00ec6aeac4da42dd5f9403c633217df3e" HandleID="k8s-pod-network.b4294c0b731ae014ec29673225aec9b00ec6aeac4da42dd5f9403c633217df3e" Workload="ci--4344--0--0--e--876c439243-k8s-coredns--7c65d6cfc9--2w9m9-eth0" May 27 03:24:43.199627 containerd[1560]: 2025-05-27 03:24:43.112 [INFO][4899] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b4294c0b731ae014ec29673225aec9b00ec6aeac4da42dd5f9403c633217df3e" HandleID="k8s-pod-network.b4294c0b731ae014ec29673225aec9b00ec6aeac4da42dd5f9403c633217df3e" Workload="ci--4344--0--0--e--876c439243-k8s-coredns--7c65d6cfc9--2w9m9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d9180), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4344-0-0-e-876c439243", "pod":"coredns-7c65d6cfc9-2w9m9", "timestamp":"2025-05-27 03:24:43.112005201 +0000 UTC"}, Hostname:"ci-4344-0-0-e-876c439243", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 03:24:43.199627 containerd[1560]: 2025-05-27 03:24:43.112 [INFO][4899] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 03:24:43.199627 containerd[1560]: 2025-05-27 03:24:43.112 [INFO][4899] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 27 03:24:43.199627 containerd[1560]: 2025-05-27 03:24:43.112 [INFO][4899] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344-0-0-e-876c439243' May 27 03:24:43.199627 containerd[1560]: 2025-05-27 03:24:43.121 [INFO][4899] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b4294c0b731ae014ec29673225aec9b00ec6aeac4da42dd5f9403c633217df3e" host="ci-4344-0-0-e-876c439243" May 27 03:24:43.199627 containerd[1560]: 2025-05-27 03:24:43.129 [INFO][4899] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344-0-0-e-876c439243" May 27 03:24:43.199627 containerd[1560]: 2025-05-27 03:24:43.134 [INFO][4899] ipam/ipam.go 511: Trying affinity for 192.168.93.64/26 host="ci-4344-0-0-e-876c439243" May 27 03:24:43.199627 containerd[1560]: 2025-05-27 03:24:43.136 [INFO][4899] ipam/ipam.go 158: Attempting to load block cidr=192.168.93.64/26 host="ci-4344-0-0-e-876c439243" May 27 03:24:43.199627 containerd[1560]: 2025-05-27 03:24:43.140 [INFO][4899] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.93.64/26 host="ci-4344-0-0-e-876c439243" May 27 03:24:43.199627 containerd[1560]: 2025-05-27 03:24:43.140 [INFO][4899] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.93.64/26 handle="k8s-pod-network.b4294c0b731ae014ec29673225aec9b00ec6aeac4da42dd5f9403c633217df3e" host="ci-4344-0-0-e-876c439243" May 27 03:24:43.199627 containerd[1560]: 2025-05-27 03:24:43.142 [INFO][4899] ipam/ipam.go 1764: Creating new handle: 
k8s-pod-network.b4294c0b731ae014ec29673225aec9b00ec6aeac4da42dd5f9403c633217df3e May 27 03:24:43.199627 containerd[1560]: 2025-05-27 03:24:43.152 [INFO][4899] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.93.64/26 handle="k8s-pod-network.b4294c0b731ae014ec29673225aec9b00ec6aeac4da42dd5f9403c633217df3e" host="ci-4344-0-0-e-876c439243" May 27 03:24:43.199627 containerd[1560]: 2025-05-27 03:24:43.161 [INFO][4899] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.93.72/26] block=192.168.93.64/26 handle="k8s-pod-network.b4294c0b731ae014ec29673225aec9b00ec6aeac4da42dd5f9403c633217df3e" host="ci-4344-0-0-e-876c439243" May 27 03:24:43.199627 containerd[1560]: 2025-05-27 03:24:43.161 [INFO][4899] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.93.72/26] handle="k8s-pod-network.b4294c0b731ae014ec29673225aec9b00ec6aeac4da42dd5f9403c633217df3e" host="ci-4344-0-0-e-876c439243" May 27 03:24:43.199627 containerd[1560]: 2025-05-27 03:24:43.161 [INFO][4899] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 27 03:24:43.199627 containerd[1560]: 2025-05-27 03:24:43.162 [INFO][4899] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.93.72/26] IPv6=[] ContainerID="b4294c0b731ae014ec29673225aec9b00ec6aeac4da42dd5f9403c633217df3e" HandleID="k8s-pod-network.b4294c0b731ae014ec29673225aec9b00ec6aeac4da42dd5f9403c633217df3e" Workload="ci--4344--0--0--e--876c439243-k8s-coredns--7c65d6cfc9--2w9m9-eth0" May 27 03:24:43.205375 containerd[1560]: 2025-05-27 03:24:43.168 [INFO][4886] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b4294c0b731ae014ec29673225aec9b00ec6aeac4da42dd5f9403c633217df3e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-2w9m9" WorkloadEndpoint="ci--4344--0--0--e--876c439243-k8s-coredns--7c65d6cfc9--2w9m9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344--0--0--e--876c439243-k8s-coredns--7c65d6cfc9--2w9m9-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"ad1f604e-228a-46e6-8d84-3e756063a5a6", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 24, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344-0-0-e-876c439243", ContainerID:"", Pod:"coredns-7c65d6cfc9-2w9m9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.93.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, 
InterfaceName:"cali495df50c3a6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:24:43.205375 containerd[1560]: 2025-05-27 03:24:43.169 [INFO][4886] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.93.72/32] ContainerID="b4294c0b731ae014ec29673225aec9b00ec6aeac4da42dd5f9403c633217df3e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-2w9m9" WorkloadEndpoint="ci--4344--0--0--e--876c439243-k8s-coredns--7c65d6cfc9--2w9m9-eth0" May 27 03:24:43.205375 containerd[1560]: 2025-05-27 03:24:43.169 [INFO][4886] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali495df50c3a6 ContainerID="b4294c0b731ae014ec29673225aec9b00ec6aeac4da42dd5f9403c633217df3e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-2w9m9" WorkloadEndpoint="ci--4344--0--0--e--876c439243-k8s-coredns--7c65d6cfc9--2w9m9-eth0" May 27 03:24:43.205375 containerd[1560]: 2025-05-27 03:24:43.177 [INFO][4886] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b4294c0b731ae014ec29673225aec9b00ec6aeac4da42dd5f9403c633217df3e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-2w9m9" WorkloadEndpoint="ci--4344--0--0--e--876c439243-k8s-coredns--7c65d6cfc9--2w9m9-eth0" May 27 03:24:43.205375 containerd[1560]: 2025-05-27 03:24:43.178 [INFO][4886] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b4294c0b731ae014ec29673225aec9b00ec6aeac4da42dd5f9403c633217df3e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-2w9m9" 
WorkloadEndpoint="ci--4344--0--0--e--876c439243-k8s-coredns--7c65d6cfc9--2w9m9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344--0--0--e--876c439243-k8s-coredns--7c65d6cfc9--2w9m9-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"ad1f604e-228a-46e6-8d84-3e756063a5a6", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 24, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344-0-0-e-876c439243", ContainerID:"b4294c0b731ae014ec29673225aec9b00ec6aeac4da42dd5f9403c633217df3e", Pod:"coredns-7c65d6cfc9-2w9m9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.93.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali495df50c3a6", MAC:"c2:bf:c0:c4:50:7c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:24:43.205375 containerd[1560]: 
2025-05-27 03:24:43.192 [INFO][4886] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b4294c0b731ae014ec29673225aec9b00ec6aeac4da42dd5f9403c633217df3e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-2w9m9" WorkloadEndpoint="ci--4344--0--0--e--876c439243-k8s-coredns--7c65d6cfc9--2w9m9-eth0" May 27 03:24:43.263333 containerd[1560]: time="2025-05-27T03:24:43.262058173Z" level=info msg="connecting to shim b4294c0b731ae014ec29673225aec9b00ec6aeac4da42dd5f9403c633217df3e" address="unix:///run/containerd/s/e8a0617db084be075252e05fd023692390117d53b511366a04d45e036ad6bb90" namespace=k8s.io protocol=ttrpc version=3 May 27 03:24:43.300555 systemd[1]: Started cri-containerd-b4294c0b731ae014ec29673225aec9b00ec6aeac4da42dd5f9403c633217df3e.scope - libcontainer container b4294c0b731ae014ec29673225aec9b00ec6aeac4da42dd5f9403c633217df3e. May 27 03:24:43.331662 kubelet[2917]: I0527 03:24:43.331598 2917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-597cd4d468-l9brq" podStartSLOduration=24.483064897 podStartE2EDuration="28.331575636s" podCreationTimestamp="2025-05-27 03:24:15 +0000 UTC" firstStartedPulling="2025-05-27 03:24:38.336387521 +0000 UTC m=+44.448188149" lastFinishedPulling="2025-05-27 03:24:42.18489825 +0000 UTC m=+48.296698888" observedRunningTime="2025-05-27 03:24:43.331344823 +0000 UTC m=+49.443145462" watchObservedRunningTime="2025-05-27 03:24:43.331575636 +0000 UTC m=+49.443376274" May 27 03:24:43.379911 containerd[1560]: time="2025-05-27T03:24:43.379862069Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6987d0fe81036d50ab8ef5c9659e8d4befdda41d53233178124e5de21b1dc1b4\" id:\"a73ec7aa467d4b847ab9fda0e370f39c6eaf2f6c78219b862ffb3f5a2d3288b2\" pid:4968 exited_at:{seconds:1748316283 nanos:379065725}" May 27 03:24:43.387981 containerd[1560]: time="2025-05-27T03:24:43.387832255Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7c65d6cfc9-2w9m9,Uid:ad1f604e-228a-46e6-8d84-3e756063a5a6,Namespace:kube-system,Attempt:0,} returns sandbox id \"b4294c0b731ae014ec29673225aec9b00ec6aeac4da42dd5f9403c633217df3e\"" May 27 03:24:43.392124 containerd[1560]: time="2025-05-27T03:24:43.392096446Z" level=info msg="CreateContainer within sandbox \"b4294c0b731ae014ec29673225aec9b00ec6aeac4da42dd5f9403c633217df3e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 27 03:24:43.407612 containerd[1560]: time="2025-05-27T03:24:43.406575778Z" level=info msg="Container 2624af6d4887ca34dc4e6370cae3b15144211f8188e5b666285a4772c56f6c26: CDI devices from CRI Config.CDIDevices: []" May 27 03:24:43.415565 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount796905625.mount: Deactivated successfully. May 27 03:24:43.420364 containerd[1560]: time="2025-05-27T03:24:43.419813960Z" level=info msg="CreateContainer within sandbox \"b4294c0b731ae014ec29673225aec9b00ec6aeac4da42dd5f9403c633217df3e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2624af6d4887ca34dc4e6370cae3b15144211f8188e5b666285a4772c56f6c26\"" May 27 03:24:43.431137 containerd[1560]: time="2025-05-27T03:24:43.431061016Z" level=info msg="StartContainer for \"2624af6d4887ca34dc4e6370cae3b15144211f8188e5b666285a4772c56f6c26\"" May 27 03:24:43.432841 containerd[1560]: time="2025-05-27T03:24:43.432809377Z" level=info msg="connecting to shim 2624af6d4887ca34dc4e6370cae3b15144211f8188e5b666285a4772c56f6c26" address="unix:///run/containerd/s/e8a0617db084be075252e05fd023692390117d53b511366a04d45e036ad6bb90" protocol=ttrpc version=3 May 27 03:24:43.458501 systemd[1]: Started cri-containerd-2624af6d4887ca34dc4e6370cae3b15144211f8188e5b666285a4772c56f6c26.scope - libcontainer container 2624af6d4887ca34dc4e6370cae3b15144211f8188e5b666285a4772c56f6c26. 
May 27 03:24:43.492992 containerd[1560]: time="2025-05-27T03:24:43.492897585Z" level=info msg="StartContainer for \"2624af6d4887ca34dc4e6370cae3b15144211f8188e5b666285a4772c56f6c26\" returns successfully" May 27 03:24:44.330204 systemd-networkd[1472]: cali495df50c3a6: Gained IPv6LL May 27 03:24:44.381865 kubelet[2917]: I0527 03:24:44.381790 2917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-2w9m9" podStartSLOduration=44.374164574 podStartE2EDuration="44.374164574s" podCreationTimestamp="2025-05-27 03:24:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 03:24:44.372881726 +0000 UTC m=+50.484682374" watchObservedRunningTime="2025-05-27 03:24:44.374164574 +0000 UTC m=+50.485965222" May 27 03:24:45.553027 containerd[1560]: time="2025-05-27T03:24:45.552975555Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:24:45.554102 containerd[1560]: time="2025-05-27T03:24:45.554074718Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.0: active requests=0, bytes read=47252431" May 27 03:24:45.556340 containerd[1560]: time="2025-05-27T03:24:45.556291167Z" level=info msg="ImageCreate event name:\"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:24:45.560690 containerd[1560]: time="2025-05-27T03:24:45.560664385Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:24:45.561228 containerd[1560]: time="2025-05-27T03:24:45.561209297Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" with image id 
\"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\", size \"48745150\" in 3.376071618s" May 27 03:24:45.561296 containerd[1560]: time="2025-05-27T03:24:45.561286572Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" returns image reference \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\"" May 27 03:24:45.562565 containerd[1560]: time="2025-05-27T03:24:45.562536468Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\"" May 27 03:24:45.564662 containerd[1560]: time="2025-05-27T03:24:45.564637231Z" level=info msg="CreateContainer within sandbox \"e2c47979994994b4d9188165ba51d83b2673c5503f35996453e95a63a45b02ee\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 27 03:24:45.576278 containerd[1560]: time="2025-05-27T03:24:45.576231078Z" level=info msg="Container 1e86b04a526f19fd99edb2ee8444b9a69ad88210ceb7d5eb58bce4561ce303d8: CDI devices from CRI Config.CDIDevices: []" May 27 03:24:45.595323 containerd[1560]: time="2025-05-27T03:24:45.595271762Z" level=info msg="CreateContainer within sandbox \"e2c47979994994b4d9188165ba51d83b2673c5503f35996453e95a63a45b02ee\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"1e86b04a526f19fd99edb2ee8444b9a69ad88210ceb7d5eb58bce4561ce303d8\"" May 27 03:24:45.596150 containerd[1560]: time="2025-05-27T03:24:45.596062196Z" level=info msg="StartContainer for \"1e86b04a526f19fd99edb2ee8444b9a69ad88210ceb7d5eb58bce4561ce303d8\"" May 27 03:24:45.597341 containerd[1560]: time="2025-05-27T03:24:45.597287895Z" level=info msg="connecting to shim 1e86b04a526f19fd99edb2ee8444b9a69ad88210ceb7d5eb58bce4561ce303d8" address="unix:///run/containerd/s/61bc140460ee3dfe8fe09ae30e59f1396ecd014d7c4e0ce4c9c0dd3ea09bcbf2" protocol=ttrpc 
version=3 May 27 03:24:45.620458 systemd[1]: Started cri-containerd-1e86b04a526f19fd99edb2ee8444b9a69ad88210ceb7d5eb58bce4561ce303d8.scope - libcontainer container 1e86b04a526f19fd99edb2ee8444b9a69ad88210ceb7d5eb58bce4561ce303d8. May 27 03:24:45.663987 containerd[1560]: time="2025-05-27T03:24:45.663932177Z" level=info msg="StartContainer for \"1e86b04a526f19fd99edb2ee8444b9a69ad88210ceb7d5eb58bce4561ce303d8\" returns successfully" May 27 03:24:46.031826 containerd[1560]: time="2025-05-27T03:24:46.031769005Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:24:46.033683 containerd[1560]: time="2025-05-27T03:24:46.033651608Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.0: active requests=0, bytes read=77" May 27 03:24:46.035033 containerd[1560]: time="2025-05-27T03:24:46.035007653Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" with image id \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\", size \"48745150\" in 472.341061ms" May 27 03:24:46.035067 containerd[1560]: time="2025-05-27T03:24:46.035057026Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" returns image reference \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\"" May 27 03:24:46.036076 containerd[1560]: time="2025-05-27T03:24:46.036058575Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.0\"" May 27 03:24:46.038510 containerd[1560]: time="2025-05-27T03:24:46.038456115Z" level=info msg="CreateContainer within sandbox \"08e05df50c99994a57a3f1d3ca68d0faeac4047e6e21792ba52dd1f7c337fe63\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 27 03:24:46.047485 
containerd[1560]: time="2025-05-27T03:24:46.047449414Z" level=info msg="Container bf938f3bdfc56a1cf096b1e2311df2756af1c8a701a974c71f2c6c977f183afd: CDI devices from CRI Config.CDIDevices: []" May 27 03:24:46.051376 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1572334432.mount: Deactivated successfully. May 27 03:24:46.059318 containerd[1560]: time="2025-05-27T03:24:46.059268265Z" level=info msg="CreateContainer within sandbox \"08e05df50c99994a57a3f1d3ca68d0faeac4047e6e21792ba52dd1f7c337fe63\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"bf938f3bdfc56a1cf096b1e2311df2756af1c8a701a974c71f2c6c977f183afd\"" May 27 03:24:46.060324 containerd[1560]: time="2025-05-27T03:24:46.060181749Z" level=info msg="StartContainer for \"bf938f3bdfc56a1cf096b1e2311df2756af1c8a701a974c71f2c6c977f183afd\"" May 27 03:24:46.061691 containerd[1560]: time="2025-05-27T03:24:46.061674430Z" level=info msg="connecting to shim bf938f3bdfc56a1cf096b1e2311df2756af1c8a701a974c71f2c6c977f183afd" address="unix:///run/containerd/s/27a5ec83cfb57f1ab140de5804ebd1540a9b9bead115dc2d7ef20128a0a0bd9e" protocol=ttrpc version=3 May 27 03:24:46.086466 systemd[1]: Started cri-containerd-bf938f3bdfc56a1cf096b1e2311df2756af1c8a701a974c71f2c6c977f183afd.scope - libcontainer container bf938f3bdfc56a1cf096b1e2311df2756af1c8a701a974c71f2c6c977f183afd. 
May 27 03:24:46.136424 containerd[1560]: time="2025-05-27T03:24:46.136350459Z" level=info msg="StartContainer for \"bf938f3bdfc56a1cf096b1e2311df2756af1c8a701a974c71f2c6c977f183afd\" returns successfully" May 27 03:24:46.343848 kubelet[2917]: I0527 03:24:46.343671 2917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5d4cbf9b6-76kdv" podStartSLOduration=28.058885399 podStartE2EDuration="34.343650943s" podCreationTimestamp="2025-05-27 03:24:12 +0000 UTC" firstStartedPulling="2025-05-27 03:24:39.27735346 +0000 UTC m=+45.389154098" lastFinishedPulling="2025-05-27 03:24:45.562119005 +0000 UTC m=+51.673919642" observedRunningTime="2025-05-27 03:24:46.34276533 +0000 UTC m=+52.454565958" watchObservedRunningTime="2025-05-27 03:24:46.343650943 +0000 UTC m=+52.455451581" May 27 03:24:47.346982 kubelet[2917]: I0527 03:24:47.346341 2917 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 27 03:24:47.347600 kubelet[2917]: I0527 03:24:47.347576 2917 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 27 03:24:48.085005 containerd[1560]: time="2025-05-27T03:24:48.084933809Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:24:48.086010 containerd[1560]: time="2025-05-27T03:24:48.085981255Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.0: active requests=0, bytes read=8758390" May 27 03:24:48.087126 containerd[1560]: time="2025-05-27T03:24:48.087088814Z" level=info msg="ImageCreate event name:\"sha256:d5b08093b7928c0ac1122e59edf69b2e58c6d10ecc8b9e5cffeb809a956dc48e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:24:48.088917 containerd[1560]: time="2025-05-27T03:24:48.088879635Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:27883a4104876fe239311dd93ce6efd0c4a87de7163d57a4c8d96bd65a287ffd\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:24:48.089603 containerd[1560]: time="2025-05-27T03:24:48.089273024Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.0\" with image id \"sha256:d5b08093b7928c0ac1122e59edf69b2e58c6d10ecc8b9e5cffeb809a956dc48e\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:27883a4104876fe239311dd93ce6efd0c4a87de7163d57a4c8d96bd65a287ffd\", size \"10251093\" in 2.053191074s" May 27 03:24:48.089603 containerd[1560]: time="2025-05-27T03:24:48.089316556Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.0\" returns image reference \"sha256:d5b08093b7928c0ac1122e59edf69b2e58c6d10ecc8b9e5cffeb809a956dc48e\"" May 27 03:24:48.090196 containerd[1560]: time="2025-05-27T03:24:48.090177861Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 27 03:24:48.091673 containerd[1560]: time="2025-05-27T03:24:48.091641108Z" level=info msg="CreateContainer within sandbox \"c93ea7c16990791bf40083bdf097574eef712b5c937b00194110218c7ad799a9\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 27 03:24:48.203562 containerd[1560]: time="2025-05-27T03:24:48.203514662Z" level=info msg="Container fc4a6f44967f460ef8f6839cad9799886f35f93a22061e9b3bdb454c2460cff5: CDI devices from CRI Config.CDIDevices: []" May 27 03:24:48.221276 containerd[1560]: time="2025-05-27T03:24:48.221245091Z" level=info msg="CreateContainer within sandbox \"c93ea7c16990791bf40083bdf097574eef712b5c937b00194110218c7ad799a9\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"fc4a6f44967f460ef8f6839cad9799886f35f93a22061e9b3bdb454c2460cff5\"" May 27 03:24:48.222072 containerd[1560]: time="2025-05-27T03:24:48.222059740Z" level=info msg="StartContainer for \"fc4a6f44967f460ef8f6839cad9799886f35f93a22061e9b3bdb454c2460cff5\"" May 27 03:24:48.223426 containerd[1560]: time="2025-05-27T03:24:48.223397761Z" level=info msg="connecting to shim 
fc4a6f44967f460ef8f6839cad9799886f35f93a22061e9b3bdb454c2460cff5" address="unix:///run/containerd/s/de714204548721429c6a6604b32b42ccba45ffce5e156987f9a49c0c3c1a42e9" protocol=ttrpc version=3 May 27 03:24:48.249454 systemd[1]: Started cri-containerd-fc4a6f44967f460ef8f6839cad9799886f35f93a22061e9b3bdb454c2460cff5.scope - libcontainer container fc4a6f44967f460ef8f6839cad9799886f35f93a22061e9b3bdb454c2460cff5. May 27 03:24:48.314860 containerd[1560]: time="2025-05-27T03:24:48.314812476Z" level=info msg="StartContainer for \"fc4a6f44967f460ef8f6839cad9799886f35f93a22061e9b3bdb454c2460cff5\" returns successfully" May 27 03:24:48.425331 containerd[1560]: time="2025-05-27T03:24:48.425235819Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 03:24:48.426834 containerd[1560]: time="2025-05-27T03:24:48.426787280Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 27 03:24:48.438265 containerd[1560]: time="2025-05-27T03:24:48.438190633Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 27 03:24:48.449784 kubelet[2917]: E0527 03:24:48.449613 2917 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to 
authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 27 03:24:48.455825 kubelet[2917]: E0527 03:24:48.455766 2917 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 27 03:24:48.456262 kubelet[2917]: E0527 03:24:48.456179 2917 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},Volu
meMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f7drb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-8f77d7b6c-xwqrr_calico-system(9c59cb9e-2eb7-4e20-b889-53cd0bb9e4ed): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 03:24:48.457217 
containerd[1560]: time="2025-05-27T03:24:48.456917762Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\"" May 27 03:24:48.466253 kubelet[2917]: E0527 03:24:48.466188 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-8f77d7b6c-xwqrr" podUID="9c59cb9e-2eb7-4e20-b889-53cd0bb9e4ed" May 27 03:24:49.358094 kubelet[2917]: E0527 03:24:49.358036 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-xwqrr" podUID="9c59cb9e-2eb7-4e20-b889-53cd0bb9e4ed" May 27 03:24:49.454033 kubelet[2917]: I0527 03:24:49.453950 2917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5d4cbf9b6-6l9vf" podStartSLOduration=31.800275143 podStartE2EDuration="37.453920143s" podCreationTimestamp="2025-05-27 03:24:12 +0000 UTC" firstStartedPulling="2025-05-27 03:24:40.382087704 +0000 UTC m=+46.493888342" lastFinishedPulling="2025-05-27 03:24:46.035732714 +0000 UTC m=+52.147533342" observedRunningTime="2025-05-27 03:24:46.365434766 +0000 UTC m=+52.477235404" watchObservedRunningTime="2025-05-27 03:24:49.453920143 +0000 UTC m=+55.565720822" May 27 03:24:50.644686 containerd[1560]: time="2025-05-27T03:24:50.644639364Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" 
May 27 03:24:50.665587 containerd[1560]: time="2025-05-27T03:24:50.665550494Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0: active requests=0, bytes read=14705639" May 27 03:24:50.666798 containerd[1560]: time="2025-05-27T03:24:50.666758953Z" level=info msg="ImageCreate event name:\"sha256:45c8692ffc029387ee93ba83da8ad26da9749cf2ba6ed03981f8f9933ed5a5b0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:24:50.670074 containerd[1560]: time="2025-05-27T03:24:50.669029634Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" with image id \"sha256:45c8692ffc029387ee93ba83da8ad26da9749cf2ba6ed03981f8f9933ed5a5b0\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:dca5c16181edde2e860463615523ce457cd9dcfca85b7cfdcd6f3ea7de6f2ac8\", size \"16198294\" in 2.212065987s" May 27 03:24:50.670074 containerd[1560]: time="2025-05-27T03:24:50.669060713Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" returns image reference \"sha256:45c8692ffc029387ee93ba83da8ad26da9749cf2ba6ed03981f8f9933ed5a5b0\"" May 27 03:24:50.670830 containerd[1560]: time="2025-05-27T03:24:50.670505425Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 27 03:24:50.675402 containerd[1560]: time="2025-05-27T03:24:50.675357513Z" level=info msg="CreateContainer within sandbox \"c93ea7c16990791bf40083bdf097574eef712b5c937b00194110218c7ad799a9\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" May 27 03:24:50.696015 containerd[1560]: time="2025-05-27T03:24:50.695680287Z" level=info msg="Container 44cb114a0f92ddc05f55a353cb73b01742faf6ad3f93ce20c14cc41083700936: CDI devices from CRI Config.CDIDevices: []" May 27 03:24:50.697383 containerd[1560]: time="2025-05-27T03:24:50.697344962Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:dca5c16181edde2e860463615523ce457cd9dcfca85b7cfdcd6f3ea7de6f2ac8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:24:50.704103 containerd[1560]: time="2025-05-27T03:24:50.704060007Z" level=info msg="CreateContainer within sandbox \"c93ea7c16990791bf40083bdf097574eef712b5c937b00194110218c7ad799a9\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"44cb114a0f92ddc05f55a353cb73b01742faf6ad3f93ce20c14cc41083700936\"" May 27 03:24:50.705083 containerd[1560]: time="2025-05-27T03:24:50.704889313Z" level=info msg="StartContainer for \"44cb114a0f92ddc05f55a353cb73b01742faf6ad3f93ce20c14cc41083700936\"" May 27 03:24:50.713179 containerd[1560]: time="2025-05-27T03:24:50.713115304Z" level=info msg="connecting to shim 44cb114a0f92ddc05f55a353cb73b01742faf6ad3f93ce20c14cc41083700936" address="unix:///run/containerd/s/de714204548721429c6a6604b32b42ccba45ffce5e156987f9a49c0c3c1a42e9" protocol=ttrpc version=3 May 27 03:24:50.736467 systemd[1]: Started cri-containerd-44cb114a0f92ddc05f55a353cb73b01742faf6ad3f93ce20c14cc41083700936.scope - libcontainer container 44cb114a0f92ddc05f55a353cb73b01742faf6ad3f93ce20c14cc41083700936. 
May 27 03:24:50.770357 containerd[1560]: time="2025-05-27T03:24:50.770299349Z" level=info msg="StartContainer for \"44cb114a0f92ddc05f55a353cb73b01742faf6ad3f93ce20c14cc41083700936\" returns successfully" May 27 03:24:50.973461 containerd[1560]: time="2025-05-27T03:24:50.973301025Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 03:24:51.012615 containerd[1560]: time="2025-05-27T03:24:50.975131560Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 27 03:24:51.012615 containerd[1560]: time="2025-05-27T03:24:50.975206231Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 27 03:24:51.013880 kubelet[2917]: E0527 03:24:51.012880 2917 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 27 03:24:51.013880 kubelet[2917]: E0527 03:24:51.012931 2917 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 27 03:24:51.013880 kubelet[2917]: E0527 03:24:51.013034 2917 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:bee51492bca3428982f094867f4c4710,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6jzjr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
whisker-555bcbc6ff-596vx_calico-system(20923581-35ae-477b-83e9-35d75acd3c66): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 03:24:51.017365 containerd[1560]: time="2025-05-27T03:24:51.016113515Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 27 03:24:51.295031 kubelet[2917]: I0527 03:24:51.294804 2917 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 May 27 03:24:51.295031 kubelet[2917]: I0527 03:24:51.294877 2917 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock May 27 03:24:51.324465 containerd[1560]: time="2025-05-27T03:24:51.324387316Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 03:24:51.326288 containerd[1560]: time="2025-05-27T03:24:51.326072840Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 27 
03:24:51.326288 containerd[1560]: time="2025-05-27T03:24:51.326139896Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 27 03:24:51.326515 kubelet[2917]: E0527 03:24:51.326463 2917 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 27 03:24:51.326575 kubelet[2917]: E0527 03:24:51.326519 2917 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 27 03:24:51.326682 kubelet[2917]: E0527 03:24:51.326636 2917 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6jzjr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-555bcbc6ff-596vx_calico-system(20923581-35ae-477b-83e9-35d75acd3c66): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": 
failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 03:24:51.328581 kubelet[2917]: E0527 03:24:51.328091 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-555bcbc6ff-596vx" podUID="20923581-35ae-477b-83e9-35d75acd3c66" May 27 03:24:51.957859 containerd[1560]: time="2025-05-27T03:24:51.957777774Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ca116aa036eb736891f311278a152a7e5a51b4c41f55c788135f885583d90f6b\" id:\"746a69e07a72ce56aa4954e37dd49671d7c74855ea8c23cc452e2b8092fe68c0\" pid:5200 exited_at:{seconds:1748316291 nanos:940301189}" May 27 03:24:51.975922 kubelet[2917]: I0527 03:24:51.975827 2917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-2khsc" podStartSLOduration=26.746620183 podStartE2EDuration="36.975801967s" 
podCreationTimestamp="2025-05-27 03:24:15 +0000 UTC" firstStartedPulling="2025-05-27 03:24:40.44047092 +0000 UTC m=+46.552271558" lastFinishedPulling="2025-05-27 03:24:50.669652704 +0000 UTC m=+56.781453342" observedRunningTime="2025-05-27 03:24:51.386171019 +0000 UTC m=+57.497971677" watchObservedRunningTime="2025-05-27 03:24:51.975801967 +0000 UTC m=+58.087602625" May 27 03:24:53.476632 kubelet[2917]: I0527 03:24:53.476042 2917 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 27 03:24:55.411282 containerd[1560]: time="2025-05-27T03:24:55.411195602Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6987d0fe81036d50ab8ef5c9659e8d4befdda41d53233178124e5de21b1dc1b4\" id:\"e18d1cf061f4141befae1e4ff85353fe297503f400c48be6afa8b1d1f9662241\" pid:5232 exited_at:{seconds:1748316295 nanos:410851415}" May 27 03:24:56.880292 kubelet[2917]: I0527 03:24:56.879761 2917 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 27 03:24:57.286214 containerd[1560]: time="2025-05-27T03:24:57.286161999Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6987d0fe81036d50ab8ef5c9659e8d4befdda41d53233178124e5de21b1dc1b4\" id:\"3578bcf07a3152fd611cbcac9ebe9fdf77d717f194777ea9e60b222c17b627b3\" pid:5260 exited_at:{seconds:1748316297 nanos:285825548}" May 27 03:25:02.023029 containerd[1560]: time="2025-05-27T03:25:02.022980587Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 27 03:25:02.320817 containerd[1560]: time="2025-05-27T03:25:02.320139863Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 03:25:02.322886 containerd[1560]: time="2025-05-27T03:25:02.322717152Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="rpc error: code = 
Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 27 03:25:02.322886 containerd[1560]: time="2025-05-27T03:25:02.322838830Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 27 03:25:02.323294 kubelet[2917]: E0527 03:25:02.323168 2917 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 27 03:25:02.323294 kubelet[2917]: E0527 03:25:02.323362 2917 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 27 03:25:02.325504 kubelet[2917]: E0527 03:25:02.323692 2917 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f7drb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-8f77d7b6c-xwqrr_calico-system(9c59cb9e-2eb7-4e20-b889-53cd0bb9e4ed): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 03:25:02.325504 kubelet[2917]: E0527 03:25:02.325401 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-8f77d7b6c-xwqrr" podUID="9c59cb9e-2eb7-4e20-b889-53cd0bb9e4ed" May 27 03:25:05.023202 kubelet[2917]: E0527 
03:25:05.023011 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-555bcbc6ff-596vx" podUID="20923581-35ae-477b-83e9-35d75acd3c66" May 27 03:25:14.023218 kubelet[2917]: E0527 03:25:14.023063 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-xwqrr" podUID="9c59cb9e-2eb7-4e20-b889-53cd0bb9e4ed" May 27 03:25:17.021423 containerd[1560]: time="2025-05-27T03:25:17.021378979Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 27 03:25:17.357165 containerd[1560]: time="2025-05-27T03:25:17.356857404Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 03:25:17.358336 containerd[1560]: time="2025-05-27T03:25:17.358293230Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 27 03:25:17.358995 containerd[1560]: time="2025-05-27T03:25:17.358872598Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 27 03:25:17.359207 kubelet[2917]: E0527 03:25:17.359173 2917 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 27 03:25:17.359589 kubelet[2917]: E0527 03:25:17.359568 2917 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 27 03:25:17.359758 kubelet[2917]: E0527 03:25:17.359734 2917 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:bee51492bca3428982f094867f4c4710,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6jzjr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-555bcbc6ff-596vx_calico-system(20923581-35ae-477b-83e9-35d75acd3c66): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 03:25:17.363041 containerd[1560]: 
time="2025-05-27T03:25:17.363003055Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 27 03:25:17.719753 containerd[1560]: time="2025-05-27T03:25:17.719659073Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 03:25:17.721071 containerd[1560]: time="2025-05-27T03:25:17.721028455Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 27 03:25:17.721337 containerd[1560]: time="2025-05-27T03:25:17.721130297Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 27 03:25:17.721572 kubelet[2917]: E0527 03:25:17.721363 2917 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 27 03:25:17.721572 kubelet[2917]: E0527 03:25:17.721424 2917 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 27 03:25:17.722370 kubelet[2917]: E0527 03:25:17.721621 2917 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6jzjr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMou
nt:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-555bcbc6ff-596vx_calico-system(20923581-35ae-477b-83e9-35d75acd3c66): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 03:25:17.723049 kubelet[2917]: E0527 03:25:17.722977 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-555bcbc6ff-596vx" podUID="20923581-35ae-477b-83e9-35d75acd3c66" May 27 03:25:21.893677 containerd[1560]: 
time="2025-05-27T03:25:21.893166314Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ca116aa036eb736891f311278a152a7e5a51b4c41f55c788135f885583d90f6b\" id:\"b494b62a0402008b4b7873bb6482ba6c8ec5bfb1a7011cab945477f3b2031828\" pid:5301 exited_at:{seconds:1748316321 nanos:892719866}" May 27 03:25:26.023683 containerd[1560]: time="2025-05-27T03:25:26.022945252Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 27 03:25:26.320813 containerd[1560]: time="2025-05-27T03:25:26.320645654Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 03:25:26.322113 containerd[1560]: time="2025-05-27T03:25:26.322060452Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 27 03:25:26.322256 containerd[1560]: time="2025-05-27T03:25:26.322116457Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 27 03:25:26.322531 kubelet[2917]: E0527 03:25:26.322450 2917 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 27 03:25:26.322908 kubelet[2917]: E0527 03:25:26.322539 2917 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 27 03:25:26.322908 kubelet[2917]: E0527 03:25:26.322723 2917 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:
,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f7drb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-8f77d7b6c-xwqrr_calico-system(9c59cb9e-2eb7-4e20-b889-53cd0bb9e4ed): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 03:25:26.324360 kubelet[2917]: E0527 03:25:26.324326 2917 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-8f77d7b6c-xwqrr" podUID="9c59cb9e-2eb7-4e20-b889-53cd0bb9e4ed" May 27 03:25:27.307552 containerd[1560]: time="2025-05-27T03:25:27.307485022Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6987d0fe81036d50ab8ef5c9659e8d4befdda41d53233178124e5de21b1dc1b4\" id:\"07019c1cff44e2fcdcb33d169a73b1b1d7c220a8a68bc2aa1d9eadc0a2d3b3a2\" pid:5325 exited_at:{seconds:1748316327 nanos:307120907}" May 27 03:25:28.880424 systemd[1]: Started sshd@8-157.180.65.55:22-65.49.1.46:12499.service - OpenSSH per-connection server daemon (65.49.1.46:12499). May 27 03:25:28.969858 sshd[5335]: banner exchange: Connection from 65.49.1.46 port 12499: invalid format May 27 03:25:28.971341 systemd[1]: sshd@8-157.180.65.55:22-65.49.1.46:12499.service: Deactivated successfully. 
May 27 03:25:33.022411 kubelet[2917]: E0527 03:25:33.022351 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-555bcbc6ff-596vx" podUID="20923581-35ae-477b-83e9-35d75acd3c66" May 27 03:25:38.021554 kubelet[2917]: E0527 03:25:38.020777 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-xwqrr" podUID="9c59cb9e-2eb7-4e20-b889-53cd0bb9e4ed" May 27 03:25:47.022714 kubelet[2917]: E0527 03:25:47.022624 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-555bcbc6ff-596vx" podUID="20923581-35ae-477b-83e9-35d75acd3c66" May 27 03:25:50.023476 kubelet[2917]: E0527 03:25:50.022820 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-xwqrr" podUID="9c59cb9e-2eb7-4e20-b889-53cd0bb9e4ed" May 27 03:25:51.942712 containerd[1560]: time="2025-05-27T03:25:51.942649361Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ca116aa036eb736891f311278a152a7e5a51b4c41f55c788135f885583d90f6b\" 
id:\"e1a8149553783e10406622d4b15c1078a028be7f02387e327144d92d43c32419\" pid:5355 exited_at:{seconds:1748316351 nanos:942187674}" May 27 03:25:55.428544 containerd[1560]: time="2025-05-27T03:25:55.419570430Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6987d0fe81036d50ab8ef5c9659e8d4befdda41d53233178124e5de21b1dc1b4\" id:\"a2d0e7f3ea69eb999c77ab9b390486c726e5a8aa23fada6569d8a775f1a01afd\" pid:5381 exited_at:{seconds:1748316355 nanos:419148117}" May 27 03:25:57.377500 containerd[1560]: time="2025-05-27T03:25:57.377449077Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6987d0fe81036d50ab8ef5c9659e8d4befdda41d53233178124e5de21b1dc1b4\" id:\"2d35869d71836c113f07e282f48407c651821eecb66c1ceff3a3fe87d928a3b6\" pid:5401 exited_at:{seconds:1748316357 nanos:377037143}" May 27 03:26:01.022076 containerd[1560]: time="2025-05-27T03:26:01.022013668Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 27 03:26:01.363656 containerd[1560]: time="2025-05-27T03:26:01.363481199Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 03:26:01.365239 containerd[1560]: time="2025-05-27T03:26:01.365145916Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 27 03:26:01.365239 containerd[1560]: time="2025-05-27T03:26:01.365183516Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active 
requests=0, bytes read=86" May 27 03:26:01.365633 kubelet[2917]: E0527 03:26:01.365392 2917 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 27 03:26:01.365633 kubelet[2917]: E0527 03:26:01.365451 2917 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 27 03:26:01.365633 kubelet[2917]: E0527 03:26:01.365602 2917 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:bee51492bca3428982f094867f4c4710,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6jzjr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-555bcbc6ff-596vx_calico-system(20923581-35ae-477b-83e9-35d75acd3c66): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 03:26:01.368488 containerd[1560]: 
time="2025-05-27T03:26:01.368452537Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 27 03:26:01.663503 containerd[1560]: time="2025-05-27T03:26:01.663394655Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 03:26:01.665328 containerd[1560]: time="2025-05-27T03:26:01.665200728Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 27 03:26:01.666055 containerd[1560]: time="2025-05-27T03:26:01.665223120Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 27 03:26:01.666128 kubelet[2917]: E0527 03:26:01.665654 2917 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 27 03:26:01.666128 kubelet[2917]: E0527 03:26:01.665720 2917 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 27 03:26:01.666128 kubelet[2917]: E0527 03:26:01.665876 2917 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6jzjr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMou
nt:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-555bcbc6ff-596vx_calico-system(20923581-35ae-477b-83e9-35d75acd3c66): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 03:26:01.667602 kubelet[2917]: E0527 03:26:01.667488 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-555bcbc6ff-596vx" podUID="20923581-35ae-477b-83e9-35d75acd3c66" May 27 03:26:04.020891 kubelet[2917]: E0527 03:26:04.020638 2917 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-xwqrr" podUID="9c59cb9e-2eb7-4e20-b889-53cd0bb9e4ed" May 27 03:26:14.025267 kubelet[2917]: E0527 03:26:14.024926 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-555bcbc6ff-596vx" podUID="20923581-35ae-477b-83e9-35d75acd3c66" May 27 03:26:18.022825 containerd[1560]: time="2025-05-27T03:26:18.022724973Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 27 03:26:18.504030 containerd[1560]: time="2025-05-27T03:26:18.503911577Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 03:26:18.505532 containerd[1560]: time="2025-05-27T03:26:18.505400413Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 27 03:26:18.505841 containerd[1560]: time="2025-05-27T03:26:18.505428386Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active 
requests=0, bytes read=86" May 27 03:26:18.506480 kubelet[2917]: E0527 03:26:18.505869 2917 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 27 03:26:18.506480 kubelet[2917]: E0527 03:26:18.505957 2917 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 27 03:26:18.506480 kubelet[2917]: E0527 03:26:18.506230 2917 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f7drb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-8f77d7b6c-xwqrr_calico-system(9c59cb9e-2eb7-4e20-b889-53cd0bb9e4ed): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 03:26:18.508371 kubelet[2917]: E0527 03:26:18.507940 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-8f77d7b6c-xwqrr" podUID="9c59cb9e-2eb7-4e20-b889-53cd0bb9e4ed" May 27 03:26:21.878449 containerd[1560]: 
time="2025-05-27T03:26:21.878378745Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ca116aa036eb736891f311278a152a7e5a51b4c41f55c788135f885583d90f6b\" id:\"cca0d659b51c1cfe6512ef65cc645c84f0eae4480bff9832e5f9baaa9154ab6d\" pid:5457 exited_at:{seconds:1748316381 nanos:877826378}" May 27 03:26:27.299137 containerd[1560]: time="2025-05-27T03:26:27.299062655Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6987d0fe81036d50ab8ef5c9659e8d4befdda41d53233178124e5de21b1dc1b4\" id:\"8dcc5253c561b992baaf55ac49ee26d4a6f0643f05e6e4f944efb8c78ad0df8f\" pid:5481 exited_at:{seconds:1748316387 nanos:298710974}" May 27 03:26:28.023455 kubelet[2917]: E0527 03:26:28.023337 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-555bcbc6ff-596vx" podUID="20923581-35ae-477b-83e9-35d75acd3c66" May 27 03:26:31.021534 kubelet[2917]: E0527 03:26:31.021185 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-xwqrr" podUID="9c59cb9e-2eb7-4e20-b889-53cd0bb9e4ed" May 27 03:26:41.021869 kubelet[2917]: E0527 03:26:41.021772 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-555bcbc6ff-596vx" 
podUID="20923581-35ae-477b-83e9-35d75acd3c66" May 27 03:26:43.021992 kubelet[2917]: E0527 03:26:43.021904 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-xwqrr" podUID="9c59cb9e-2eb7-4e20-b889-53cd0bb9e4ed" May 27 03:26:51.867408 containerd[1560]: time="2025-05-27T03:26:51.867351675Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ca116aa036eb736891f311278a152a7e5a51b4c41f55c788135f885583d90f6b\" id:\"0253fc80f2aeee410b16aa2a2d611fe21d71b294a3ba356eeb4f05953cd92ed2\" pid:5509 exited_at:{seconds:1748316411 nanos:866841598}" May 27 03:26:54.022846 kubelet[2917]: E0527 03:26:54.022398 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-555bcbc6ff-596vx" podUID="20923581-35ae-477b-83e9-35d75acd3c66" May 27 03:26:55.394896 containerd[1560]: time="2025-05-27T03:26:55.394831986Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6987d0fe81036d50ab8ef5c9659e8d4befdda41d53233178124e5de21b1dc1b4\" id:\"d2e5e44d09c44c2965be51f6a5dbebaf03b6ff9d6d4448f15cb55908d63a772a\" pid:5536 exited_at:{seconds:1748316415 nanos:394434269}" May 27 03:26:56.022801 kubelet[2917]: E0527 03:26:56.022350 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-xwqrr" podUID="9c59cb9e-2eb7-4e20-b889-53cd0bb9e4ed" May 27 03:26:57.307747 
containerd[1560]: time="2025-05-27T03:26:57.307610980Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6987d0fe81036d50ab8ef5c9659e8d4befdda41d53233178124e5de21b1dc1b4\" id:\"2063f3fdecec35f30e04874168b2ab80f0866876e9264e4b100dc6d49e15021c\" pid:5558 exited_at:{seconds:1748316417 nanos:306987420}" May 27 03:27:07.022477 kubelet[2917]: E0527 03:27:07.022399 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-555bcbc6ff-596vx" podUID="20923581-35ae-477b-83e9-35d75acd3c66" May 27 03:27:11.020087 kubelet[2917]: E0527 03:27:11.020023 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-xwqrr" podUID="9c59cb9e-2eb7-4e20-b889-53cd0bb9e4ed" May 27 03:27:21.020842 kubelet[2917]: E0527 03:27:21.020785 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-555bcbc6ff-596vx" podUID="20923581-35ae-477b-83e9-35d75acd3c66" May 27 03:27:21.861624 containerd[1560]: time="2025-05-27T03:27:21.861585163Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ca116aa036eb736891f311278a152a7e5a51b4c41f55c788135f885583d90f6b\" id:\"c479e7bff9d3ae64a507f9aa51ff93dc2a90f0d5bc0c287648be8d28489caa3c\" 
pid:5587 exited_at:{seconds:1748316441 nanos:861199579}" May 27 03:27:24.483501 systemd[1]: Started sshd@9-157.180.65.55:22-64.62.197.77:25317.service - OpenSSH per-connection server daemon (64.62.197.77:25317). May 27 03:27:24.566047 sshd[5602]: banner exchange: Connection from 64.62.197.77 port 25317: invalid format May 27 03:27:24.567461 systemd[1]: sshd@9-157.180.65.55:22-64.62.197.77:25317.service: Deactivated successfully. May 27 03:27:26.020739 kubelet[2917]: E0527 03:27:26.020607 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-xwqrr" podUID="9c59cb9e-2eb7-4e20-b889-53cd0bb9e4ed" May 27 03:27:27.300233 containerd[1560]: time="2025-05-27T03:27:27.300151253Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6987d0fe81036d50ab8ef5c9659e8d4befdda41d53233178124e5de21b1dc1b4\" id:\"e85d9e5fff2b65f2e8ed2f95b89108ee6ad18c007ad8b3c1780ed2635828bc04\" pid:5618 exited_at:{seconds:1748316447 nanos:299193163}" May 27 03:27:32.021714 containerd[1560]: time="2025-05-27T03:27:32.021588940Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 27 03:27:32.365435 containerd[1560]: time="2025-05-27T03:27:32.365213549Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 03:27:32.367124 containerd[1560]: time="2025-05-27T03:27:32.367029530Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch 
anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 27 03:27:32.367228 containerd[1560]: time="2025-05-27T03:27:32.367179002Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 27 03:27:32.367571 kubelet[2917]: E0527 03:27:32.367485 2917 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 27 03:27:32.368655 kubelet[2917]: E0527 03:27:32.367591 2917 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 27 03:27:32.368655 kubelet[2917]: E0527 03:27:32.368024 2917 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:bee51492bca3428982f094867f4c4710,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6jzjr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-555bcbc6ff-596vx_calico-system(20923581-35ae-477b-83e9-35d75acd3c66): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 03:27:32.370758 containerd[1560]: 
time="2025-05-27T03:27:32.370504378Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 27 03:27:32.681576 containerd[1560]: time="2025-05-27T03:27:32.681448102Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 03:27:32.683209 containerd[1560]: time="2025-05-27T03:27:32.683134290Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 27 03:27:32.683393 containerd[1560]: time="2025-05-27T03:27:32.683290823Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 27 03:27:32.683707 kubelet[2917]: E0527 03:27:32.683629 2917 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 27 03:27:32.683870 kubelet[2917]: E0527 03:27:32.683746 2917 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 27 03:27:32.684106 kubelet[2917]: E0527 03:27:32.683949 2917 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6jzjr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMou
nt:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-555bcbc6ff-596vx_calico-system(20923581-35ae-477b-83e9-35d75acd3c66): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 03:27:32.685825 kubelet[2917]: E0527 03:27:32.685658 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-555bcbc6ff-596vx" podUID="20923581-35ae-477b-83e9-35d75acd3c66" May 27 03:27:39.021268 containerd[1560]: 
time="2025-05-27T03:27:39.021096149Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 27 03:27:39.324786 containerd[1560]: time="2025-05-27T03:27:39.324591796Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 03:27:39.326007 containerd[1560]: time="2025-05-27T03:27:39.325960347Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 27 03:27:39.326295 containerd[1560]: time="2025-05-27T03:27:39.326034827Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 27 03:27:39.326783 kubelet[2917]: E0527 03:27:39.326144 2917 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 27 03:27:39.326783 kubelet[2917]: E0527 03:27:39.326210 2917 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed 
to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 27 03:27:39.326783 kubelet[2917]: E0527 03:27:39.326406 2917 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f7drb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-8f77d7b6c-xwqrr_calico-system(9c59cb9e-2eb7-4e20-b889-53cd0bb9e4ed): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 03:27:39.328506 kubelet[2917]: E0527 03:27:39.328453 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET 
request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-8f77d7b6c-xwqrr" podUID="9c59cb9e-2eb7-4e20-b889-53cd0bb9e4ed" May 27 03:27:47.022136 kubelet[2917]: E0527 03:27:47.022047 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-555bcbc6ff-596vx" podUID="20923581-35ae-477b-83e9-35d75acd3c66" May 27 03:27:51.879130 containerd[1560]: time="2025-05-27T03:27:51.879039718Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ca116aa036eb736891f311278a152a7e5a51b4c41f55c788135f885583d90f6b\" id:\"e4014e8426f6e78983629358c03423b34db8c0d5f89f3e30bdc8760100c6ceda\" pid:5663 exited_at:{seconds:1748316471 nanos:878578150}" May 27 03:27:52.021049 kubelet[2917]: E0527 03:27:52.021008 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-xwqrr" podUID="9c59cb9e-2eb7-4e20-b889-53cd0bb9e4ed" May 27 03:27:55.409884 containerd[1560]: time="2025-05-27T03:27:55.409822759Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6987d0fe81036d50ab8ef5c9659e8d4befdda41d53233178124e5de21b1dc1b4\" id:\"ad2fce0df2ae5e25511797b1b4cca745f70a682e2e7a34a146907c3973a52797\" pid:5689 exited_at:{seconds:1748316475 nanos:409276653}" May 27 03:27:57.290365 containerd[1560]: time="2025-05-27T03:27:57.290171594Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6987d0fe81036d50ab8ef5c9659e8d4befdda41d53233178124e5de21b1dc1b4\" 
id:\"daf7348988659e2a9e8d35d3685a6c8ab8767d8a278377112aa8028a47056f24\" pid:5710 exited_at:{seconds:1748316477 nanos:289084512}" May 27 03:27:59.023142 kubelet[2917]: E0527 03:27:59.023035 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-555bcbc6ff-596vx" podUID="20923581-35ae-477b-83e9-35d75acd3c66" May 27 03:28:05.020962 kubelet[2917]: E0527 03:28:05.020773 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-xwqrr" podUID="9c59cb9e-2eb7-4e20-b889-53cd0bb9e4ed" May 27 03:28:12.022894 kubelet[2917]: E0527 03:28:12.022654 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-555bcbc6ff-596vx" podUID="20923581-35ae-477b-83e9-35d75acd3c66" May 27 03:28:16.040609 kubelet[2917]: E0527 03:28:16.040487 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-xwqrr" podUID="9c59cb9e-2eb7-4e20-b889-53cd0bb9e4ed" May 27 03:28:21.863546 containerd[1560]: time="2025-05-27T03:28:21.863491506Z" level=info msg="TaskExit event in podsandbox 
handler container_id:\"ca116aa036eb736891f311278a152a7e5a51b4c41f55c788135f885583d90f6b\" id:\"53a4d7637a86a077310f923f158739a27103a47708f03282644fe064ffd43eba\" pid:5733 exited_at:{seconds:1748316501 nanos:862969405}" May 27 03:28:25.022590 kubelet[2917]: E0527 03:28:25.022519 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-555bcbc6ff-596vx" podUID="20923581-35ae-477b-83e9-35d75acd3c66" May 27 03:28:27.279571 containerd[1560]: time="2025-05-27T03:28:27.279503609Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6987d0fe81036d50ab8ef5c9659e8d4befdda41d53233178124e5de21b1dc1b4\" id:\"bcbd181ae087d51d51446e4164c07ab99a3b2cdf0136b957e9f8e1acbca79a1a\" pid:5756 exited_at:{seconds:1748316507 nanos:279025962}" May 27 03:28:30.021597 kubelet[2917]: E0527 03:28:30.020827 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-xwqrr" podUID="9c59cb9e-2eb7-4e20-b889-53cd0bb9e4ed" May 27 03:28:40.021654 kubelet[2917]: E0527 03:28:40.021560 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-555bcbc6ff-596vx" podUID="20923581-35ae-477b-83e9-35d75acd3c66" May 27 03:28:41.021014 kubelet[2917]: E0527 
03:28:41.020924 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-xwqrr" podUID="9c59cb9e-2eb7-4e20-b889-53cd0bb9e4ed" May 27 03:28:45.387243 systemd[1]: Started sshd@10-157.180.65.55:22-139.178.89.65:43760.service - OpenSSH per-connection server daemon (139.178.89.65:43760). May 27 03:28:46.385380 sshd[5771]: Accepted publickey for core from 139.178.89.65 port 43760 ssh2: RSA SHA256:TYEcoyDNyh3JcDi4HtBO5FTvj/3v8ZbDC5jXQ/LuvgI May 27 03:28:46.387954 sshd-session[5771]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:28:46.402471 systemd-logind[1551]: New session 8 of user core. May 27 03:28:46.408524 systemd[1]: Started session-8.scope - Session 8 of User core. May 27 03:28:47.633350 sshd[5773]: Connection closed by 139.178.89.65 port 43760 May 27 03:28:47.634697 sshd-session[5771]: pam_unix(sshd:session): session closed for user core May 27 03:28:47.643274 systemd[1]: sshd@10-157.180.65.55:22-139.178.89.65:43760.service: Deactivated successfully. May 27 03:28:47.649446 systemd[1]: session-8.scope: Deactivated successfully. May 27 03:28:47.649746 systemd[1]: session-8.scope: Consumed 312ms CPU time, 64.2M memory peak. May 27 03:28:47.653058 systemd-logind[1551]: Session 8 logged out. Waiting for processes to exit. May 27 03:28:47.656807 systemd-logind[1551]: Removed session 8. 
May 27 03:28:48.946363 containerd[1560]: time="2025-05-27T03:28:48.908580365Z" level=warning msg="container event discarded" container=77c3ce4fbd82588ca97a1c4c80006b546232f8c1aa240e09bde88a3f4fd7108a type=CONTAINER_CREATED_EVENT May 27 03:28:48.946363 containerd[1560]: time="2025-05-27T03:28:48.946357147Z" level=warning msg="container event discarded" container=77c3ce4fbd82588ca97a1c4c80006b546232f8c1aa240e09bde88a3f4fd7108a type=CONTAINER_STARTED_EVENT May 27 03:28:48.990657 containerd[1560]: time="2025-05-27T03:28:48.990575010Z" level=warning msg="container event discarded" container=36ee42ca221c418cae2323cb96fdcba3a24c205318c6b0294b5fd6ff2d315ec0 type=CONTAINER_CREATED_EVENT May 27 03:28:48.990657 containerd[1560]: time="2025-05-27T03:28:48.990640523Z" level=warning msg="container event discarded" container=36ee42ca221c418cae2323cb96fdcba3a24c205318c6b0294b5fd6ff2d315ec0 type=CONTAINER_STARTED_EVENT May 27 03:28:48.990657 containerd[1560]: time="2025-05-27T03:28:48.990651094Z" level=warning msg="container event discarded" container=a3e3014fadbab106ddb47608e8bfd87d94c6c669c6c718507f314f1e3fb803fa type=CONTAINER_CREATED_EVENT May 27 03:28:48.990657 containerd[1560]: time="2025-05-27T03:28:48.990657325Z" level=warning msg="container event discarded" container=a3e3014fadbab106ddb47608e8bfd87d94c6c669c6c718507f314f1e3fb803fa type=CONTAINER_STARTED_EVENT May 27 03:28:48.990657 containerd[1560]: time="2025-05-27T03:28:48.990665210Z" level=warning msg="container event discarded" container=903f789bd31d396056a501ba51338aa381bdaeceed50cdcd494fbd438ed1c16e type=CONTAINER_CREATED_EVENT May 27 03:28:48.990657 containerd[1560]: time="2025-05-27T03:28:48.990670570Z" level=warning msg="container event discarded" container=389cdffc1ab0eedf86b2f748186cff29bb3b1b73855795f5c53aec09b414f52b type=CONTAINER_CREATED_EVENT May 27 03:28:48.990657 containerd[1560]: time="2025-05-27T03:28:48.990677583Z" level=warning msg="container event discarded" 
container=33d7a427bc912c1d8108d17f503f8a08ba85d7bf96901a9d7952ed252cd738f6 type=CONTAINER_CREATED_EVENT May 27 03:28:49.064141 containerd[1560]: time="2025-05-27T03:28:49.064064406Z" level=warning msg="container event discarded" container=903f789bd31d396056a501ba51338aa381bdaeceed50cdcd494fbd438ed1c16e type=CONTAINER_STARTED_EVENT May 27 03:28:49.064141 containerd[1560]: time="2025-05-27T03:28:49.064119240Z" level=warning msg="container event discarded" container=389cdffc1ab0eedf86b2f748186cff29bb3b1b73855795f5c53aec09b414f52b type=CONTAINER_STARTED_EVENT May 27 03:28:49.084447 containerd[1560]: time="2025-05-27T03:28:49.084335069Z" level=warning msg="container event discarded" container=33d7a427bc912c1d8108d17f503f8a08ba85d7bf96901a9d7952ed252cd738f6 type=CONTAINER_STARTED_EVENT May 27 03:28:51.020868 kubelet[2917]: E0527 03:28:51.020831 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-555bcbc6ff-596vx" podUID="20923581-35ae-477b-83e9-35d75acd3c66" May 27 03:28:51.922898 containerd[1560]: time="2025-05-27T03:28:51.922798350Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ca116aa036eb736891f311278a152a7e5a51b4c41f55c788135f885583d90f6b\" id:\"d2c3574a3df7746f7f4fb5cb5a4c31139b50962fa9c753f40b131783821da074\" pid:5798 exited_at:{seconds:1748316531 nanos:922053521}" May 27 03:28:52.806028 systemd[1]: Started sshd@11-157.180.65.55:22-139.178.89.65:43762.service - OpenSSH per-connection server daemon (139.178.89.65:43762). 
May 27 03:28:53.853379 sshd[5810]: Accepted publickey for core from 139.178.89.65 port 43762 ssh2: RSA SHA256:TYEcoyDNyh3JcDi4HtBO5FTvj/3v8ZbDC5jXQ/LuvgI May 27 03:28:53.857016 sshd-session[5810]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:28:53.865418 systemd-logind[1551]: New session 9 of user core. May 27 03:28:53.872580 systemd[1]: Started session-9.scope - Session 9 of User core. May 27 03:28:54.821544 sshd[5812]: Connection closed by 139.178.89.65 port 43762 May 27 03:28:54.822231 sshd-session[5810]: pam_unix(sshd:session): session closed for user core May 27 03:28:54.826378 systemd[1]: sshd@11-157.180.65.55:22-139.178.89.65:43762.service: Deactivated successfully. May 27 03:28:54.828962 systemd[1]: session-9.scope: Deactivated successfully. May 27 03:28:54.830653 systemd-logind[1551]: Session 9 logged out. Waiting for processes to exit. May 27 03:28:54.833030 systemd-logind[1551]: Removed session 9. May 27 03:28:55.020705 kubelet[2917]: E0527 03:28:55.020651 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-xwqrr" podUID="9c59cb9e-2eb7-4e20-b889-53cd0bb9e4ed" May 27 03:28:55.409455 containerd[1560]: time="2025-05-27T03:28:55.409387060Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6987d0fe81036d50ab8ef5c9659e8d4befdda41d53233178124e5de21b1dc1b4\" id:\"d4c626f53a8df6e58149b390d0e99a62d4d88637733c9c9ec695dc2e2921d1e0\" pid:5840 exited_at:{seconds:1748316535 nanos:408808142}" May 27 03:28:57.302558 containerd[1560]: time="2025-05-27T03:28:57.302455020Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6987d0fe81036d50ab8ef5c9659e8d4befdda41d53233178124e5de21b1dc1b4\" id:\"fda3d8ff6fa50823ca9ddad864cc4eb8bd3c910ef03231238fc90439889d0dc6\" pid:5863 exited_at:{seconds:1748316537 
nanos:301471362}" May 27 03:29:00.000143 systemd[1]: Started sshd@12-157.180.65.55:22-139.178.89.65:58336.service - OpenSSH per-connection server daemon (139.178.89.65:58336). May 27 03:29:00.763157 containerd[1560]: time="2025-05-27T03:29:00.763033384Z" level=warning msg="container event discarded" container=4f21fd2da6a9d777689c6f7e502f86fc79cca9903513342446aa466a7efdb340 type=CONTAINER_CREATED_EVENT May 27 03:29:00.763157 containerd[1560]: time="2025-05-27T03:29:00.763135115Z" level=warning msg="container event discarded" container=4f21fd2da6a9d777689c6f7e502f86fc79cca9903513342446aa466a7efdb340 type=CONTAINER_STARTED_EVENT May 27 03:29:00.794483 containerd[1560]: time="2025-05-27T03:29:00.794383162Z" level=warning msg="container event discarded" container=0004022cf3d44dacc60455f3dc242f01fe161bad62b14f055091f72bdc46342d type=CONTAINER_CREATED_EVENT May 27 03:29:00.867842 containerd[1560]: time="2025-05-27T03:29:00.867755468Z" level=warning msg="container event discarded" container=0004022cf3d44dacc60455f3dc242f01fe161bad62b14f055091f72bdc46342d type=CONTAINER_STARTED_EVENT May 27 03:29:01.008893 sshd[5873]: Accepted publickey for core from 139.178.89.65 port 58336 ssh2: RSA SHA256:TYEcoyDNyh3JcDi4HtBO5FTvj/3v8ZbDC5jXQ/LuvgI May 27 03:29:01.010711 sshd-session[5873]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:29:01.018361 systemd-logind[1551]: New session 10 of user core. May 27 03:29:01.024615 systemd[1]: Started session-10.scope - Session 10 of User core. 
May 27 03:29:01.078466 containerd[1560]: time="2025-05-27T03:29:01.078367723Z" level=warning msg="container event discarded" container=8de8d631426da3d149cb076c0e0725a41c3018fcd8af9972f84377638af2c79e type=CONTAINER_CREATED_EVENT May 27 03:29:01.078466 containerd[1560]: time="2025-05-27T03:29:01.078441822Z" level=warning msg="container event discarded" container=8de8d631426da3d149cb076c0e0725a41c3018fcd8af9972f84377638af2c79e type=CONTAINER_STARTED_EVENT May 27 03:29:01.790196 sshd[5875]: Connection closed by 139.178.89.65 port 58336 May 27 03:29:01.791383 sshd-session[5873]: pam_unix(sshd:session): session closed for user core May 27 03:29:01.797713 systemd[1]: sshd@12-157.180.65.55:22-139.178.89.65:58336.service: Deactivated successfully. May 27 03:29:01.797773 systemd-logind[1551]: Session 10 logged out. Waiting for processes to exit. May 27 03:29:01.800682 systemd[1]: session-10.scope: Deactivated successfully. May 27 03:29:01.803103 systemd-logind[1551]: Removed session 10. May 27 03:29:04.626610 containerd[1560]: time="2025-05-27T03:29:04.626420505Z" level=warning msg="container event discarded" container=03409ea003ada58d3a703b36bc2fc64ad8aa306c2b26978c26c58a234a807518 type=CONTAINER_CREATED_EVENT May 27 03:29:04.690078 containerd[1560]: time="2025-05-27T03:29:04.689970769Z" level=warning msg="container event discarded" container=03409ea003ada58d3a703b36bc2fc64ad8aa306c2b26978c26c58a234a807518 type=CONTAINER_STARTED_EVENT May 27 03:29:05.022052 kubelet[2917]: E0527 03:29:05.021814 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-555bcbc6ff-596vx" podUID="20923581-35ae-477b-83e9-35d75acd3c66" May 27 03:29:06.972529 
systemd[1]: Started sshd@13-157.180.65.55:22-139.178.89.65:43328.service - OpenSSH per-connection server daemon (139.178.89.65:43328). May 27 03:29:07.570540 containerd[1560]: time="2025-05-27T03:29:07.570428265Z" level=warning msg="container event discarded" container=03409ea003ada58d3a703b36bc2fc64ad8aa306c2b26978c26c58a234a807518 type=CONTAINER_STOPPED_EVENT May 27 03:29:07.972709 sshd[5891]: Accepted publickey for core from 139.178.89.65 port 43328 ssh2: RSA SHA256:TYEcoyDNyh3JcDi4HtBO5FTvj/3v8ZbDC5jXQ/LuvgI May 27 03:29:07.974781 sshd-session[5891]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:29:07.984618 systemd-logind[1551]: New session 11 of user core. May 27 03:29:07.993579 systemd[1]: Started session-11.scope - Session 11 of User core. May 27 03:29:08.132858 containerd[1560]: time="2025-05-27T03:29:08.132774715Z" level=warning msg="container event discarded" container=ee6cfaeb5878b0d01ca44d402aa7ee5bb813a128765c37e0540c88fc7087f270 type=CONTAINER_CREATED_EVENT May 27 03:29:08.212115 containerd[1560]: time="2025-05-27T03:29:08.211951427Z" level=warning msg="container event discarded" container=ee6cfaeb5878b0d01ca44d402aa7ee5bb813a128765c37e0540c88fc7087f270 type=CONTAINER_STARTED_EVENT May 27 03:29:08.738436 sshd[5893]: Connection closed by 139.178.89.65 port 43328 May 27 03:29:08.739531 sshd-session[5891]: pam_unix(sshd:session): session closed for user core May 27 03:29:08.746393 systemd[1]: sshd@13-157.180.65.55:22-139.178.89.65:43328.service: Deactivated successfully. May 27 03:29:08.751779 systemd[1]: session-11.scope: Deactivated successfully. May 27 03:29:08.754628 systemd-logind[1551]: Session 11 logged out. Waiting for processes to exit. May 27 03:29:08.757496 systemd-logind[1551]: Removed session 11. 
May 27 03:29:10.025361 kubelet[2917]: E0527 03:29:10.024951 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-xwqrr" podUID="9c59cb9e-2eb7-4e20-b889-53cd0bb9e4ed" May 27 03:29:13.912074 systemd[1]: Started sshd@14-157.180.65.55:22-139.178.89.65:45944.service - OpenSSH per-connection server daemon (139.178.89.65:45944). May 27 03:29:14.908236 sshd[5907]: Accepted publickey for core from 139.178.89.65 port 45944 ssh2: RSA SHA256:TYEcoyDNyh3JcDi4HtBO5FTvj/3v8ZbDC5jXQ/LuvgI May 27 03:29:14.910482 sshd-session[5907]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:29:14.918399 systemd-logind[1551]: New session 12 of user core. May 27 03:29:14.927607 systemd[1]: Started session-12.scope - Session 12 of User core. May 27 03:29:15.690416 sshd[5909]: Connection closed by 139.178.89.65 port 45944 May 27 03:29:15.691367 sshd-session[5907]: pam_unix(sshd:session): session closed for user core May 27 03:29:15.697839 systemd[1]: sshd@14-157.180.65.55:22-139.178.89.65:45944.service: Deactivated successfully. May 27 03:29:15.702249 systemd[1]: session-12.scope: Deactivated successfully. May 27 03:29:15.704871 systemd-logind[1551]: Session 12 logged out. Waiting for processes to exit. May 27 03:29:15.707585 systemd-logind[1551]: Removed session 12. 
May 27 03:29:16.674594 containerd[1560]: time="2025-05-27T03:29:16.674492019Z" level=warning msg="container event discarded" container=faf89b9b5b7318cffc662fa48acb453fb03c93ad21627a8e41c74bb81b1daca3 type=CONTAINER_CREATED_EVENT May 27 03:29:16.674594 containerd[1560]: time="2025-05-27T03:29:16.674566670Z" level=warning msg="container event discarded" container=faf89b9b5b7318cffc662fa48acb453fb03c93ad21627a8e41c74bb81b1daca3 type=CONTAINER_STARTED_EVENT May 27 03:29:16.865437 containerd[1560]: time="2025-05-27T03:29:16.865294928Z" level=warning msg="container event discarded" container=ad3f0ab2b012898f691697013f7d63cd0411709abc2974212bb20843dd373b7f type=CONTAINER_CREATED_EVENT May 27 03:29:16.865437 containerd[1560]: time="2025-05-27T03:29:16.865403382Z" level=warning msg="container event discarded" container=ad3f0ab2b012898f691697013f7d63cd0411709abc2974212bb20843dd373b7f type=CONTAINER_STARTED_EVENT May 27 03:29:18.887374 containerd[1560]: time="2025-05-27T03:29:18.887264444Z" level=warning msg="container event discarded" container=13aeff8eee5ce2b661fe147b1da5843a052ce94da216617f511ae5ad6b6f6374 type=CONTAINER_CREATED_EVENT May 27 03:29:18.959346 containerd[1560]: time="2025-05-27T03:29:18.959050801Z" level=warning msg="container event discarded" container=13aeff8eee5ce2b661fe147b1da5843a052ce94da216617f511ae5ad6b6f6374 type=CONTAINER_STARTED_EVENT May 27 03:29:19.015431 containerd[1560]: time="2025-05-27T03:29:19.015329384Z" level=warning msg="container event discarded" container=13aeff8eee5ce2b661fe147b1da5843a052ce94da216617f511ae5ad6b6f6374 type=CONTAINER_STOPPED_EVENT May 27 03:29:20.022046 kubelet[2917]: E0527 03:29:20.021596 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-555bcbc6ff-596vx" podUID="20923581-35ae-477b-83e9-35d75acd3c66" May 27 03:29:20.869775 systemd[1]: Started sshd@15-157.180.65.55:22-139.178.89.65:45956.service - OpenSSH per-connection server daemon (139.178.89.65:45956). May 27 03:29:21.854797 sshd[5937]: Accepted publickey for core from 139.178.89.65 port 45956 ssh2: RSA SHA256:TYEcoyDNyh3JcDi4HtBO5FTvj/3v8ZbDC5jXQ/LuvgI May 27 03:29:21.856880 sshd-session[5937]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:29:21.858972 containerd[1560]: time="2025-05-27T03:29:21.858909727Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ca116aa036eb736891f311278a152a7e5a51b4c41f55c788135f885583d90f6b\" id:\"46d16ce8e642080f3e12860c56c8c68cc8a01b64c7e0f51e2d4aba84d513304b\" pid:5951 exited_at:{seconds:1748316561 nanos:858145482}" May 27 03:29:21.863130 systemd-logind[1551]: New session 13 of user core. May 27 03:29:21.868542 systemd[1]: Started session-13.scope - Session 13 of User core. May 27 03:29:21.894822 containerd[1560]: time="2025-05-27T03:29:21.894745212Z" level=warning msg="container event discarded" container=8cf523767de7405a589c613d491df19eea82087fb5c928ec8b56fade9cd23067 type=CONTAINER_CREATED_EVENT May 27 03:29:21.985409 containerd[1560]: time="2025-05-27T03:29:21.985276388Z" level=warning msg="container event discarded" container=8cf523767de7405a589c613d491df19eea82087fb5c928ec8b56fade9cd23067 type=CONTAINER_STARTED_EVENT May 27 03:29:22.642544 sshd[5961]: Connection closed by 139.178.89.65 port 45956 May 27 03:29:22.643545 sshd-session[5937]: pam_unix(sshd:session): session closed for user core May 27 03:29:22.649979 systemd[1]: sshd@15-157.180.65.55:22-139.178.89.65:45956.service: Deactivated successfully. May 27 03:29:22.653591 systemd[1]: session-13.scope: Deactivated successfully. May 27 03:29:22.655637 systemd-logind[1551]: Session 13 logged out. 
Waiting for processes to exit. May 27 03:29:22.658639 systemd-logind[1551]: Removed session 13. May 27 03:29:25.021495 kubelet[2917]: E0527 03:29:25.021407 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-xwqrr" podUID="9c59cb9e-2eb7-4e20-b889-53cd0bb9e4ed" May 27 03:29:26.185851 containerd[1560]: time="2025-05-27T03:29:26.185719783Z" level=warning msg="container event discarded" container=5bffa319100b67e5afff808a9e0b07d21dec7590be8afa2f79e3632350c9080e type=CONTAINER_CREATED_EVENT May 27 03:29:26.254514 containerd[1560]: time="2025-05-27T03:29:26.254404133Z" level=warning msg="container event discarded" container=5bffa319100b67e5afff808a9e0b07d21dec7590be8afa2f79e3632350c9080e type=CONTAINER_STARTED_EVENT May 27 03:29:26.805033 containerd[1560]: time="2025-05-27T03:29:26.804925361Z" level=warning msg="container event discarded" container=5bffa319100b67e5afff808a9e0b07d21dec7590be8afa2f79e3632350c9080e type=CONTAINER_STOPPED_EVENT May 27 03:29:27.313635 containerd[1560]: time="2025-05-27T03:29:27.313570045Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6987d0fe81036d50ab8ef5c9659e8d4befdda41d53233178124e5de21b1dc1b4\" id:\"49dd03383c0b55e0d71f87369479d780f1a0a882fb60bd25d2702122287d18ca\" pid:5993 exited_at:{seconds:1748316567 nanos:313093370}" May 27 03:29:27.814141 systemd[1]: Started sshd@16-157.180.65.55:22-139.178.89.65:41874.service - OpenSSH per-connection server daemon (139.178.89.65:41874). May 27 03:29:28.803300 sshd[6003]: Accepted publickey for core from 139.178.89.65 port 41874 ssh2: RSA SHA256:TYEcoyDNyh3JcDi4HtBO5FTvj/3v8ZbDC5jXQ/LuvgI May 27 03:29:28.804769 sshd-session[6003]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:29:28.811954 systemd-logind[1551]: New session 14 of user core. 
May 27 03:29:28.820660 systemd[1]: Started session-14.scope - Session 14 of User core. May 27 03:29:29.570538 sshd[6005]: Connection closed by 139.178.89.65 port 41874 May 27 03:29:29.571350 sshd-session[6003]: pam_unix(sshd:session): session closed for user core May 27 03:29:29.575432 systemd-logind[1551]: Session 14 logged out. Waiting for processes to exit. May 27 03:29:29.576182 systemd[1]: sshd@16-157.180.65.55:22-139.178.89.65:41874.service: Deactivated successfully. May 27 03:29:29.578199 systemd[1]: session-14.scope: Deactivated successfully. May 27 03:29:29.580171 systemd-logind[1551]: Removed session 14. May 27 03:29:34.747378 systemd[1]: Started sshd@17-157.180.65.55:22-139.178.89.65:52368.service - OpenSSH per-connection server daemon (139.178.89.65:52368). May 27 03:29:35.021675 kubelet[2917]: E0527 03:29:35.021399 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-555bcbc6ff-596vx" podUID="20923581-35ae-477b-83e9-35d75acd3c66" May 27 03:29:35.294775 containerd[1560]: time="2025-05-27T03:29:35.294582021Z" level=warning msg="container event discarded" container=ca116aa036eb736891f311278a152a7e5a51b4c41f55c788135f885583d90f6b type=CONTAINER_CREATED_EVENT May 27 03:29:35.442211 containerd[1560]: time="2025-05-27T03:29:35.442130248Z" level=warning msg="container event discarded" container=ca116aa036eb736891f311278a152a7e5a51b4c41f55c788135f885583d90f6b type=CONTAINER_STARTED_EVENT May 27 03:29:35.746138 sshd[6020]: Accepted publickey for core from 139.178.89.65 port 52368 ssh2: RSA SHA256:TYEcoyDNyh3JcDi4HtBO5FTvj/3v8ZbDC5jXQ/LuvgI May 27 03:29:35.749141 sshd-session[6020]: pam_unix(sshd:session): session opened for user 
core(uid=500) by core(uid=0) May 27 03:29:35.759777 systemd-logind[1551]: New session 15 of user core. May 27 03:29:35.765623 systemd[1]: Started session-15.scope - Session 15 of User core. May 27 03:29:36.496077 sshd[6023]: Connection closed by 139.178.89.65 port 52368 May 27 03:29:36.496746 sshd-session[6020]: pam_unix(sshd:session): session closed for user core May 27 03:29:36.501264 systemd[1]: sshd@17-157.180.65.55:22-139.178.89.65:52368.service: Deactivated successfully. May 27 03:29:36.503083 systemd[1]: session-15.scope: Deactivated successfully. May 27 03:29:36.504220 systemd-logind[1551]: Session 15 logged out. Waiting for processes to exit. May 27 03:29:36.505938 systemd-logind[1551]: Removed session 15. May 27 03:29:37.569283 containerd[1560]: time="2025-05-27T03:29:37.569104955Z" level=warning msg="container event discarded" container=35994e0ab62610f3a358787a7a33ffd0af91d62b991202d7690c7630ccf42d95 type=CONTAINER_CREATED_EVENT May 27 03:29:37.569283 containerd[1560]: time="2025-05-27T03:29:37.569216294Z" level=warning msg="container event discarded" container=35994e0ab62610f3a358787a7a33ffd0af91d62b991202d7690c7630ccf42d95 type=CONTAINER_STARTED_EVENT May 27 03:29:38.345464 containerd[1560]: time="2025-05-27T03:29:38.345354433Z" level=warning msg="container event discarded" container=bf600c6bf6c300fd986e2592eaaea620bc22e474bb01b4ce18013673794d700b type=CONTAINER_CREATED_EVENT May 27 03:29:38.345464 containerd[1560]: time="2025-05-27T03:29:38.345416699Z" level=warning msg="container event discarded" container=bf600c6bf6c300fd986e2592eaaea620bc22e474bb01b4ce18013673794d700b type=CONTAINER_STARTED_EVENT May 27 03:29:39.284739 containerd[1560]: time="2025-05-27T03:29:39.284611992Z" level=warning msg="container event discarded" container=e2c47979994994b4d9188165ba51d83b2673c5503f35996453e95a63a45b02ee type=CONTAINER_CREATED_EVENT May 27 03:29:39.284739 containerd[1560]: time="2025-05-27T03:29:39.284696690Z" level=warning msg="container event discarded" 
container=e2c47979994994b4d9188165ba51d83b2673c5503f35996453e95a63a45b02ee type=CONTAINER_STARTED_EVENT May 27 03:29:40.021473 kubelet[2917]: E0527 03:29:40.021106 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-xwqrr" podUID="9c59cb9e-2eb7-4e20-b889-53cd0bb9e4ed" May 27 03:29:40.390772 containerd[1560]: time="2025-05-27T03:29:40.390542316Z" level=warning msg="container event discarded" container=08e05df50c99994a57a3f1d3ca68d0faeac4047e6e21792ba52dd1f7c337fe63 type=CONTAINER_CREATED_EVENT May 27 03:29:40.390772 containerd[1560]: time="2025-05-27T03:29:40.390624882Z" level=warning msg="container event discarded" container=08e05df50c99994a57a3f1d3ca68d0faeac4047e6e21792ba52dd1f7c337fe63 type=CONTAINER_STARTED_EVENT May 27 03:29:40.448927 containerd[1560]: time="2025-05-27T03:29:40.448851595Z" level=warning msg="container event discarded" container=c93ea7c16990791bf40083bdf097574eef712b5c937b00194110218c7ad799a9 type=CONTAINER_CREATED_EVENT May 27 03:29:40.449111 containerd[1560]: time="2025-05-27T03:29:40.448947816Z" level=warning msg="container event discarded" container=c93ea7c16990791bf40083bdf097574eef712b5c937b00194110218c7ad799a9 type=CONTAINER_STARTED_EVENT May 27 03:29:41.454552 containerd[1560]: time="2025-05-27T03:29:41.454468397Z" level=warning msg="container event discarded" container=4c52555b352e777aa279f0596dccf56c8dc75ccfd97f66b5486fecae7b784fa7 type=CONTAINER_CREATED_EVENT May 27 03:29:41.454552 containerd[1560]: time="2025-05-27T03:29:41.454533328Z" level=warning msg="container event discarded" container=4c52555b352e777aa279f0596dccf56c8dc75ccfd97f66b5486fecae7b784fa7 type=CONTAINER_STARTED_EVENT May 27 03:29:41.528949 containerd[1560]: time="2025-05-27T03:29:41.528876450Z" level=warning msg="container event discarded" 
container=02dc94b802c0e58c670018403916dbf0478025b84cbf00931d3deaf215782fb7 type=CONTAINER_CREATED_EVENT May 27 03:29:41.528949 containerd[1560]: time="2025-05-27T03:29:41.528942304Z" level=warning msg="container event discarded" container=929a407c98c6f921318078db51fcdb057ed2bf3be9510d94d75e5600fefea35a type=CONTAINER_CREATED_EVENT May 27 03:29:41.529135 containerd[1560]: time="2025-05-27T03:29:41.528957333Z" level=warning msg="container event discarded" container=929a407c98c6f921318078db51fcdb057ed2bf3be9510d94d75e5600fefea35a type=CONTAINER_STARTED_EVENT May 27 03:29:41.614589 containerd[1560]: time="2025-05-27T03:29:41.614386843Z" level=warning msg="container event discarded" container=02dc94b802c0e58c670018403916dbf0478025b84cbf00931d3deaf215782fb7 type=CONTAINER_STARTED_EVENT May 27 03:29:41.667274 systemd[1]: Started sshd@18-157.180.65.55:22-139.178.89.65:52384.service - OpenSSH per-connection server daemon (139.178.89.65:52384). May 27 03:29:42.240331 containerd[1560]: time="2025-05-27T03:29:42.240241236Z" level=warning msg="container event discarded" container=6987d0fe81036d50ab8ef5c9659e8d4befdda41d53233178124e5de21b1dc1b4 type=CONTAINER_CREATED_EVENT May 27 03:29:42.441164 containerd[1560]: time="2025-05-27T03:29:42.441066597Z" level=warning msg="container event discarded" container=6987d0fe81036d50ab8ef5c9659e8d4befdda41d53233178124e5de21b1dc1b4 type=CONTAINER_STARTED_EVENT May 27 03:29:42.652608 sshd[6037]: Accepted publickey for core from 139.178.89.65 port 52384 ssh2: RSA SHA256:TYEcoyDNyh3JcDi4HtBO5FTvj/3v8ZbDC5jXQ/LuvgI May 27 03:29:42.654344 sshd-session[6037]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:29:42.659071 systemd-logind[1551]: New session 16 of user core. May 27 03:29:42.664415 systemd[1]: Started session-16.scope - Session 16 of User core. 
May 27 03:29:43.398016 containerd[1560]: time="2025-05-27T03:29:43.397901711Z" level=warning msg="container event discarded" container=b4294c0b731ae014ec29673225aec9b00ec6aeac4da42dd5f9403c633217df3e type=CONTAINER_CREATED_EVENT May 27 03:29:43.398714 containerd[1560]: time="2025-05-27T03:29:43.398242482Z" level=warning msg="container event discarded" container=b4294c0b731ae014ec29673225aec9b00ec6aeac4da42dd5f9403c633217df3e type=CONTAINER_STARTED_EVENT May 27 03:29:43.429039 containerd[1560]: time="2025-05-27T03:29:43.428960021Z" level=warning msg="container event discarded" container=2624af6d4887ca34dc4e6370cae3b15144211f8188e5b666285a4772c56f6c26 type=CONTAINER_CREATED_EVENT May 27 03:29:43.473191 sshd[6039]: Connection closed by 139.178.89.65 port 52384 May 27 03:29:43.474661 sshd-session[6037]: pam_unix(sshd:session): session closed for user core May 27 03:29:43.477479 systemd-logind[1551]: Session 16 logged out. Waiting for processes to exit. May 27 03:29:43.479007 systemd[1]: sshd@18-157.180.65.55:22-139.178.89.65:52384.service: Deactivated successfully. May 27 03:29:43.481664 systemd[1]: session-16.scope: Deactivated successfully. May 27 03:29:43.484493 systemd-logind[1551]: Removed session 16. 
May 27 03:29:43.502659 containerd[1560]: time="2025-05-27T03:29:43.502592128Z" level=warning msg="container event discarded" container=2624af6d4887ca34dc4e6370cae3b15144211f8188e5b666285a4772c56f6c26 type=CONTAINER_STARTED_EVENT
May 27 03:29:45.605496 containerd[1560]: time="2025-05-27T03:29:45.605289534Z" level=warning msg="container event discarded" container=1e86b04a526f19fd99edb2ee8444b9a69ad88210ceb7d5eb58bce4561ce303d8 type=CONTAINER_CREATED_EVENT
May 27 03:29:45.673901 containerd[1560]: time="2025-05-27T03:29:45.673809586Z" level=warning msg="container event discarded" container=1e86b04a526f19fd99edb2ee8444b9a69ad88210ceb7d5eb58bce4561ce303d8 type=CONTAINER_STARTED_EVENT
May 27 03:29:46.021409 kubelet[2917]: E0527 03:29:46.021293 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-555bcbc6ff-596vx" podUID="20923581-35ae-477b-83e9-35d75acd3c66"
May 27 03:29:46.068767 containerd[1560]: time="2025-05-27T03:29:46.068613013Z" level=warning msg="container event discarded" container=bf938f3bdfc56a1cf096b1e2311df2756af1c8a701a974c71f2c6c977f183afd type=CONTAINER_CREATED_EVENT
May 27 03:29:46.145234 containerd[1560]: time="2025-05-27T03:29:46.145052764Z" level=warning msg="container event discarded" container=bf938f3bdfc56a1cf096b1e2311df2756af1c8a701a974c71f2c6c977f183afd type=CONTAINER_STARTED_EVENT
May 27 03:29:48.230647 containerd[1560]: time="2025-05-27T03:29:48.230579541Z" level=warning msg="container event discarded" container=fc4a6f44967f460ef8f6839cad9799886f35f93a22061e9b3bdb454c2460cff5 type=CONTAINER_CREATED_EVENT
May 27 03:29:48.324494 containerd[1560]: time="2025-05-27T03:29:48.324436574Z" level=warning msg="container event discarded" container=fc4a6f44967f460ef8f6839cad9799886f35f93a22061e9b3bdb454c2460cff5 type=CONTAINER_STARTED_EVENT
May 27 03:29:48.649663 systemd[1]: Started sshd@19-157.180.65.55:22-139.178.89.65:47904.service - OpenSSH per-connection server daemon (139.178.89.65:47904).
May 27 03:29:49.651035 sshd[6053]: Accepted publickey for core from 139.178.89.65 port 47904 ssh2: RSA SHA256:TYEcoyDNyh3JcDi4HtBO5FTvj/3v8ZbDC5jXQ/LuvgI
May 27 03:29:49.652145 sshd-session[6053]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:29:49.660913 systemd-logind[1551]: New session 17 of user core.
May 27 03:29:49.669461 systemd[1]: Started session-17.scope - Session 17 of User core.
May 27 03:29:50.406082 sshd[6055]: Connection closed by 139.178.89.65 port 47904
May 27 03:29:50.406508 sshd-session[6053]: pam_unix(sshd:session): session closed for user core
May 27 03:29:50.412739 systemd-logind[1551]: Session 17 logged out. Waiting for processes to exit.
May 27 03:29:50.413121 systemd[1]: sshd@19-157.180.65.55:22-139.178.89.65:47904.service: Deactivated successfully.
May 27 03:29:50.414880 systemd[1]: session-17.scope: Deactivated successfully.
May 27 03:29:50.418436 systemd-logind[1551]: Removed session 17.
May 27 03:29:50.713549 containerd[1560]: time="2025-05-27T03:29:50.713365719Z" level=warning msg="container event discarded" container=44cb114a0f92ddc05f55a353cb73b01742faf6ad3f93ce20c14cc41083700936 type=CONTAINER_CREATED_EVENT
May 27 03:29:50.779690 containerd[1560]: time="2025-05-27T03:29:50.779588055Z" level=warning msg="container event discarded" container=44cb114a0f92ddc05f55a353cb73b01742faf6ad3f93ce20c14cc41083700936 type=CONTAINER_STARTED_EVENT
May 27 03:29:51.872772 containerd[1560]: time="2025-05-27T03:29:51.872702265Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ca116aa036eb736891f311278a152a7e5a51b4c41f55c788135f885583d90f6b\" id:\"3f18165229c50e10424f7a03f5ed35c0eb8099f1eee0e160dc5e9b887fcf9b82\" pid:6079 exited_at:{seconds:1748316591 nanos:871887133}"
May 27 03:29:54.022340 kubelet[2917]: E0527 03:29:54.022220 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-xwqrr" podUID="9c59cb9e-2eb7-4e20-b889-53cd0bb9e4ed"
May 27 03:29:55.404030 containerd[1560]: time="2025-05-27T03:29:55.403969885Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6987d0fe81036d50ab8ef5c9659e8d4befdda41d53233178124e5de21b1dc1b4\" id:\"c4c3483fa861e110273e24b97f7b026c833e00c34cb67eb4955e55186300b01f\" pid:6105 exited_at:{seconds:1748316595 nanos:403748157}"
May 27 03:29:55.574360 systemd[1]: Started sshd@20-157.180.65.55:22-139.178.89.65:43792.service - OpenSSH per-connection server daemon (139.178.89.65:43792).
May 27 03:29:56.573535 sshd[6115]: Accepted publickey for core from 139.178.89.65 port 43792 ssh2: RSA SHA256:TYEcoyDNyh3JcDi4HtBO5FTvj/3v8ZbDC5jXQ/LuvgI
May 27 03:29:56.575891 sshd-session[6115]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:29:56.586418 systemd-logind[1551]: New session 18 of user core.
May 27 03:29:56.589563 systemd[1]: Started session-18.scope - Session 18 of User core.
May 27 03:29:57.280586 containerd[1560]: time="2025-05-27T03:29:57.280257507Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6987d0fe81036d50ab8ef5c9659e8d4befdda41d53233178124e5de21b1dc1b4\" id:\"95eda8c64775b71758b88c594eb19b82b40d0656a3139442b0347730c5e39273\" pid:6139 exited_at:{seconds:1748316597 nanos:279458956}"
May 27 03:29:57.361497 sshd[6117]: Connection closed by 139.178.89.65 port 43792
May 27 03:29:57.362695 sshd-session[6115]: pam_unix(sshd:session): session closed for user core
May 27 03:29:57.368727 systemd[1]: sshd@20-157.180.65.55:22-139.178.89.65:43792.service: Deactivated successfully.
May 27 03:29:57.372723 systemd[1]: session-18.scope: Deactivated successfully.
May 27 03:29:57.375259 systemd-logind[1551]: Session 18 logged out. Waiting for processes to exit.
May 27 03:29:57.378673 systemd-logind[1551]: Removed session 18.
May 27 03:29:58.034364 kubelet[2917]: E0527 03:29:58.032886 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-555bcbc6ff-596vx" podUID="20923581-35ae-477b-83e9-35d75acd3c66"
May 27 03:30:02.533937 systemd[1]: Started sshd@21-157.180.65.55:22-139.178.89.65:43796.service - OpenSSH per-connection server daemon (139.178.89.65:43796).
May 27 03:30:03.542451 sshd[6160]: Accepted publickey for core from 139.178.89.65 port 43796 ssh2: RSA SHA256:TYEcoyDNyh3JcDi4HtBO5FTvj/3v8ZbDC5jXQ/LuvgI
May 27 03:30:03.544982 sshd-session[6160]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:30:03.554803 systemd-logind[1551]: New session 19 of user core.
May 27 03:30:03.560610 systemd[1]: Started session-19.scope - Session 19 of User core.
May 27 03:30:04.324638 sshd[6164]: Connection closed by 139.178.89.65 port 43796
May 27 03:30:04.325423 sshd-session[6160]: pam_unix(sshd:session): session closed for user core
May 27 03:30:04.329280 systemd[1]: sshd@21-157.180.65.55:22-139.178.89.65:43796.service: Deactivated successfully.
May 27 03:30:04.331444 systemd[1]: session-19.scope: Deactivated successfully.
May 27 03:30:04.333897 systemd-logind[1551]: Session 19 logged out. Waiting for processes to exit.
May 27 03:30:04.335368 systemd-logind[1551]: Removed session 19.
May 27 03:30:05.021204 kubelet[2917]: E0527 03:30:05.020780 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-xwqrr" podUID="9c59cb9e-2eb7-4e20-b889-53cd0bb9e4ed"
May 27 03:30:09.021080 kubelet[2917]: E0527 03:30:09.020989 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-555bcbc6ff-596vx" podUID="20923581-35ae-477b-83e9-35d75acd3c66"
May 27 03:30:09.495146 systemd[1]: Started sshd@22-157.180.65.55:22-139.178.89.65:37680.service - OpenSSH per-connection server daemon (139.178.89.65:37680).
May 27 03:30:10.483088 sshd[6177]: Accepted publickey for core from 139.178.89.65 port 37680 ssh2: RSA SHA256:TYEcoyDNyh3JcDi4HtBO5FTvj/3v8ZbDC5jXQ/LuvgI
May 27 03:30:10.485508 sshd-session[6177]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:30:10.494242 systemd-logind[1551]: New session 20 of user core.
May 27 03:30:10.500685 systemd[1]: Started session-20.scope - Session 20 of User core.
May 27 03:30:11.233975 sshd[6179]: Connection closed by 139.178.89.65 port 37680
May 27 03:30:11.235416 sshd-session[6177]: pam_unix(sshd:session): session closed for user core
May 27 03:30:11.240011 systemd-logind[1551]: Session 20 logged out. Waiting for processes to exit.
May 27 03:30:11.240645 systemd[1]: sshd@22-157.180.65.55:22-139.178.89.65:37680.service: Deactivated successfully.
May 27 03:30:11.244196 systemd[1]: session-20.scope: Deactivated successfully.
May 27 03:30:11.246754 systemd-logind[1551]: Removed session 20.
May 27 03:30:16.022551 kubelet[2917]: E0527 03:30:16.021791 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-xwqrr" podUID="9c59cb9e-2eb7-4e20-b889-53cd0bb9e4ed"
May 27 03:30:16.407542 systemd[1]: Started sshd@23-157.180.65.55:22-139.178.89.65:37616.service - OpenSSH per-connection server daemon (139.178.89.65:37616).
May 27 03:30:17.399191 sshd[6192]: Accepted publickey for core from 139.178.89.65 port 37616 ssh2: RSA SHA256:TYEcoyDNyh3JcDi4HtBO5FTvj/3v8ZbDC5jXQ/LuvgI
May 27 03:30:17.402166 sshd-session[6192]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:30:17.412560 systemd-logind[1551]: New session 21 of user core.
May 27 03:30:17.416585 systemd[1]: Started session-21.scope - Session 21 of User core.
May 27 03:30:18.194722 sshd[6194]: Connection closed by 139.178.89.65 port 37616
May 27 03:30:18.202005 sshd-session[6192]: pam_unix(sshd:session): session closed for user core
May 27 03:30:18.209679 systemd-logind[1551]: Session 21 logged out. Waiting for processes to exit.
May 27 03:30:18.209844 systemd[1]: sshd@23-157.180.65.55:22-139.178.89.65:37616.service: Deactivated successfully.
May 27 03:30:18.213848 systemd[1]: session-21.scope: Deactivated successfully.
May 27 03:30:18.217072 systemd-logind[1551]: Removed session 21.
May 27 03:30:21.985485 containerd[1560]: time="2025-05-27T03:30:21.985421003Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ca116aa036eb736891f311278a152a7e5a51b4c41f55c788135f885583d90f6b\" id:\"2cd3dde920a9ebc6a4dd9853da4cba4d2b83dbdb8f23e22d557337e10eedc336\" pid:6219 exited_at:{seconds:1748316621 nanos:985088330}"
May 27 03:30:23.361206 systemd[1]: Started sshd@24-157.180.65.55:22-139.178.89.65:42510.service - OpenSSH per-connection server daemon (139.178.89.65:42510).
May 27 03:30:24.039122 containerd[1560]: time="2025-05-27T03:30:24.039057059Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\""
May 27 03:30:24.340067 containerd[1560]: time="2025-05-27T03:30:24.339800543Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io
May 27 03:30:24.341898 containerd[1560]: time="2025-05-27T03:30:24.341817272Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden"
May 27 03:30:24.342038 containerd[1560]: time="2025-05-27T03:30:24.341934562Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86"
May 27 03:30:24.342335 kubelet[2917]: E0527 03:30:24.342245 2917 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0"
May 27 03:30:24.343020 kubelet[2917]: E0527 03:30:24.342355 2917 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0"
May 27 03:30:24.343020 kubelet[2917]: E0527 03:30:24.342510 2917 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:bee51492bca3428982f094867f4c4710,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6jzjr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-555bcbc6ff-596vx_calico-system(20923581-35ae-477b-83e9-35d75acd3c66): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError"
May 27 03:30:24.345663 containerd[1560]: time="2025-05-27T03:30:24.345587136Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\""
May 27 03:30:24.384069 sshd[6230]: Accepted publickey for core from 139.178.89.65 port 42510 ssh2: RSA SHA256:TYEcoyDNyh3JcDi4HtBO5FTvj/3v8ZbDC5jXQ/LuvgI
May 27 03:30:24.388059 sshd-session[6230]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:30:24.400236 systemd-logind[1551]: New session 22 of user core.
May 27 03:30:24.406976 systemd[1]: Started session-22.scope - Session 22 of User core.
May 27 03:30:24.655568 containerd[1560]: time="2025-05-27T03:30:24.655480605Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io
May 27 03:30:24.656914 containerd[1560]: time="2025-05-27T03:30:24.656851894Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden"
May 27 03:30:24.657290 containerd[1560]: time="2025-05-27T03:30:24.656989712Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86"
May 27 03:30:24.657466 kubelet[2917]: E0527 03:30:24.657215 2917 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0"
May 27 03:30:24.657466 kubelet[2917]: E0527 03:30:24.657281 2917 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0"
May 27 03:30:24.658211 kubelet[2917]: E0527 03:30:24.657465 2917 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6jzjr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-555bcbc6ff-596vx_calico-system(20923581-35ae-477b-83e9-35d75acd3c66): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError"
May 27 03:30:24.658924 kubelet[2917]: E0527 03:30:24.658847 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-555bcbc6ff-596vx" podUID="20923581-35ae-477b-83e9-35d75acd3c66"
May 27 03:30:25.211765 sshd[6232]: Connection closed by 139.178.89.65 port 42510
May 27 03:30:25.212761 sshd-session[6230]: pam_unix(sshd:session): session closed for user core
May 27 03:30:25.219847 systemd[1]: sshd@24-157.180.65.55:22-139.178.89.65:42510.service: Deactivated successfully.
May 27 03:30:25.223291 systemd[1]: session-22.scope: Deactivated successfully.
May 27 03:30:25.225357 systemd-logind[1551]: Session 22 logged out. Waiting for processes to exit.
May 27 03:30:25.228426 systemd-logind[1551]: Removed session 22.
May 27 03:30:27.286558 containerd[1560]: time="2025-05-27T03:30:27.286439898Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6987d0fe81036d50ab8ef5c9659e8d4befdda41d53233178124e5de21b1dc1b4\" id:\"662699e33ae4888a3a0996991b8c654ccc1bfeddc69795487deaa40e9c0d2a9b\" pid:6256 exited_at:{seconds:1748316627 nanos:286200479}"
May 27 03:30:30.385851 systemd[1]: Started sshd@25-157.180.65.55:22-139.178.89.65:42518.service - OpenSSH per-connection server daemon (139.178.89.65:42518).
May 27 03:30:31.021573 containerd[1560]: time="2025-05-27T03:30:31.021518307Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\""
May 27 03:30:31.322527 containerd[1560]: time="2025-05-27T03:30:31.322243025Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io
May 27 03:30:31.324237 containerd[1560]: time="2025-05-27T03:30:31.324126706Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86"
May 27 03:30:31.324237 containerd[1560]: time="2025-05-27T03:30:31.324200013Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden"
May 27 03:30:31.324711 kubelet[2917]: E0527 03:30:31.324605 2917 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0"
May 27 03:30:31.325409 kubelet[2917]: E0527 03:30:31.324719 2917 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0"
May 27 03:30:31.325409 kubelet[2917]: E0527 03:30:31.325095 2917 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f7drb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-8f77d7b6c-xwqrr_calico-system(9c59cb9e-2eb7-4e20-b889-53cd0bb9e4ed): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError"
May 27 03:30:31.327003 kubelet[2917]: E0527 03:30:31.326896 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-8f77d7b6c-xwqrr" podUID="9c59cb9e-2eb7-4e20-b889-53cd0bb9e4ed"
May 27 03:30:31.410963 sshd[6266]: Accepted publickey for core from 139.178.89.65 port 42518 ssh2: RSA SHA256:TYEcoyDNyh3JcDi4HtBO5FTvj/3v8ZbDC5jXQ/LuvgI
May 27 03:30:31.412882 sshd-session[6266]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:30:31.418902 systemd-logind[1551]: New session 23 of user core.
May 27 03:30:31.426552 systemd[1]: Started session-23.scope - Session 23 of User core.
May 27 03:30:32.209355 sshd[6270]: Connection closed by 139.178.89.65 port 42518
May 27 03:30:32.210000 sshd-session[6266]: pam_unix(sshd:session): session closed for user core
May 27 03:30:32.215250 systemd[1]: sshd@25-157.180.65.55:22-139.178.89.65:42518.service: Deactivated successfully.
May 27 03:30:32.218098 systemd[1]: session-23.scope: Deactivated successfully.
May 27 03:30:32.221564 systemd-logind[1551]: Session 23 logged out. Waiting for processes to exit.
May 27 03:30:32.223854 systemd-logind[1551]: Removed session 23.
May 27 03:30:37.383373 systemd[1]: Started sshd@26-157.180.65.55:22-139.178.89.65:34992.service - OpenSSH per-connection server daemon (139.178.89.65:34992).
May 27 03:30:38.388592 sshd[6283]: Accepted publickey for core from 139.178.89.65 port 34992 ssh2: RSA SHA256:TYEcoyDNyh3JcDi4HtBO5FTvj/3v8ZbDC5jXQ/LuvgI
May 27 03:30:38.391168 sshd-session[6283]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:30:38.400556 systemd-logind[1551]: New session 24 of user core.
May 27 03:30:38.406529 systemd[1]: Started session-24.scope - Session 24 of User core.
May 27 03:30:39.026245 kubelet[2917]: E0527 03:30:39.026171 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-555bcbc6ff-596vx" podUID="20923581-35ae-477b-83e9-35d75acd3c66"
May 27 03:30:39.181613 sshd[6285]: Connection closed by 139.178.89.65 port 34992
May 27 03:30:39.182744 sshd-session[6283]: pam_unix(sshd:session): session closed for user core
May 27 03:30:39.189571 systemd[1]: sshd@26-157.180.65.55:22-139.178.89.65:34992.service: Deactivated successfully.
May 27 03:30:39.193562 systemd[1]: session-24.scope: Deactivated successfully.
May 27 03:30:39.195937 systemd-logind[1551]: Session 24 logged out. Waiting for processes to exit.
May 27 03:30:39.199872 systemd-logind[1551]: Removed session 24.
May 27 03:30:44.358989 systemd[1]: Started sshd@27-157.180.65.55:22-139.178.89.65:51454.service - OpenSSH per-connection server daemon (139.178.89.65:51454).
May 27 03:30:45.364137 sshd[6298]: Accepted publickey for core from 139.178.89.65 port 51454 ssh2: RSA SHA256:TYEcoyDNyh3JcDi4HtBO5FTvj/3v8ZbDC5jXQ/LuvgI
May 27 03:30:45.366389 sshd-session[6298]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:30:45.378296 systemd-logind[1551]: New session 25 of user core.
May 27 03:30:45.381724 systemd[1]: Started session-25.scope - Session 25 of User core.
May 27 03:30:46.021404 kubelet[2917]: E0527 03:30:46.020933 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-xwqrr" podUID="9c59cb9e-2eb7-4e20-b889-53cd0bb9e4ed"
May 27 03:30:46.127344 sshd[6300]: Connection closed by 139.178.89.65 port 51454
May 27 03:30:46.128164 sshd-session[6298]: pam_unix(sshd:session): session closed for user core
May 27 03:30:46.132784 systemd[1]: sshd@27-157.180.65.55:22-139.178.89.65:51454.service: Deactivated successfully.
May 27 03:30:46.135809 systemd[1]: session-25.scope: Deactivated successfully.
May 27 03:30:46.138179 systemd-logind[1551]: Session 25 logged out. Waiting for processes to exit.
May 27 03:30:46.140087 systemd-logind[1551]: Removed session 25.
May 27 03:30:51.297391 systemd[1]: Started sshd@28-157.180.65.55:22-139.178.89.65:51470.service - OpenSSH per-connection server daemon (139.178.89.65:51470).
May 27 03:30:51.878777 containerd[1560]: time="2025-05-27T03:30:51.878696681Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ca116aa036eb736891f311278a152a7e5a51b4c41f55c788135f885583d90f6b\" id:\"236f80203bbb22f3eb4ac03427e4a9a242cebdd283991a2ab3fbe79a0ce7f45e\" pid:6327 exited_at:{seconds:1748316651 nanos:878108808}"
May 27 03:30:52.282792 sshd[6313]: Accepted publickey for core from 139.178.89.65 port 51470 ssh2: RSA SHA256:TYEcoyDNyh3JcDi4HtBO5FTvj/3v8ZbDC5jXQ/LuvgI
May 27 03:30:52.283995 sshd-session[6313]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:30:52.291671 systemd-logind[1551]: New session 26 of user core.
May 27 03:30:52.294434 systemd[1]: Started session-26.scope - Session 26 of User core.
May 27 03:30:53.021970 kubelet[2917]: E0527 03:30:53.021399 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-555bcbc6ff-596vx" podUID="20923581-35ae-477b-83e9-35d75acd3c66"
May 27 03:30:53.055779 sshd[6338]: Connection closed by 139.178.89.65 port 51470
May 27 03:30:53.056513 sshd-session[6313]: pam_unix(sshd:session): session closed for user core
May 27 03:30:53.061174 systemd-logind[1551]: Session 26 logged out. Waiting for processes to exit.
May 27 03:30:53.061781 systemd[1]: sshd@28-157.180.65.55:22-139.178.89.65:51470.service: Deactivated successfully.
May 27 03:30:53.064298 systemd[1]: session-26.scope: Deactivated successfully.
May 27 03:30:53.066591 systemd-logind[1551]: Removed session 26.
May 27 03:30:55.414088 containerd[1560]: time="2025-05-27T03:30:55.413975015Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6987d0fe81036d50ab8ef5c9659e8d4befdda41d53233178124e5de21b1dc1b4\" id:\"964880d1d6642e6cc1395f8f444ada2d7242fe9fd3336cd58017664763a4f05b\" pid:6364 exited_at:{seconds:1748316655 nanos:413661688}"
May 27 03:30:57.304392 containerd[1560]: time="2025-05-27T03:30:57.304105876Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6987d0fe81036d50ab8ef5c9659e8d4befdda41d53233178124e5de21b1dc1b4\" id:\"96b751f9e349e4d0aa50b47ccf4c61c638e612be4d8d0ffa98108ceae639687b\" pid:6400 exited_at:{seconds:1748316657 nanos:303031342}"
May 27 03:30:58.232004 systemd[1]: Started sshd@29-157.180.65.55:22-139.178.89.65:37538.service - OpenSSH per-connection server daemon (139.178.89.65:37538).
May 27 03:30:59.245384 sshd[6411]: Accepted publickey for core from 139.178.89.65 port 37538 ssh2: RSA SHA256:TYEcoyDNyh3JcDi4HtBO5FTvj/3v8ZbDC5jXQ/LuvgI
May 27 03:30:59.251029 sshd-session[6411]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:30:59.264789 systemd-logind[1551]: New session 27 of user core.
May 27 03:30:59.279738 systemd[1]: Started session-27.scope - Session 27 of User core.
May 27 03:31:00.010885 sshd[6420]: Connection closed by 139.178.89.65 port 37538
May 27 03:31:00.011836 sshd-session[6411]: pam_unix(sshd:session): session closed for user core
May 27 03:31:00.018573 systemd[1]: sshd@29-157.180.65.55:22-139.178.89.65:37538.service: Deactivated successfully.
May 27 03:31:00.019256 systemd-logind[1551]: Session 27 logged out. Waiting for processes to exit.
May 27 03:31:00.024991 systemd[1]: session-27.scope: Deactivated successfully.
May 27 03:31:00.028538 systemd-logind[1551]: Removed session 27.
May 27 03:31:01.020850 kubelet[2917]: E0527 03:31:01.020725 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-xwqrr" podUID="9c59cb9e-2eb7-4e20-b889-53cd0bb9e4ed"
May 27 03:31:05.195247 systemd[1]: Started sshd@30-157.180.65.55:22-139.178.89.65:53106.service - OpenSSH per-connection server daemon (139.178.89.65:53106).
May 27 03:31:06.021943 kubelet[2917]: E0527 03:31:06.021890 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-555bcbc6ff-596vx" podUID="20923581-35ae-477b-83e9-35d75acd3c66"
May 27 03:31:06.191472 sshd[6435]: Accepted publickey for core from 139.178.89.65 port 53106 ssh2: RSA SHA256:TYEcoyDNyh3JcDi4HtBO5FTvj/3v8ZbDC5jXQ/LuvgI
May 27 03:31:06.193931 sshd-session[6435]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:31:06.201929 systemd-logind[1551]: New session 28 of user core.
May 27 03:31:06.207599 systemd[1]: Started session-28.scope - Session 28 of User core.
May 27 03:31:07.002637 sshd[6437]: Connection closed by 139.178.89.65 port 53106
May 27 03:31:07.005636 sshd-session[6435]: pam_unix(sshd:session): session closed for user core
May 27 03:31:07.014696 systemd[1]: sshd@30-157.180.65.55:22-139.178.89.65:53106.service: Deactivated successfully.
May 27 03:31:07.018484 systemd[1]: session-28.scope: Deactivated successfully.
May 27 03:31:07.020126 systemd-logind[1551]: Session 28 logged out. Waiting for processes to exit.
May 27 03:31:07.022387 systemd-logind[1551]: Removed session 28.
May 27 03:31:12.183079 systemd[1]: Started sshd@31-157.180.65.55:22-139.178.89.65:53116.service - OpenSSH per-connection server daemon (139.178.89.65:53116).
May 27 03:31:13.225718 sshd[6451]: Accepted publickey for core from 139.178.89.65 port 53116 ssh2: RSA SHA256:TYEcoyDNyh3JcDi4HtBO5FTvj/3v8ZbDC5jXQ/LuvgI
May 27 03:31:13.228580 sshd-session[6451]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:31:13.235980 systemd-logind[1551]: New session 29 of user core.
May 27 03:31:13.245737 systemd[1]: Started session-29.scope - Session 29 of User core.
May 27 03:31:14.014721 sshd[6453]: Connection closed by 139.178.89.65 port 53116
May 27 03:31:14.015890 sshd-session[6451]: pam_unix(sshd:session): session closed for user core
May 27 03:31:14.022022 kubelet[2917]: E0527 03:31:14.021929 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-xwqrr" podUID="9c59cb9e-2eb7-4e20-b889-53cd0bb9e4ed"
May 27 03:31:14.027537 systemd[1]: sshd@31-157.180.65.55:22-139.178.89.65:53116.service: Deactivated successfully.
May 27 03:31:14.034557 systemd[1]: session-29.scope: Deactivated successfully.
May 27 03:31:14.037583 systemd-logind[1551]: Session 29 logged out. Waiting for processes to exit.
May 27 03:31:14.040670 systemd-logind[1551]: Removed session 29.
May 27 03:31:18.022268 kubelet[2917]: E0527 03:31:18.022193 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-555bcbc6ff-596vx" podUID="20923581-35ae-477b-83e9-35d75acd3c66"
May 27 03:31:19.193831 systemd[1]: Started sshd@32-157.180.65.55:22-139.178.89.65:58372.service - OpenSSH per-connection server daemon (139.178.89.65:58372).
May 27 03:31:20.186232 sshd[6466]: Accepted publickey for core from 139.178.89.65 port 58372 ssh2: RSA SHA256:TYEcoyDNyh3JcDi4HtBO5FTvj/3v8ZbDC5jXQ/LuvgI
May 27 03:31:20.187620 sshd-session[6466]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:31:20.192979 systemd-logind[1551]: New session 30 of user core.
May 27 03:31:20.197521 systemd[1]: Started session-30.scope - Session 30 of User core.
May 27 03:31:21.024121 sshd[6468]: Connection closed by 139.178.89.65 port 58372
May 27 03:31:21.025708 sshd-session[6466]: pam_unix(sshd:session): session closed for user core
May 27 03:31:21.042381 systemd[1]: sshd@32-157.180.65.55:22-139.178.89.65:58372.service: Deactivated successfully.
May 27 03:31:21.047045 systemd[1]: session-30.scope: Deactivated successfully.
May 27 03:31:21.051560 systemd-logind[1551]: Session 30 logged out. Waiting for processes to exit.
May 27 03:31:21.053948 systemd-logind[1551]: Removed session 30.
May 27 03:31:21.901169 containerd[1560]: time="2025-05-27T03:31:21.900962240Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ca116aa036eb736891f311278a152a7e5a51b4c41f55c788135f885583d90f6b\" id:\"a812578dae164c86287261a6f40f0b922645bc058b8972f2fbc146d254ba451e\" pid:6493 exited_at:{seconds:1748316681 nanos:900496186}"
May 27 03:31:26.021183 kubelet[2917]: E0527 03:31:26.021124 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-xwqrr" podUID="9c59cb9e-2eb7-4e20-b889-53cd0bb9e4ed"
May 27 03:31:26.203504 systemd[1]: Started sshd@33-157.180.65.55:22-139.178.89.65:59708.service - OpenSSH per-connection server daemon (139.178.89.65:59708).
May 27 03:31:27.239560 sshd[6507]: Accepted publickey for core from 139.178.89.65 port 59708 ssh2: RSA SHA256:TYEcoyDNyh3JcDi4HtBO5FTvj/3v8ZbDC5jXQ/LuvgI
May 27 03:31:27.243063 sshd-session[6507]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:31:27.253075 systemd-logind[1551]: New session 31 of user core.
May 27 03:31:27.261486 systemd[1]: Started session-31.scope - Session 31 of User core.
May 27 03:31:27.325612 containerd[1560]: time="2025-05-27T03:31:27.325513439Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6987d0fe81036d50ab8ef5c9659e8d4befdda41d53233178124e5de21b1dc1b4\" id:\"d421ed5607f55caf4fd2d5aea1e26f9b87fb38beb4c89f6feabd4be2d63e9064\" pid:6521 exited_at:{seconds:1748316687 nanos:324511009}"
May 27 03:31:28.101392 sshd[6526]: Connection closed by 139.178.89.65 port 59708
May 27 03:31:28.107064 sshd-session[6507]: pam_unix(sshd:session): session closed for user core
May 27 03:31:28.119959 systemd[1]: sshd@33-157.180.65.55:22-139.178.89.65:59708.service: Deactivated successfully.
May 27 03:31:28.123095 systemd[1]: session-31.scope: Deactivated successfully.
May 27 03:31:28.124727 systemd-logind[1551]: Session 31 logged out. Waiting for processes to exit.
May 27 03:31:28.127453 systemd-logind[1551]: Removed session 31.
May 27 03:31:29.101516 update_engine[1552]: I20250527 03:31:29.101339 1552 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
May 27 03:31:29.101516 update_engine[1552]: I20250527 03:31:29.101415 1552 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
May 27 03:31:29.104116 update_engine[1552]: I20250527 03:31:29.104067 1552 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
May 27 03:31:29.104936 update_engine[1552]: I20250527 03:31:29.104894 1552 omaha_request_params.cc:62] Current group set to alpha
May 27 03:31:29.106335 update_engine[1552]: I20250527 03:31:29.105096 1552 update_attempter.cc:499] Already updated boot flags. Skipping.
May 27 03:31:29.106335 update_engine[1552]: I20250527 03:31:29.105115 1552 update_attempter.cc:643] Scheduling an action processor start.
May 27 03:31:29.106335 update_engine[1552]: I20250527 03:31:29.105141 1552 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
May 27 03:31:29.106335 update_engine[1552]: I20250527 03:31:29.105194 1552 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
May 27 03:31:29.106335 update_engine[1552]: I20250527 03:31:29.105269 1552 omaha_request_action.cc:271] Posting an Omaha request to disabled
May 27 03:31:29.106335 update_engine[1552]: I20250527 03:31:29.105279 1552 omaha_request_action.cc:272] Request:
May 27 03:31:29.106335 update_engine[1552]:
May 27 03:31:29.106335 update_engine[1552]:
May 27 03:31:29.106335 update_engine[1552]:
May 27 03:31:29.106335 update_engine[1552]:
May 27 03:31:29.106335 update_engine[1552]:
May 27 03:31:29.106335 update_engine[1552]:
May 27 03:31:29.106335 update_engine[1552]:
May 27 03:31:29.106335 update_engine[1552]:
May 27 03:31:29.106335 update_engine[1552]: I20250527 03:31:29.105288 1552 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
May 27 03:31:29.129121 locksmithd[1598]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
May 27 03:31:29.135873 update_engine[1552]: I20250527 03:31:29.135375 1552 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
May 27 03:31:29.137889 update_engine[1552]: I20250527 03:31:29.135793 1552 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
May 27 03:31:29.137968 update_engine[1552]: E20250527 03:31:29.137910 1552 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
May 27 03:31:29.138057 update_engine[1552]: I20250527 03:31:29.138022 1552 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
May 27 03:31:30.037009 kubelet[2917]: E0527 03:31:30.035513 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-555bcbc6ff-596vx" podUID="20923581-35ae-477b-83e9-35d75acd3c66"
May 27 03:31:33.275342 systemd[1]: Started sshd@34-157.180.65.55:22-139.178.89.65:40670.service - OpenSSH per-connection server daemon (139.178.89.65:40670).
May 27 03:31:34.303142 sshd[6545]: Accepted publickey for core from 139.178.89.65 port 40670 ssh2: RSA SHA256:TYEcoyDNyh3JcDi4HtBO5FTvj/3v8ZbDC5jXQ/LuvgI
May 27 03:31:34.305915 sshd-session[6545]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:31:34.314821 systemd-logind[1551]: New session 32 of user core.
May 27 03:31:34.320624 systemd[1]: Started session-32.scope - Session 32 of User core.
May 27 03:31:35.231503 sshd[6547]: Connection closed by 139.178.89.65 port 40670
May 27 03:31:35.232426 sshd-session[6545]: pam_unix(sshd:session): session closed for user core
May 27 03:31:35.237182 systemd[1]: sshd@34-157.180.65.55:22-139.178.89.65:40670.service: Deactivated successfully.
May 27 03:31:35.239762 systemd[1]: session-32.scope: Deactivated successfully.
May 27 03:31:35.241169 systemd-logind[1551]: Session 32 logged out. Waiting for processes to exit.
May 27 03:31:35.243669 systemd-logind[1551]: Removed session 32.
May 27 03:31:39.020276 kubelet[2917]: E0527 03:31:39.020218 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-xwqrr" podUID="9c59cb9e-2eb7-4e20-b889-53cd0bb9e4ed"
May 27 03:31:39.042937 update_engine[1552]: I20250527 03:31:39.042815 1552 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
May 27 03:31:39.043518 update_engine[1552]: I20250527 03:31:39.043208 1552 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
May 27 03:31:39.043812 update_engine[1552]: I20250527 03:31:39.043751 1552 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
May 27 03:31:39.044346 update_engine[1552]: E20250527 03:31:39.044255 1552 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
May 27 03:31:39.044410 update_engine[1552]: I20250527 03:31:39.044364 1552 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
May 27 03:31:40.403549 systemd[1]: Started sshd@35-157.180.65.55:22-139.178.89.65:40676.service - OpenSSH per-connection server daemon (139.178.89.65:40676).
May 27 03:31:41.022420 kubelet[2917]: E0527 03:31:41.022143 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-555bcbc6ff-596vx" podUID="20923581-35ae-477b-83e9-35d75acd3c66"
May 27 03:31:41.422255 sshd[6560]: Accepted publickey for core from 139.178.89.65 port 40676 ssh2: RSA SHA256:TYEcoyDNyh3JcDi4HtBO5FTvj/3v8ZbDC5jXQ/LuvgI
May 27 03:31:41.424701 sshd-session[6560]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:31:41.434112 systemd-logind[1551]: New session 33 of user core.
May 27 03:31:41.442544 systemd[1]: Started session-33.scope - Session 33 of User core.
May 27 03:31:42.215038 sshd[6563]: Connection closed by 139.178.89.65 port 40676
May 27 03:31:42.216063 sshd-session[6560]: pam_unix(sshd:session): session closed for user core
May 27 03:31:42.223379 systemd[1]: sshd@35-157.180.65.55:22-139.178.89.65:40676.service: Deactivated successfully.
May 27 03:31:42.226656 systemd[1]: session-33.scope: Deactivated successfully.
May 27 03:31:42.228838 systemd-logind[1551]: Session 33 logged out. Waiting for processes to exit.
May 27 03:31:42.232445 systemd-logind[1551]: Removed session 33.
May 27 03:31:47.391668 systemd[1]: Started sshd@36-157.180.65.55:22-139.178.89.65:58806.service - OpenSSH per-connection server daemon (139.178.89.65:58806).
May 27 03:31:48.392021 sshd[6576]: Accepted publickey for core from 139.178.89.65 port 58806 ssh2: RSA SHA256:TYEcoyDNyh3JcDi4HtBO5FTvj/3v8ZbDC5jXQ/LuvgI
May 27 03:31:48.393057 sshd-session[6576]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:31:48.398603 systemd-logind[1551]: New session 34 of user core.
May 27 03:31:48.410597 systemd[1]: Started session-34.scope - Session 34 of User core.
May 27 03:31:49.039494 update_engine[1552]: I20250527 03:31:49.039389 1552 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
May 27 03:31:49.040067 update_engine[1552]: I20250527 03:31:49.039721 1552 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
May 27 03:31:49.040142 update_engine[1552]: I20250527 03:31:49.040119 1552 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
May 27 03:31:49.040589 update_engine[1552]: E20250527 03:31:49.040537 1552 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
May 27 03:31:49.040667 update_engine[1552]: I20250527 03:31:49.040601 1552 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
May 27 03:31:49.191588 sshd[6578]: Connection closed by 139.178.89.65 port 58806
May 27 03:31:49.192529 sshd-session[6576]: pam_unix(sshd:session): session closed for user core
May 27 03:31:49.198573 systemd[1]: sshd@36-157.180.65.55:22-139.178.89.65:58806.service: Deactivated successfully.
May 27 03:31:49.202657 systemd[1]: session-34.scope: Deactivated successfully.
May 27 03:31:49.204163 systemd-logind[1551]: Session 34 logged out. Waiting for processes to exit.
May 27 03:31:49.207351 systemd-logind[1551]: Removed session 34.
May 27 03:31:50.020829 kubelet[2917]: E0527 03:31:50.020330 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-xwqrr" podUID="9c59cb9e-2eb7-4e20-b889-53cd0bb9e4ed"
May 27 03:31:51.859637 containerd[1560]: time="2025-05-27T03:31:51.859586607Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ca116aa036eb736891f311278a152a7e5a51b4c41f55c788135f885583d90f6b\" id:\"51b9a0a7bdb0e3752a920f05f50b641804d666cf85da543dfcb35ee760ddb948\" pid:6604 exited_at:{seconds:1748316711 nanos:858957607}"
May 27 03:31:52.023356 kubelet[2917]: E0527 03:31:52.022838 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-555bcbc6ff-596vx" podUID="20923581-35ae-477b-83e9-35d75acd3c66"
May 27 03:31:54.359792 systemd[1]: Started sshd@37-157.180.65.55:22-139.178.89.65:56282.service - OpenSSH per-connection server daemon (139.178.89.65:56282).
May 27 03:31:55.342376 sshd[6619]: Accepted publickey for core from 139.178.89.65 port 56282 ssh2: RSA SHA256:TYEcoyDNyh3JcDi4HtBO5FTvj/3v8ZbDC5jXQ/LuvgI
May 27 03:31:55.345652 sshd-session[6619]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:31:55.359411 systemd-logind[1551]: New session 35 of user core.
May 27 03:31:55.365794 systemd[1]: Started session-35.scope - Session 35 of User core.
May 27 03:31:55.438360 containerd[1560]: time="2025-05-27T03:31:55.438295265Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6987d0fe81036d50ab8ef5c9659e8d4befdda41d53233178124e5de21b1dc1b4\" id:\"ca1ee4e932013d436c6c4e131b9e2ffa53fdca3420351e84c49c7afb57c715a7\" pid:6636 exited_at:{seconds:1748316715 nanos:437529718}"
May 27 03:31:56.122194 sshd[6630]: Connection closed by 139.178.89.65 port 56282
May 27 03:31:56.121511 sshd-session[6619]: pam_unix(sshd:session): session closed for user core
May 27 03:31:56.124222 systemd-logind[1551]: Session 35 logged out. Waiting for processes to exit.
May 27 03:31:56.125936 systemd[1]: sshd@37-157.180.65.55:22-139.178.89.65:56282.service: Deactivated successfully.
May 27 03:31:56.128062 systemd[1]: session-35.scope: Deactivated successfully.
May 27 03:31:56.130698 systemd-logind[1551]: Removed session 35.
May 27 03:31:57.306885 containerd[1560]: time="2025-05-27T03:31:57.306814455Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6987d0fe81036d50ab8ef5c9659e8d4befdda41d53233178124e5de21b1dc1b4\" id:\"38c28be896b3968ccaba6c712db51c4584ea1e5e826b953d1ae565204cb8ed8f\" pid:6667 exited_at:{seconds:1748316717 nanos:306519061}"
May 27 03:31:59.041722 update_engine[1552]: I20250527 03:31:59.041633 1552 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
May 27 03:31:59.042408 update_engine[1552]: I20250527 03:31:59.041920 1552 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
May 27 03:31:59.042408 update_engine[1552]: I20250527 03:31:59.042192 1552 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
May 27 03:31:59.042706 update_engine[1552]: E20250527 03:31:59.042673 1552 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
May 27 03:31:59.042706 update_engine[1552]: I20250527 03:31:59.042701 1552 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
May 27 03:31:59.042706 update_engine[1552]: I20250527 03:31:59.042708 1552 omaha_request_action.cc:617] Omaha request response:
May 27 03:31:59.042874 update_engine[1552]: E20250527 03:31:59.042778 1552 omaha_request_action.cc:636] Omaha request network transfer failed.
May 27 03:31:59.042874 update_engine[1552]: I20250527 03:31:59.042793 1552 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
May 27 03:31:59.042874 update_engine[1552]: I20250527 03:31:59.042797 1552 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
May 27 03:31:59.042874 update_engine[1552]: I20250527 03:31:59.042801 1552 update_attempter.cc:306] Processing Done.
May 27 03:31:59.042874 update_engine[1552]: E20250527 03:31:59.042814 1552 update_attempter.cc:619] Update failed.
May 27 03:31:59.042874 update_engine[1552]: I20250527 03:31:59.042819 1552 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
May 27 03:31:59.042874 update_engine[1552]: I20250527 03:31:59.042824 1552 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
May 27 03:31:59.042874 update_engine[1552]: I20250527 03:31:59.042829 1552 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
May 27 03:31:59.043682 update_engine[1552]: I20250527 03:31:59.042991 1552 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
May 27 03:31:59.043682 update_engine[1552]: I20250527 03:31:59.043035 1552 omaha_request_action.cc:271] Posting an Omaha request to disabled
May 27 03:31:59.043682 update_engine[1552]: I20250527 03:31:59.043043 1552 omaha_request_action.cc:272] Request:
May 27 03:31:59.043682 update_engine[1552]:
May 27 03:31:59.043682 update_engine[1552]:
May 27 03:31:59.043682 update_engine[1552]:
May 27 03:31:59.043682 update_engine[1552]:
May 27 03:31:59.043682 update_engine[1552]:
May 27 03:31:59.043682 update_engine[1552]:
May 27 03:31:59.043682 update_engine[1552]: I20250527 03:31:59.043051 1552 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
May 27 03:31:59.043682 update_engine[1552]: I20250527 03:31:59.043144 1552 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
May 27 03:31:59.043682 update_engine[1552]: I20250527 03:31:59.043553 1552 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
May 27 03:31:59.044412 locksmithd[1598]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
May 27 03:31:59.044412 locksmithd[1598]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
May 27 03:31:59.044784 update_engine[1552]: E20250527 03:31:59.043702 1552 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
May 27 03:31:59.044784 update_engine[1552]: I20250527 03:31:59.043733 1552 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
May 27 03:31:59.044784 update_engine[1552]: I20250527 03:31:59.043740 1552 omaha_request_action.cc:617] Omaha request response:
May 27 03:31:59.044784 update_engine[1552]: I20250527 03:31:59.043746 1552 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
May 27 03:31:59.044784 update_engine[1552]: I20250527 03:31:59.043751 1552 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
May 27 03:31:59.044784 update_engine[1552]: I20250527 03:31:59.043756 1552 update_attempter.cc:306] Processing Done.
May 27 03:31:59.044784 update_engine[1552]: I20250527 03:31:59.043762 1552 update_attempter.cc:310] Error event sent.
May 27 03:31:59.044784 update_engine[1552]: I20250527 03:31:59.043771 1552 update_check_scheduler.cc:74] Next update check in 42m5s
May 27 03:32:01.298652 systemd[1]: Started sshd@38-157.180.65.55:22-139.178.89.65:56294.service - OpenSSH per-connection server daemon (139.178.89.65:56294).
May 27 03:32:02.305581 sshd[6679]: Accepted publickey for core from 139.178.89.65 port 56294 ssh2: RSA SHA256:TYEcoyDNyh3JcDi4HtBO5FTvj/3v8ZbDC5jXQ/LuvgI
May 27 03:32:02.307838 sshd-session[6679]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:32:02.317379 systemd-logind[1551]: New session 36 of user core.
May 27 03:32:02.322540 systemd[1]: Started session-36.scope - Session 36 of User core.
May 27 03:32:03.020952 kubelet[2917]: E0527 03:32:03.020755 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-xwqrr" podUID="9c59cb9e-2eb7-4e20-b889-53cd0bb9e4ed"
May 27 03:32:03.111473 sshd[6681]: Connection closed by 139.178.89.65 port 56294
May 27 03:32:03.112584 sshd-session[6679]: pam_unix(sshd:session): session closed for user core
May 27 03:32:03.119423 systemd-logind[1551]: Session 36 logged out. Waiting for processes to exit.
May 27 03:32:03.119715 systemd[1]: sshd@38-157.180.65.55:22-139.178.89.65:56294.service: Deactivated successfully.
May 27 03:32:03.124119 systemd[1]: session-36.scope: Deactivated successfully.
May 27 03:32:03.127728 systemd-logind[1551]: Removed session 36.
May 27 03:32:05.034425 kubelet[2917]: E0527 03:32:05.034019 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-555bcbc6ff-596vx" podUID="20923581-35ae-477b-83e9-35d75acd3c66"
May 27 03:32:08.297781 systemd[1]: Started sshd@39-157.180.65.55:22-139.178.89.65:57668.service - OpenSSH per-connection server daemon (139.178.89.65:57668).
May 27 03:32:09.332477 sshd[6694]: Accepted publickey for core from 139.178.89.65 port 57668 ssh2: RSA SHA256:TYEcoyDNyh3JcDi4HtBO5FTvj/3v8ZbDC5jXQ/LuvgI
May 27 03:32:09.335169 sshd-session[6694]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:32:09.344081 systemd-logind[1551]: New session 37 of user core.
May 27 03:32:09.352583 systemd[1]: Started session-37.scope - Session 37 of User core.
May 27 03:32:10.233898 sshd[6696]: Connection closed by 139.178.89.65 port 57668
May 27 03:32:10.235263 sshd-session[6694]: pam_unix(sshd:session): session closed for user core
May 27 03:32:10.238958 systemd-logind[1551]: Session 37 logged out. Waiting for processes to exit.
May 27 03:32:10.239138 systemd[1]: sshd@39-157.180.65.55:22-139.178.89.65:57668.service: Deactivated successfully.
May 27 03:32:10.241444 systemd[1]: session-37.scope: Deactivated successfully.
May 27 03:32:10.243923 systemd-logind[1551]: Removed session 37.
May 27 03:32:15.020518 kubelet[2917]: E0527 03:32:15.020441 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-xwqrr" podUID="9c59cb9e-2eb7-4e20-b889-53cd0bb9e4ed"
May 27 03:32:15.411786 systemd[1]: Started sshd@40-157.180.65.55:22-139.178.89.65:42822.service - OpenSSH per-connection server daemon (139.178.89.65:42822).
May 27 03:32:16.443958 sshd[6710]: Accepted publickey for core from 139.178.89.65 port 42822 ssh2: RSA SHA256:TYEcoyDNyh3JcDi4HtBO5FTvj/3v8ZbDC5jXQ/LuvgI
May 27 03:32:16.446341 sshd-session[6710]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:32:16.457291 systemd-logind[1551]: New session 38 of user core.
May 27 03:32:16.464558 systemd[1]: Started session-38.scope - Session 38 of User core.
May 27 03:32:17.407671 sshd[6712]: Connection closed by 139.178.89.65 port 42822 May 27 03:32:17.412038 sshd-session[6710]: pam_unix(sshd:session): session closed for user core May 27 03:32:17.433500 systemd[1]: sshd@40-157.180.65.55:22-139.178.89.65:42822.service: Deactivated successfully. May 27 03:32:17.437597 systemd[1]: session-38.scope: Deactivated successfully. May 27 03:32:17.440530 systemd-logind[1551]: Session 38 logged out. Waiting for processes to exit. May 27 03:32:17.445499 systemd-logind[1551]: Removed session 38. May 27 03:32:19.031506 kubelet[2917]: E0527 03:32:19.031420 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-555bcbc6ff-596vx" podUID="20923581-35ae-477b-83e9-35d75acd3c66" May 27 03:32:21.901791 containerd[1560]: time="2025-05-27T03:32:21.901594278Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ca116aa036eb736891f311278a152a7e5a51b4c41f55c788135f885583d90f6b\" id:\"d46dbb7b723ea6f03b232b2d2a835fa5cdd9c329f2c8557e9fda1d93bc03d325\" pid:6736 exited_at:{seconds:1748316741 nanos:901030620}" May 27 03:32:22.580603 systemd[1]: Started sshd@41-157.180.65.55:22-139.178.89.65:42824.service - OpenSSH per-connection server daemon (139.178.89.65:42824). May 27 03:32:23.629266 sshd[6748]: Accepted publickey for core from 139.178.89.65 port 42824 ssh2: RSA SHA256:TYEcoyDNyh3JcDi4HtBO5FTvj/3v8ZbDC5jXQ/LuvgI May 27 03:32:23.632707 sshd-session[6748]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:32:23.641077 systemd-logind[1551]: New session 39 of user core. May 27 03:32:23.648608 systemd[1]: Started session-39.scope - Session 39 of User core. 
May 27 03:32:24.494485 sshd[6750]: Connection closed by 139.178.89.65 port 42824 May 27 03:32:24.495233 sshd-session[6748]: pam_unix(sshd:session): session closed for user core May 27 03:32:24.500110 systemd[1]: sshd@41-157.180.65.55:22-139.178.89.65:42824.service: Deactivated successfully. May 27 03:32:24.504395 systemd[1]: session-39.scope: Deactivated successfully. May 27 03:32:24.506883 systemd-logind[1551]: Session 39 logged out. Waiting for processes to exit. May 27 03:32:24.510119 systemd-logind[1551]: Removed session 39. May 27 03:32:27.022290 kubelet[2917]: E0527 03:32:27.022094 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-xwqrr" podUID="9c59cb9e-2eb7-4e20-b889-53cd0bb9e4ed" May 27 03:32:27.336416 containerd[1560]: time="2025-05-27T03:32:27.336256288Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6987d0fe81036d50ab8ef5c9659e8d4befdda41d53233178124e5de21b1dc1b4\" id:\"89ca64ecd468af0795b8ab0890daa62dd3d219221c9d298d7fdb08f7b6653c73\" pid:6775 exited_at:{seconds:1748316747 nanos:335858662}" May 27 03:32:29.669348 systemd[1]: Started sshd@42-157.180.65.55:22-139.178.89.65:58470.service - OpenSSH per-connection server daemon (139.178.89.65:58470). 
May 27 03:32:30.033152 kubelet[2917]: E0527 03:32:30.032615 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-555bcbc6ff-596vx" podUID="20923581-35ae-477b-83e9-35d75acd3c66" May 27 03:32:30.657340 sshd[6792]: Accepted publickey for core from 139.178.89.65 port 58470 ssh2: RSA SHA256:TYEcoyDNyh3JcDi4HtBO5FTvj/3v8ZbDC5jXQ/LuvgI May 27 03:32:30.659111 sshd-session[6792]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:32:30.665376 systemd-logind[1551]: New session 40 of user core. May 27 03:32:30.671605 systemd[1]: Started session-40.scope - Session 40 of User core. May 27 03:32:31.571500 sshd[6794]: Connection closed by 139.178.89.65 port 58470 May 27 03:32:31.578118 sshd-session[6792]: pam_unix(sshd:session): session closed for user core May 27 03:32:31.586086 systemd[1]: sshd@42-157.180.65.55:22-139.178.89.65:58470.service: Deactivated successfully. May 27 03:32:31.586708 systemd-logind[1551]: Session 40 logged out. Waiting for processes to exit. May 27 03:32:31.590809 systemd[1]: session-40.scope: Deactivated successfully. May 27 03:32:31.594141 systemd-logind[1551]: Removed session 40. May 27 03:32:36.745958 systemd[1]: Started sshd@43-157.180.65.55:22-139.178.89.65:41376.service - OpenSSH per-connection server daemon (139.178.89.65:41376). May 27 03:32:37.771670 sshd[6823]: Accepted publickey for core from 139.178.89.65 port 41376 ssh2: RSA SHA256:TYEcoyDNyh3JcDi4HtBO5FTvj/3v8ZbDC5jXQ/LuvgI May 27 03:32:37.774835 sshd-session[6823]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:32:37.785084 systemd-logind[1551]: New session 41 of user core. 
May 27 03:32:37.795737 systemd[1]: Started session-41.scope - Session 41 of User core. May 27 03:32:38.723556 sshd[6825]: Connection closed by 139.178.89.65 port 41376 May 27 03:32:38.724227 sshd-session[6823]: pam_unix(sshd:session): session closed for user core May 27 03:32:38.728722 systemd-logind[1551]: Session 41 logged out. Waiting for processes to exit. May 27 03:32:38.729552 systemd[1]: sshd@43-157.180.65.55:22-139.178.89.65:41376.service: Deactivated successfully. May 27 03:32:38.732005 systemd[1]: session-41.scope: Deactivated successfully. May 27 03:32:38.734205 systemd-logind[1551]: Removed session 41. May 27 03:32:42.021353 kubelet[2917]: E0527 03:32:42.020871 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-xwqrr" podUID="9c59cb9e-2eb7-4e20-b889-53cd0bb9e4ed" May 27 03:32:43.020345 kubelet[2917]: E0527 03:32:43.020219 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-555bcbc6ff-596vx" podUID="20923581-35ae-477b-83e9-35d75acd3c66" May 27 03:32:43.896592 systemd[1]: Started sshd@44-157.180.65.55:22-139.178.89.65:53940.service - OpenSSH per-connection server daemon (139.178.89.65:53940). 
May 27 03:32:44.898683 sshd[6838]: Accepted publickey for core from 139.178.89.65 port 53940 ssh2: RSA SHA256:TYEcoyDNyh3JcDi4HtBO5FTvj/3v8ZbDC5jXQ/LuvgI May 27 03:32:44.901655 sshd-session[6838]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:32:44.911741 systemd-logind[1551]: New session 42 of user core. May 27 03:32:44.922784 systemd[1]: Started session-42.scope - Session 42 of User core. May 27 03:32:45.709085 sshd[6840]: Connection closed by 139.178.89.65 port 53940 May 27 03:32:45.710181 sshd-session[6838]: pam_unix(sshd:session): session closed for user core May 27 03:32:45.716168 systemd-logind[1551]: Session 42 logged out. Waiting for processes to exit. May 27 03:32:45.716522 systemd[1]: sshd@44-157.180.65.55:22-139.178.89.65:53940.service: Deactivated successfully. May 27 03:32:45.720087 systemd[1]: session-42.scope: Deactivated successfully. May 27 03:32:45.723919 systemd-logind[1551]: Removed session 42. May 27 03:32:50.875380 systemd[1]: Started sshd@45-157.180.65.55:22-139.178.89.65:53956.service - OpenSSH per-connection server daemon (139.178.89.65:53956). May 27 03:32:51.863394 sshd[6853]: Accepted publickey for core from 139.178.89.65 port 53956 ssh2: RSA SHA256:TYEcoyDNyh3JcDi4HtBO5FTvj/3v8ZbDC5jXQ/LuvgI May 27 03:32:51.866100 sshd-session[6853]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:32:51.874119 systemd-logind[1551]: New session 43 of user core. May 27 03:32:51.882574 systemd[1]: Started session-43.scope - Session 43 of User core. 
May 27 03:32:51.920613 containerd[1560]: time="2025-05-27T03:32:51.920430881Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ca116aa036eb736891f311278a152a7e5a51b4c41f55c788135f885583d90f6b\" id:\"9e39a9e3d93a130ce20f6ef98160fdc86e4387ca5e497fcafb9bd26a67a0417b\" pid:6868 exited_at:{seconds:1748316771 nanos:919924330}" May 27 03:32:52.839441 sshd[6879]: Connection closed by 139.178.89.65 port 53956 May 27 03:32:52.840482 sshd-session[6853]: pam_unix(sshd:session): session closed for user core May 27 03:32:52.844933 systemd[1]: sshd@45-157.180.65.55:22-139.178.89.65:53956.service: Deactivated successfully. May 27 03:32:52.848012 systemd[1]: session-43.scope: Deactivated successfully. May 27 03:32:52.849107 systemd-logind[1551]: Session 43 logged out. Waiting for processes to exit. May 27 03:32:52.851275 systemd-logind[1551]: Removed session 43. May 27 03:32:53.006724 systemd[1]: Started sshd@46-157.180.65.55:22-139.178.89.65:53966.service - OpenSSH per-connection server daemon (139.178.89.65:53966). May 27 03:32:53.021166 kubelet[2917]: E0527 03:32:53.021106 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-xwqrr" podUID="9c59cb9e-2eb7-4e20-b889-53cd0bb9e4ed" May 27 03:32:53.998754 sshd[6892]: Accepted publickey for core from 139.178.89.65 port 53966 ssh2: RSA SHA256:TYEcoyDNyh3JcDi4HtBO5FTvj/3v8ZbDC5jXQ/LuvgI May 27 03:32:54.000456 sshd-session[6892]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:32:54.006052 systemd-logind[1551]: New session 44 of user core. May 27 03:32:54.011535 systemd[1]: Started session-44.scope - Session 44 of User core. 
May 27 03:32:54.890210 sshd[6894]: Connection closed by 139.178.89.65 port 53966 May 27 03:32:54.893296 sshd-session[6892]: pam_unix(sshd:session): session closed for user core May 27 03:32:54.899191 systemd[1]: sshd@46-157.180.65.55:22-139.178.89.65:53966.service: Deactivated successfully. May 27 03:32:54.899652 systemd-logind[1551]: Session 44 logged out. Waiting for processes to exit. May 27 03:32:54.901289 systemd[1]: session-44.scope: Deactivated successfully. May 27 03:32:54.903134 systemd-logind[1551]: Removed session 44. May 27 03:32:55.063662 systemd[1]: Started sshd@47-157.180.65.55:22-139.178.89.65:43290.service - OpenSSH per-connection server daemon (139.178.89.65:43290). May 27 03:32:55.397836 containerd[1560]: time="2025-05-27T03:32:55.397757652Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6987d0fe81036d50ab8ef5c9659e8d4befdda41d53233178124e5de21b1dc1b4\" id:\"23bcc866268e6c62e23aaf08f612e27a06bfa819dcf411bb0d83cae2709d4df0\" pid:6918 exited_at:{seconds:1748316775 nanos:397293561}" May 27 03:32:56.034057 kubelet[2917]: E0527 03:32:56.033970 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-555bcbc6ff-596vx" podUID="20923581-35ae-477b-83e9-35d75acd3c66" May 27 03:32:56.074260 sshd[6905]: Accepted publickey for core from 139.178.89.65 port 43290 ssh2: RSA SHA256:TYEcoyDNyh3JcDi4HtBO5FTvj/3v8ZbDC5jXQ/LuvgI May 27 03:32:56.079070 sshd-session[6905]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:32:56.088680 systemd-logind[1551]: New session 45 of user core. May 27 03:32:56.098741 systemd[1]: Started session-45.scope - Session 45 of User core. 
May 27 03:32:56.975858 sshd[6927]: Connection closed by 139.178.89.65 port 43290 May 27 03:32:56.976382 sshd-session[6905]: pam_unix(sshd:session): session closed for user core May 27 03:32:56.979267 systemd-logind[1551]: Session 45 logged out. Waiting for processes to exit. May 27 03:32:56.979573 systemd[1]: sshd@47-157.180.65.55:22-139.178.89.65:43290.service: Deactivated successfully. May 27 03:32:56.982531 systemd[1]: session-45.scope: Deactivated successfully. May 27 03:32:56.985051 systemd-logind[1551]: Removed session 45. May 27 03:32:57.276075 containerd[1560]: time="2025-05-27T03:32:57.275949392Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6987d0fe81036d50ab8ef5c9659e8d4befdda41d53233178124e5de21b1dc1b4\" id:\"6ae0292519030aa895eff532494ee516f9d6c64a7099e71845fe4d68eec602fd\" pid:6949 exited_at:{seconds:1748316777 nanos:275501000}" May 27 03:33:02.149660 systemd[1]: Started sshd@48-157.180.65.55:22-139.178.89.65:43300.service - OpenSSH per-connection server daemon (139.178.89.65:43300). May 27 03:33:03.190923 sshd[6967]: Accepted publickey for core from 139.178.89.65 port 43300 ssh2: RSA SHA256:TYEcoyDNyh3JcDi4HtBO5FTvj/3v8ZbDC5jXQ/LuvgI May 27 03:33:03.193594 sshd-session[6967]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:33:03.203489 systemd-logind[1551]: New session 46 of user core. May 27 03:33:03.210691 systemd[1]: Started session-46.scope - Session 46 of User core. 
May 27 03:33:04.033754 kubelet[2917]: E0527 03:33:04.033662 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-xwqrr" podUID="9c59cb9e-2eb7-4e20-b889-53cd0bb9e4ed" May 27 03:33:04.160087 sshd[6969]: Connection closed by 139.178.89.65 port 43300 May 27 03:33:04.160728 sshd-session[6967]: pam_unix(sshd:session): session closed for user core May 27 03:33:04.174031 systemd-logind[1551]: Session 46 logged out. Waiting for processes to exit. May 27 03:33:04.174992 systemd[1]: sshd@48-157.180.65.55:22-139.178.89.65:43300.service: Deactivated successfully. May 27 03:33:04.178558 systemd[1]: session-46.scope: Deactivated successfully. May 27 03:33:04.182428 systemd-logind[1551]: Removed session 46. May 27 03:33:09.022153 kubelet[2917]: E0527 03:33:09.022050 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-555bcbc6ff-596vx" podUID="20923581-35ae-477b-83e9-35d75acd3c66" May 27 03:33:09.344376 systemd[1]: Started sshd@49-157.180.65.55:22-139.178.89.65:35036.service - OpenSSH per-connection server daemon (139.178.89.65:35036). May 27 03:33:10.364878 sshd[6983]: Accepted publickey for core from 139.178.89.65 port 35036 ssh2: RSA SHA256:TYEcoyDNyh3JcDi4HtBO5FTvj/3v8ZbDC5jXQ/LuvgI May 27 03:33:10.367054 sshd-session[6983]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:33:10.375871 systemd-logind[1551]: New session 47 of user core. 
May 27 03:33:10.381554 systemd[1]: Started session-47.scope - Session 47 of User core. May 27 03:33:11.243295 sshd[6985]: Connection closed by 139.178.89.65 port 35036 May 27 03:33:11.244114 sshd-session[6983]: pam_unix(sshd:session): session closed for user core May 27 03:33:11.249528 systemd[1]: sshd@49-157.180.65.55:22-139.178.89.65:35036.service: Deactivated successfully. May 27 03:33:11.253738 systemd[1]: session-47.scope: Deactivated successfully. May 27 03:33:11.255728 systemd-logind[1551]: Session 47 logged out. Waiting for processes to exit. May 27 03:33:11.258438 systemd-logind[1551]: Removed session 47. May 27 03:33:16.418059 systemd[1]: Started sshd@50-157.180.65.55:22-139.178.89.65:49368.service - OpenSSH per-connection server daemon (139.178.89.65:49368). May 27 03:33:17.411650 sshd[6997]: Accepted publickey for core from 139.178.89.65 port 49368 ssh2: RSA SHA256:TYEcoyDNyh3JcDi4HtBO5FTvj/3v8ZbDC5jXQ/LuvgI May 27 03:33:17.414095 sshd-session[6997]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:33:17.422852 systemd-logind[1551]: New session 48 of user core. May 27 03:33:17.431605 systemd[1]: Started session-48.scope - Session 48 of User core. May 27 03:33:18.041080 kubelet[2917]: E0527 03:33:18.040282 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-xwqrr" podUID="9c59cb9e-2eb7-4e20-b889-53cd0bb9e4ed" May 27 03:33:18.324027 sshd[6999]: Connection closed by 139.178.89.65 port 49368 May 27 03:33:18.327567 sshd-session[6997]: pam_unix(sshd:session): session closed for user core May 27 03:33:18.335027 systemd-logind[1551]: Session 48 logged out. Waiting for processes to exit. May 27 03:33:18.336057 systemd[1]: sshd@50-157.180.65.55:22-139.178.89.65:49368.service: Deactivated successfully. 
May 27 03:33:18.339668 systemd[1]: session-48.scope: Deactivated successfully. May 27 03:33:18.342288 systemd-logind[1551]: Removed session 48. May 27 03:33:21.884115 containerd[1560]: time="2025-05-27T03:33:21.884053470Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ca116aa036eb736891f311278a152a7e5a51b4c41f55c788135f885583d90f6b\" id:\"56878a76aae2a88c791e69f38bb192b531b9a4b6417cffe5685cb05524c0f1ea\" pid:7022 exited_at:{seconds:1748316801 nanos:882671346}" May 27 03:33:23.495996 systemd[1]: Started sshd@51-157.180.65.55:22-139.178.89.65:34244.service - OpenSSH per-connection server daemon (139.178.89.65:34244). May 27 03:33:24.025929 kubelet[2917]: E0527 03:33:24.025841 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-555bcbc6ff-596vx" podUID="20923581-35ae-477b-83e9-35d75acd3c66" May 27 03:33:24.535295 sshd[7035]: Accepted publickey for core from 139.178.89.65 port 34244 ssh2: RSA SHA256:TYEcoyDNyh3JcDi4HtBO5FTvj/3v8ZbDC5jXQ/LuvgI May 27 03:33:24.539282 sshd-session[7035]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:33:24.549176 systemd-logind[1551]: New session 49 of user core. May 27 03:33:24.554544 systemd[1]: Started session-49.scope - Session 49 of User core. May 27 03:33:25.814127 sshd[7039]: Connection closed by 139.178.89.65 port 34244 May 27 03:33:25.814898 sshd-session[7035]: pam_unix(sshd:session): session closed for user core May 27 03:33:25.818839 systemd[1]: sshd@51-157.180.65.55:22-139.178.89.65:34244.service: Deactivated successfully. May 27 03:33:25.822136 systemd[1]: session-49.scope: Deactivated successfully. 
May 27 03:33:25.825236 systemd-logind[1551]: Session 49 logged out. Waiting for processes to exit. May 27 03:33:25.827179 systemd-logind[1551]: Removed session 49. May 27 03:33:27.295677 containerd[1560]: time="2025-05-27T03:33:27.295151241Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6987d0fe81036d50ab8ef5c9659e8d4befdda41d53233178124e5de21b1dc1b4\" id:\"2d4cf5f4858b0d01330bcc0fc41c21c91e8d335f435c8ad59be706f975d66f04\" pid:7063 exited_at:{seconds:1748316807 nanos:294875123}" May 27 03:33:30.988565 systemd[1]: Started sshd@52-157.180.65.55:22-139.178.89.65:34250.service - OpenSSH per-connection server daemon (139.178.89.65:34250). May 27 03:33:31.996881 sshd[7073]: Accepted publickey for core from 139.178.89.65 port 34250 ssh2: RSA SHA256:TYEcoyDNyh3JcDi4HtBO5FTvj/3v8ZbDC5jXQ/LuvgI May 27 03:33:31.998699 sshd-session[7073]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:33:32.005593 systemd-logind[1551]: New session 50 of user core. May 27 03:33:32.011456 systemd[1]: Started session-50.scope - Session 50 of User core. May 27 03:33:32.779714 sshd[7077]: Connection closed by 139.178.89.65 port 34250 May 27 03:33:32.780768 sshd-session[7073]: pam_unix(sshd:session): session closed for user core May 27 03:33:32.787694 systemd[1]: sshd@52-157.180.65.55:22-139.178.89.65:34250.service: Deactivated successfully. May 27 03:33:32.791726 systemd[1]: session-50.scope: Deactivated successfully. May 27 03:33:32.793542 systemd-logind[1551]: Session 50 logged out. Waiting for processes to exit. May 27 03:33:32.796751 systemd-logind[1551]: Removed session 50. 
May 27 03:33:33.021511 kubelet[2917]: E0527 03:33:33.021297 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-xwqrr" podUID="9c59cb9e-2eb7-4e20-b889-53cd0bb9e4ed" May 27 03:33:37.953513 systemd[1]: Started sshd@53-157.180.65.55:22-139.178.89.65:55308.service - OpenSSH per-connection server daemon (139.178.89.65:55308). May 27 03:33:38.958195 sshd[7088]: Accepted publickey for core from 139.178.89.65 port 55308 ssh2: RSA SHA256:TYEcoyDNyh3JcDi4HtBO5FTvj/3v8ZbDC5jXQ/LuvgI May 27 03:33:38.960625 sshd-session[7088]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:33:38.971142 systemd-logind[1551]: New session 51 of user core. May 27 03:33:38.978649 systemd[1]: Started session-51.scope - Session 51 of User core. May 27 03:33:39.022346 kubelet[2917]: E0527 03:33:39.022202 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-555bcbc6ff-596vx" podUID="20923581-35ae-477b-83e9-35d75acd3c66" May 27 03:33:39.745222 sshd[7090]: Connection closed by 139.178.89.65 port 55308 May 27 03:33:39.746503 sshd-session[7088]: pam_unix(sshd:session): session closed for user core May 27 03:33:39.753734 systemd[1]: sshd@53-157.180.65.55:22-139.178.89.65:55308.service: Deactivated successfully. May 27 03:33:39.757394 systemd[1]: session-51.scope: Deactivated successfully. May 27 03:33:39.759796 systemd-logind[1551]: Session 51 logged out. Waiting for processes to exit. 
May 27 03:33:39.763335 systemd-logind[1551]: Removed session 51. May 27 03:33:44.919837 systemd[1]: Started sshd@54-157.180.65.55:22-139.178.89.65:33724.service - OpenSSH per-connection server daemon (139.178.89.65:33724). May 27 03:33:45.922355 sshd[7102]: Accepted publickey for core from 139.178.89.65 port 33724 ssh2: RSA SHA256:TYEcoyDNyh3JcDi4HtBO5FTvj/3v8ZbDC5jXQ/LuvgI May 27 03:33:45.925090 sshd-session[7102]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:33:45.935390 systemd-logind[1551]: New session 52 of user core. May 27 03:33:45.939544 systemd[1]: Started session-52.scope - Session 52 of User core. May 27 03:33:46.710340 sshd[7104]: Connection closed by 139.178.89.65 port 33724 May 27 03:33:46.712242 sshd-session[7102]: pam_unix(sshd:session): session closed for user core May 27 03:33:46.717981 systemd-logind[1551]: Session 52 logged out. Waiting for processes to exit. May 27 03:33:46.718538 systemd[1]: sshd@54-157.180.65.55:22-139.178.89.65:33724.service: Deactivated successfully. May 27 03:33:46.720896 systemd[1]: session-52.scope: Deactivated successfully. May 27 03:33:46.724941 systemd-logind[1551]: Removed session 52. May 27 03:33:48.020998 kubelet[2917]: E0527 03:33:48.020442 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-xwqrr" podUID="9c59cb9e-2eb7-4e20-b889-53cd0bb9e4ed" May 27 03:33:51.886681 systemd[1]: Started sshd@55-157.180.65.55:22-139.178.89.65:33732.service - OpenSSH per-connection server daemon (139.178.89.65:33732). 
May 27 03:33:51.910543 containerd[1560]: time="2025-05-27T03:33:51.909783983Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ca116aa036eb736891f311278a152a7e5a51b4c41f55c788135f885583d90f6b\" id:\"1c9423d1f3a93885eca49161699260bad5bb4cc9a4f0f5414093230db7c7b683\" pid:7127 exited_at:{seconds:1748316831 nanos:908846964}" May 27 03:33:52.881769 sshd[7141]: Accepted publickey for core from 139.178.89.65 port 33732 ssh2: RSA SHA256:TYEcoyDNyh3JcDi4HtBO5FTvj/3v8ZbDC5jXQ/LuvgI May 27 03:33:52.883750 sshd-session[7141]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:33:52.892448 systemd-logind[1551]: New session 53 of user core. May 27 03:33:52.901793 systemd[1]: Started session-53.scope - Session 53 of User core. May 27 03:33:53.020913 kubelet[2917]: E0527 03:33:53.020878 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-555bcbc6ff-596vx" podUID="20923581-35ae-477b-83e9-35d75acd3c66" May 27 03:33:53.627330 sshd[7143]: Connection closed by 139.178.89.65 port 33732 May 27 03:33:53.627844 sshd-session[7141]: pam_unix(sshd:session): session closed for user core May 27 03:33:53.633181 systemd-logind[1551]: Session 53 logged out. Waiting for processes to exit. May 27 03:33:53.634910 systemd[1]: sshd@55-157.180.65.55:22-139.178.89.65:33732.service: Deactivated successfully. May 27 03:33:53.637737 systemd[1]: session-53.scope: Deactivated successfully. May 27 03:33:53.640230 systemd-logind[1551]: Removed session 53. 
May 27 03:33:55.398156 containerd[1560]: time="2025-05-27T03:33:55.398084689Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6987d0fe81036d50ab8ef5c9659e8d4befdda41d53233178124e5de21b1dc1b4\" id:\"88c44d28fb2e35e80bbd18888054d5537ef4053574b6ab41784009d00b160ab6\" pid:7169 exited_at:{seconds:1748316835 nanos:397774197}" May 27 03:33:57.309858 containerd[1560]: time="2025-05-27T03:33:57.309812395Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6987d0fe81036d50ab8ef5c9659e8d4befdda41d53233178124e5de21b1dc1b4\" id:\"ef3a4297d8ef756658b7592da7cc154c5d46d4b9f7c72f987a3c607c17580a49\" pid:7190 exited_at:{seconds:1748316837 nanos:309370737}" May 27 03:33:58.803219 systemd[1]: Started sshd@56-157.180.65.55:22-139.178.89.65:35308.service - OpenSSH per-connection server daemon (139.178.89.65:35308). May 27 03:33:59.809429 sshd[7201]: Accepted publickey for core from 139.178.89.65 port 35308 ssh2: RSA SHA256:TYEcoyDNyh3JcDi4HtBO5FTvj/3v8ZbDC5jXQ/LuvgI May 27 03:33:59.812029 sshd-session[7201]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:33:59.821682 systemd-logind[1551]: New session 54 of user core. May 27 03:33:59.829013 systemd[1]: Started session-54.scope - Session 54 of User core. May 27 03:34:00.576795 sshd[7203]: Connection closed by 139.178.89.65 port 35308 May 27 03:34:00.578119 sshd-session[7201]: pam_unix(sshd:session): session closed for user core May 27 03:34:00.580654 systemd-logind[1551]: Session 54 logged out. Waiting for processes to exit. May 27 03:34:00.581983 systemd[1]: sshd@56-157.180.65.55:22-139.178.89.65:35308.service: Deactivated successfully. May 27 03:34:00.584165 systemd[1]: session-54.scope: Deactivated successfully. May 27 03:34:00.586247 systemd-logind[1551]: Removed session 54. 
May 27 03:34:01.021717 kubelet[2917]: E0527 03:34:01.021636 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-xwqrr" podUID="9c59cb9e-2eb7-4e20-b889-53cd0bb9e4ed" May 27 03:34:05.024526 kubelet[2917]: E0527 03:34:05.022548 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-555bcbc6ff-596vx" podUID="20923581-35ae-477b-83e9-35d75acd3c66" May 27 03:34:05.748920 systemd[1]: Started sshd@57-157.180.65.55:22-139.178.89.65:57504.service - OpenSSH per-connection server daemon (139.178.89.65:57504). May 27 03:34:06.726471 sshd[7238]: Accepted publickey for core from 139.178.89.65 port 57504 ssh2: RSA SHA256:TYEcoyDNyh3JcDi4HtBO5FTvj/3v8ZbDC5jXQ/LuvgI May 27 03:34:06.727938 sshd-session[7238]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:34:06.733139 systemd-logind[1551]: New session 55 of user core. May 27 03:34:06.736616 systemd[1]: Started session-55.scope - Session 55 of User core. May 27 03:34:07.511015 sshd[7240]: Connection closed by 139.178.89.65 port 57504 May 27 03:34:07.511951 sshd-session[7238]: pam_unix(sshd:session): session closed for user core May 27 03:34:07.517798 systemd-logind[1551]: Session 55 logged out. Waiting for processes to exit. May 27 03:34:07.518761 systemd[1]: sshd@57-157.180.65.55:22-139.178.89.65:57504.service: Deactivated successfully. May 27 03:34:07.521724 systemd[1]: session-55.scope: Deactivated successfully. 
May 27 03:34:07.525476 systemd-logind[1551]: Removed session 55. May 27 03:34:12.684225 systemd[1]: Started sshd@58-157.180.65.55:22-139.178.89.65:57508.service - OpenSSH per-connection server daemon (139.178.89.65:57508). May 27 03:34:13.667021 sshd[7253]: Accepted publickey for core from 139.178.89.65 port 57508 ssh2: RSA SHA256:TYEcoyDNyh3JcDi4HtBO5FTvj/3v8ZbDC5jXQ/LuvgI May 27 03:34:13.669680 sshd-session[7253]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:34:13.678486 systemd-logind[1551]: New session 56 of user core. May 27 03:34:13.694687 systemd[1]: Started session-56.scope - Session 56 of User core. May 27 03:34:14.457645 sshd[7255]: Connection closed by 139.178.89.65 port 57508 May 27 03:34:14.458419 sshd-session[7253]: pam_unix(sshd:session): session closed for user core May 27 03:34:14.463871 systemd-logind[1551]: Session 56 logged out. Waiting for processes to exit. May 27 03:34:14.465004 systemd[1]: sshd@58-157.180.65.55:22-139.178.89.65:57508.service: Deactivated successfully. May 27 03:34:14.467876 systemd[1]: session-56.scope: Deactivated successfully. May 27 03:34:14.470764 systemd-logind[1551]: Removed session 56. 
May 27 03:34:15.020926 kubelet[2917]: E0527 03:34:15.020844 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-xwqrr" podUID="9c59cb9e-2eb7-4e20-b889-53cd0bb9e4ed" May 27 03:34:16.022754 kubelet[2917]: E0527 03:34:16.022586 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-555bcbc6ff-596vx" podUID="20923581-35ae-477b-83e9-35d75acd3c66" May 27 03:34:19.629968 systemd[1]: Started sshd@59-157.180.65.55:22-139.178.89.65:57826.service - OpenSSH per-connection server daemon (139.178.89.65:57826). May 27 03:34:20.627054 sshd[7267]: Accepted publickey for core from 139.178.89.65 port 57826 ssh2: RSA SHA256:TYEcoyDNyh3JcDi4HtBO5FTvj/3v8ZbDC5jXQ/LuvgI May 27 03:34:20.629362 sshd-session[7267]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:34:20.639799 systemd-logind[1551]: New session 57 of user core. May 27 03:34:20.645571 systemd[1]: Started session-57.scope - Session 57 of User core. May 27 03:34:21.411750 sshd[7269]: Connection closed by 139.178.89.65 port 57826 May 27 03:34:21.413042 sshd-session[7267]: pam_unix(sshd:session): session closed for user core May 27 03:34:21.419249 systemd[1]: sshd@59-157.180.65.55:22-139.178.89.65:57826.service: Deactivated successfully. May 27 03:34:21.422411 systemd[1]: session-57.scope: Deactivated successfully. May 27 03:34:21.424846 systemd-logind[1551]: Session 57 logged out. Waiting for processes to exit. 
May 27 03:34:21.427265 systemd-logind[1551]: Removed session 57. May 27 03:34:21.869780 containerd[1560]: time="2025-05-27T03:34:21.869294441Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ca116aa036eb736891f311278a152a7e5a51b4c41f55c788135f885583d90f6b\" id:\"59e5f6bd5b75414e6e481947fc5b2e05c26d574037d807dfc4f44b3c03402c6d\" pid:7292 exited_at:{seconds:1748316861 nanos:868750921}" May 27 03:34:26.037172 kubelet[2917]: E0527 03:34:26.037080 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-xwqrr" podUID="9c59cb9e-2eb7-4e20-b889-53cd0bb9e4ed" May 27 03:34:26.581086 systemd[1]: Started sshd@60-157.180.65.55:22-139.178.89.65:33734.service - OpenSSH per-connection server daemon (139.178.89.65:33734). May 27 03:34:27.020212 kubelet[2917]: E0527 03:34:27.020128 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-555bcbc6ff-596vx" podUID="20923581-35ae-477b-83e9-35d75acd3c66" May 27 03:34:27.301489 containerd[1560]: time="2025-05-27T03:34:27.301280960Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6987d0fe81036d50ab8ef5c9659e8d4befdda41d53233178124e5de21b1dc1b4\" id:\"b0168ce4d9016db50dceb80f70239ef3cbe91060827d033e323292b2f16f2a62\" pid:7317 exited_at:{seconds:1748316867 nanos:300971979}" May 27 03:34:27.586450 sshd[7303]: Accepted publickey for core from 139.178.89.65 port 33734 ssh2: RSA SHA256:TYEcoyDNyh3JcDi4HtBO5FTvj/3v8ZbDC5jXQ/LuvgI May 27 03:34:27.588368 sshd-session[7303]: 
pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:34:27.597508 systemd-logind[1551]: New session 58 of user core. May 27 03:34:27.602631 systemd[1]: Started session-58.scope - Session 58 of User core. May 27 03:34:28.461593 sshd[7327]: Connection closed by 139.178.89.65 port 33734 May 27 03:34:28.462562 sshd-session[7303]: pam_unix(sshd:session): session closed for user core May 27 03:34:28.468038 systemd[1]: sshd@60-157.180.65.55:22-139.178.89.65:33734.service: Deactivated successfully. May 27 03:34:28.471011 systemd[1]: session-58.scope: Deactivated successfully. May 27 03:34:28.473236 systemd-logind[1551]: Session 58 logged out. Waiting for processes to exit. May 27 03:34:28.475505 systemd-logind[1551]: Removed session 58. May 27 03:34:33.633008 systemd[1]: Started sshd@61-157.180.65.55:22-139.178.89.65:44668.service - OpenSSH per-connection server daemon (139.178.89.65:44668). May 27 03:34:34.625149 sshd[7341]: Accepted publickey for core from 139.178.89.65 port 44668 ssh2: RSA SHA256:TYEcoyDNyh3JcDi4HtBO5FTvj/3v8ZbDC5jXQ/LuvgI May 27 03:34:34.627823 sshd-session[7341]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:34:34.637021 systemd-logind[1551]: New session 59 of user core. May 27 03:34:34.641548 systemd[1]: Started session-59.scope - Session 59 of User core. May 27 03:34:35.438683 sshd[7343]: Connection closed by 139.178.89.65 port 44668 May 27 03:34:35.442434 sshd-session[7341]: pam_unix(sshd:session): session closed for user core May 27 03:34:35.449578 systemd[1]: sshd@61-157.180.65.55:22-139.178.89.65:44668.service: Deactivated successfully. May 27 03:34:35.453281 systemd[1]: session-59.scope: Deactivated successfully. May 27 03:34:35.456267 systemd-logind[1551]: Session 59 logged out. Waiting for processes to exit. May 27 03:34:35.459467 systemd-logind[1551]: Removed session 59. 
May 27 03:34:38.021252 kubelet[2917]: E0527 03:34:38.020953 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-xwqrr" podUID="9c59cb9e-2eb7-4e20-b889-53cd0bb9e4ed" May 27 03:34:39.021585 kubelet[2917]: E0527 03:34:39.021446 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-555bcbc6ff-596vx" podUID="20923581-35ae-477b-83e9-35d75acd3c66" May 27 03:34:40.612238 systemd[1]: Started sshd@62-157.180.65.55:22-139.178.89.65:44672.service - OpenSSH per-connection server daemon (139.178.89.65:44672). May 27 03:34:41.619972 sshd[7356]: Accepted publickey for core from 139.178.89.65 port 44672 ssh2: RSA SHA256:TYEcoyDNyh3JcDi4HtBO5FTvj/3v8ZbDC5jXQ/LuvgI May 27 03:34:41.621421 sshd-session[7356]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:34:41.626150 systemd-logind[1551]: New session 60 of user core. May 27 03:34:41.632472 systemd[1]: Started session-60.scope - Session 60 of User core. May 27 03:34:42.477548 sshd[7358]: Connection closed by 139.178.89.65 port 44672 May 27 03:34:42.480468 sshd-session[7356]: pam_unix(sshd:session): session closed for user core May 27 03:34:42.486679 systemd[1]: sshd@62-157.180.65.55:22-139.178.89.65:44672.service: Deactivated successfully. May 27 03:34:42.489225 systemd[1]: session-60.scope: Deactivated successfully. May 27 03:34:42.491458 systemd-logind[1551]: Session 60 logged out. Waiting for processes to exit. 
May 27 03:34:42.492797 systemd-logind[1551]: Removed session 60. May 27 03:34:47.652090 systemd[1]: Started sshd@63-157.180.65.55:22-139.178.89.65:45858.service - OpenSSH per-connection server daemon (139.178.89.65:45858). May 27 03:34:48.654973 sshd[7370]: Accepted publickey for core from 139.178.89.65 port 45858 ssh2: RSA SHA256:TYEcoyDNyh3JcDi4HtBO5FTvj/3v8ZbDC5jXQ/LuvgI May 27 03:34:48.657584 sshd-session[7370]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:34:48.668400 systemd-logind[1551]: New session 61 of user core. May 27 03:34:48.674454 systemd[1]: Started session-61.scope - Session 61 of User core. May 27 03:34:49.604570 sshd[7372]: Connection closed by 139.178.89.65 port 45858 May 27 03:34:49.606015 sshd-session[7370]: pam_unix(sshd:session): session closed for user core May 27 03:34:49.610641 systemd[1]: sshd@63-157.180.65.55:22-139.178.89.65:45858.service: Deactivated successfully. May 27 03:34:49.613164 systemd[1]: session-61.scope: Deactivated successfully. May 27 03:34:49.615106 systemd-logind[1551]: Session 61 logged out. Waiting for processes to exit. May 27 03:34:49.617006 systemd-logind[1551]: Removed session 61. 
May 27 03:34:50.030109 kubelet[2917]: E0527 03:34:50.030036 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-xwqrr" podUID="9c59cb9e-2eb7-4e20-b889-53cd0bb9e4ed" May 27 03:34:51.022437 kubelet[2917]: E0527 03:34:51.022353 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-555bcbc6ff-596vx" podUID="20923581-35ae-477b-83e9-35d75acd3c66" May 27 03:34:51.859396 containerd[1560]: time="2025-05-27T03:34:51.859275794Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ca116aa036eb736891f311278a152a7e5a51b4c41f55c788135f885583d90f6b\" id:\"735d40f2ecf7b25301d99f5536521ef3fe76ea793db65aaab201c7131a30c7fe\" pid:7397 exited_at:{seconds:1748316891 nanos:858796543}" May 27 03:34:54.781353 systemd[1]: Started sshd@64-157.180.65.55:22-139.178.89.65:57362.service - OpenSSH per-connection server daemon (139.178.89.65:57362). 
May 27 03:34:55.390183 containerd[1560]: time="2025-05-27T03:34:55.390118924Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6987d0fe81036d50ab8ef5c9659e8d4befdda41d53233178124e5de21b1dc1b4\" id:\"1f056f99244d395ee4ad4eaf93fa5923007129bb241362a63075a7e780f47cb4\" pid:7426 exited_at:{seconds:1748316895 nanos:389402429}" May 27 03:34:55.793287 sshd[7412]: Accepted publickey for core from 139.178.89.65 port 57362 ssh2: RSA SHA256:TYEcoyDNyh3JcDi4HtBO5FTvj/3v8ZbDC5jXQ/LuvgI May 27 03:34:55.796053 sshd-session[7412]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:34:55.802613 systemd-logind[1551]: New session 62 of user core. May 27 03:34:55.810272 systemd[1]: Started session-62.scope - Session 62 of User core. May 27 03:34:56.775567 sshd[7434]: Connection closed by 139.178.89.65 port 57362 May 27 03:34:56.776678 sshd-session[7412]: pam_unix(sshd:session): session closed for user core May 27 03:34:56.781385 systemd[1]: sshd@64-157.180.65.55:22-139.178.89.65:57362.service: Deactivated successfully. May 27 03:34:56.784517 systemd[1]: session-62.scope: Deactivated successfully. May 27 03:34:56.786586 systemd-logind[1551]: Session 62 logged out. Waiting for processes to exit. May 27 03:34:56.789255 systemd-logind[1551]: Removed session 62. May 27 03:34:57.291192 containerd[1560]: time="2025-05-27T03:34:57.291048713Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6987d0fe81036d50ab8ef5c9659e8d4befdda41d53233178124e5de21b1dc1b4\" id:\"eb234c2d74164bb34907fb66304c831339a58952f876f8229e0060ff94e7a798\" pid:7458 exited_at:{seconds:1748316897 nanos:290417978}" May 27 03:35:01.947978 systemd[1]: Started sshd@65-157.180.65.55:22-139.178.89.65:57370.service - OpenSSH per-connection server daemon (139.178.89.65:57370). 
May 27 03:35:02.942030 sshd[7470]: Accepted publickey for core from 139.178.89.65 port 57370 ssh2: RSA SHA256:TYEcoyDNyh3JcDi4HtBO5FTvj/3v8ZbDC5jXQ/LuvgI May 27 03:35:02.944586 sshd-session[7470]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:35:02.952555 systemd-logind[1551]: New session 63 of user core. May 27 03:35:02.959497 systemd[1]: Started session-63.scope - Session 63 of User core. May 27 03:35:03.021012 kubelet[2917]: E0527 03:35:03.020968 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-xwqrr" podUID="9c59cb9e-2eb7-4e20-b889-53cd0bb9e4ed" May 27 03:35:03.723343 sshd[7472]: Connection closed by 139.178.89.65 port 57370 May 27 03:35:03.724615 sshd-session[7470]: pam_unix(sshd:session): session closed for user core May 27 03:35:03.727645 systemd-logind[1551]: Session 63 logged out. Waiting for processes to exit. May 27 03:35:03.728385 systemd[1]: sshd@65-157.180.65.55:22-139.178.89.65:57370.service: Deactivated successfully. May 27 03:35:03.731995 systemd[1]: session-63.scope: Deactivated successfully. May 27 03:35:03.734451 systemd-logind[1551]: Removed session 63. 
May 27 03:35:04.026568 kubelet[2917]: E0527 03:35:04.025575 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-555bcbc6ff-596vx" podUID="20923581-35ae-477b-83e9-35d75acd3c66" May 27 03:35:08.891973 systemd[1]: Started sshd@66-157.180.65.55:22-139.178.89.65:55854.service - OpenSSH per-connection server daemon (139.178.89.65:55854). May 27 03:35:09.886524 sshd[7484]: Accepted publickey for core from 139.178.89.65 port 55854 ssh2: RSA SHA256:TYEcoyDNyh3JcDi4HtBO5FTvj/3v8ZbDC5jXQ/LuvgI May 27 03:35:09.888740 sshd-session[7484]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:35:09.897004 systemd-logind[1551]: New session 64 of user core. May 27 03:35:09.905465 systemd[1]: Started session-64.scope - Session 64 of User core. May 27 03:35:10.786512 sshd[7486]: Connection closed by 139.178.89.65 port 55854 May 27 03:35:10.789300 sshd-session[7484]: pam_unix(sshd:session): session closed for user core May 27 03:35:10.796687 systemd-logind[1551]: Session 64 logged out. Waiting for processes to exit. May 27 03:35:10.797694 systemd[1]: sshd@66-157.180.65.55:22-139.178.89.65:55854.service: Deactivated successfully. May 27 03:35:10.801512 systemd[1]: session-64.scope: Deactivated successfully. May 27 03:35:10.803970 systemd-logind[1551]: Removed session 64. May 27 03:35:15.958875 systemd[1]: Started sshd@67-157.180.65.55:22-139.178.89.65:40570.service - OpenSSH per-connection server daemon (139.178.89.65:40570). 
May 27 03:35:16.021009 kubelet[2917]: E0527 03:35:16.020959 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-xwqrr" podUID="9c59cb9e-2eb7-4e20-b889-53cd0bb9e4ed" May 27 03:35:16.984476 sshd[7497]: Accepted publickey for core from 139.178.89.65 port 40570 ssh2: RSA SHA256:TYEcoyDNyh3JcDi4HtBO5FTvj/3v8ZbDC5jXQ/LuvgI May 27 03:35:16.987669 sshd-session[7497]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:35:16.997010 systemd-logind[1551]: New session 65 of user core. May 27 03:35:17.003604 systemd[1]: Started session-65.scope - Session 65 of User core. May 27 03:35:17.913098 sshd[7499]: Connection closed by 139.178.89.65 port 40570 May 27 03:35:17.917292 sshd-session[7497]: pam_unix(sshd:session): session closed for user core May 27 03:35:17.924906 systemd[1]: sshd@67-157.180.65.55:22-139.178.89.65:40570.service: Deactivated successfully. May 27 03:35:17.929135 systemd[1]: session-65.scope: Deactivated successfully. May 27 03:35:17.934261 systemd-logind[1551]: Session 65 logged out. Waiting for processes to exit. May 27 03:35:17.936619 systemd-logind[1551]: Removed session 65. 
May 27 03:35:19.022168 kubelet[2917]: E0527 03:35:19.022099 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-555bcbc6ff-596vx" podUID="20923581-35ae-477b-83e9-35d75acd3c66" May 27 03:35:21.910561 containerd[1560]: time="2025-05-27T03:35:21.910479071Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ca116aa036eb736891f311278a152a7e5a51b4c41f55c788135f885583d90f6b\" id:\"35b2b50ce4687a668e922285b24d114e304e1c89d3d0648d485ea2ddbe1d898b\" pid:7529 exited_at:{seconds:1748316921 nanos:910033835}" May 27 03:35:23.093701 systemd[1]: Started sshd@68-157.180.65.55:22-139.178.89.65:40584.service - OpenSSH per-connection server daemon (139.178.89.65:40584). May 27 03:35:24.121054 sshd[7542]: Accepted publickey for core from 139.178.89.65 port 40584 ssh2: RSA SHA256:TYEcoyDNyh3JcDi4HtBO5FTvj/3v8ZbDC5jXQ/LuvgI May 27 03:35:24.124636 sshd-session[7542]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:35:24.133835 systemd-logind[1551]: New session 66 of user core. May 27 03:35:24.142524 systemd[1]: Started session-66.scope - Session 66 of User core. May 27 03:35:25.445458 sshd[7546]: Connection closed by 139.178.89.65 port 40584 May 27 03:35:25.446251 sshd-session[7542]: pam_unix(sshd:session): session closed for user core May 27 03:35:25.455012 systemd-logind[1551]: Session 66 logged out. Waiting for processes to exit. May 27 03:35:25.455157 systemd[1]: sshd@68-157.180.65.55:22-139.178.89.65:40584.service: Deactivated successfully. May 27 03:35:25.458968 systemd[1]: session-66.scope: Deactivated successfully. May 27 03:35:25.465349 systemd-logind[1551]: Removed session 66. 
May 27 03:35:27.272969 containerd[1560]: time="2025-05-27T03:35:27.272916087Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6987d0fe81036d50ab8ef5c9659e8d4befdda41d53233178124e5de21b1dc1b4\" id:\"b2abda8abce47e0319a8df858f6e5b001f3daa56501568126c1c953d1d46db98\" pid:7569 exited_at:{seconds:1748316927 nanos:272667730}" May 27 03:35:28.057049 kubelet[2917]: E0527 03:35:28.056946 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-xwqrr" podUID="9c59cb9e-2eb7-4e20-b889-53cd0bb9e4ed" May 27 03:35:30.627662 systemd[1]: Started sshd@69-157.180.65.55:22-139.178.89.65:57950.service - OpenSSH per-connection server daemon (139.178.89.65:57950). May 27 03:35:31.057257 containerd[1560]: time="2025-05-27T03:35:31.057171226Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 27 03:35:31.363271 containerd[1560]: time="2025-05-27T03:35:31.363029424Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 03:35:31.364985 containerd[1560]: time="2025-05-27T03:35:31.364884878Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 27 03:35:31.365095 containerd[1560]: time="2025-05-27T03:35:31.364930163Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 27 03:35:31.368518 kubelet[2917]: E0527 03:35:31.368380 2917 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 27 03:35:31.371255 kubelet[2917]: E0527 03:35:31.371178 2917 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 27 03:35:31.416716 kubelet[2917]: E0527 03:35:31.416537 2917 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:bee51492bca3428982f094867f4c4710,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6jzjr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-555bcbc6ff-596vx_calico-system(20923581-35ae-477b-83e9-35d75acd3c66): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 03:35:31.419428 containerd[1560]: 
time="2025-05-27T03:35:31.419371190Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 27 03:35:31.644253 sshd[7579]: Accepted publickey for core from 139.178.89.65 port 57950 ssh2: RSA SHA256:TYEcoyDNyh3JcDi4HtBO5FTvj/3v8ZbDC5jXQ/LuvgI May 27 03:35:31.646513 sshd-session[7579]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:35:31.655427 systemd-logind[1551]: New session 67 of user core. May 27 03:35:31.662096 systemd[1]: Started session-67.scope - Session 67 of User core. May 27 03:35:31.720433 containerd[1560]: time="2025-05-27T03:35:31.720343603Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 03:35:31.722429 containerd[1560]: time="2025-05-27T03:35:31.722093949Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 27 03:35:31.722429 containerd[1560]: time="2025-05-27T03:35:31.722257657Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 27 03:35:31.722852 kubelet[2917]: E0527 03:35:31.722768 2917 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: 
failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 27 03:35:31.722965 kubelet[2917]: E0527 03:35:31.722860 2917 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 27 03:35:31.723171 kubelet[2917]: E0527 03:35:31.723084 2917 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6jzjr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathEx
pr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-555bcbc6ff-596vx_calico-system(20923581-35ae-477b-83e9-35d75acd3c66): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 03:35:31.724541 kubelet[2917]: E0527 03:35:31.724479 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-555bcbc6ff-596vx" podUID="20923581-35ae-477b-83e9-35d75acd3c66" May 27 03:35:32.666190 sshd[7584]: Connection closed by 139.178.89.65 port 57950 May 27 03:35:32.669260 sshd-session[7579]: pam_unix(sshd:session): session closed for user core May 27 03:35:32.676262 systemd[1]: sshd@69-157.180.65.55:22-139.178.89.65:57950.service: Deactivated successfully. May 27 03:35:32.680092 systemd[1]: session-67.scope: Deactivated successfully. May 27 03:35:32.683505 systemd-logind[1551]: Session 67 logged out. Waiting for processes to exit. May 27 03:35:32.686112 systemd-logind[1551]: Removed session 67. May 27 03:35:37.836439 systemd[1]: Started sshd@70-157.180.65.55:22-139.178.89.65:41108.service - OpenSSH per-connection server daemon (139.178.89.65:41108). May 27 03:35:38.858510 sshd[7597]: Accepted publickey for core from 139.178.89.65 port 41108 ssh2: RSA SHA256:TYEcoyDNyh3JcDi4HtBO5FTvj/3v8ZbDC5jXQ/LuvgI May 27 03:35:38.861450 sshd-session[7597]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:35:38.871452 systemd-logind[1551]: New session 68 of user core. May 27 03:35:38.880573 systemd[1]: Started session-68.scope - Session 68 of User core. May 27 03:35:39.669143 sshd[7607]: Connection closed by 139.178.89.65 port 41108 May 27 03:35:39.670110 sshd-session[7597]: pam_unix(sshd:session): session closed for user core May 27 03:35:39.673236 systemd[1]: sshd@70-157.180.65.55:22-139.178.89.65:41108.service: Deactivated successfully. May 27 03:35:39.675108 systemd[1]: session-68.scope: Deactivated successfully. May 27 03:35:39.677020 systemd-logind[1551]: Session 68 logged out. Waiting for processes to exit. 
May 27 03:35:39.678563 systemd-logind[1551]: Removed session 68. May 27 03:35:43.021749 containerd[1560]: time="2025-05-27T03:35:43.021669000Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 27 03:35:43.341149 containerd[1560]: time="2025-05-27T03:35:43.340952349Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 03:35:43.342646 containerd[1560]: time="2025-05-27T03:35:43.342566319Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 27 03:35:43.342829 containerd[1560]: time="2025-05-27T03:35:43.342699910Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 27 03:35:43.342950 kubelet[2917]: E0527 03:35:43.342902 2917 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 27 03:35:43.343496 kubelet[2917]: E0527 03:35:43.342972 2917 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image 
\"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 27 03:35:43.343718 kubelet[2917]: E0527 03:35:43.343594 2917 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f7drb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Comma
nd:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-8f77d7b6c-xwqrr_calico-system(9c59cb9e-2eb7-4e20-b889-53cd0bb9e4ed): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 03:35:43.344942 kubelet[2917]: E0527 03:35:43.344895 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status 
from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-8f77d7b6c-xwqrr" podUID="9c59cb9e-2eb7-4e20-b889-53cd0bb9e4ed" May 27 03:35:44.023848 kubelet[2917]: E0527 03:35:44.023719 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-555bcbc6ff-596vx" podUID="20923581-35ae-477b-83e9-35d75acd3c66" May 27 03:35:44.836434 systemd[1]: Started sshd@71-157.180.65.55:22-139.178.89.65:48742.service - OpenSSH per-connection server daemon (139.178.89.65:48742). May 27 03:35:45.803136 sshd[7633]: Accepted publickey for core from 139.178.89.65 port 48742 ssh2: RSA SHA256:TYEcoyDNyh3JcDi4HtBO5FTvj/3v8ZbDC5jXQ/LuvgI May 27 03:35:45.805135 sshd-session[7633]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:35:45.812282 systemd-logind[1551]: New session 69 of user core. May 27 03:35:45.822559 systemd[1]: Started session-69.scope - Session 69 of User core. May 27 03:35:46.604985 sshd[7635]: Connection closed by 139.178.89.65 port 48742 May 27 03:35:46.606033 sshd-session[7633]: pam_unix(sshd:session): session closed for user core May 27 03:35:46.614014 systemd[1]: sshd@71-157.180.65.55:22-139.178.89.65:48742.service: Deactivated successfully. May 27 03:35:46.618271 systemd[1]: session-69.scope: Deactivated successfully. May 27 03:35:46.619773 systemd-logind[1551]: Session 69 logged out. Waiting for processes to exit. May 27 03:35:46.622141 systemd-logind[1551]: Removed session 69. 
May 27 03:35:51.780005 systemd[1]: Started sshd@72-157.180.65.55:22-139.178.89.65:48754.service - OpenSSH per-connection server daemon (139.178.89.65:48754). May 27 03:35:51.872588 containerd[1560]: time="2025-05-27T03:35:51.872413809Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ca116aa036eb736891f311278a152a7e5a51b4c41f55c788135f885583d90f6b\" id:\"5ef42c3cbda35396fef7a2171cce4da9b2a0e07495365f631a83fb11b2fb537d\" pid:7662 exited_at:{seconds:1748316951 nanos:872053964}" May 27 03:35:52.773048 sshd[7647]: Accepted publickey for core from 139.178.89.65 port 48754 ssh2: RSA SHA256:TYEcoyDNyh3JcDi4HtBO5FTvj/3v8ZbDC5jXQ/LuvgI May 27 03:35:52.776012 sshd-session[7647]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:35:52.785396 systemd-logind[1551]: New session 70 of user core. May 27 03:35:52.791544 systemd[1]: Started session-70.scope - Session 70 of User core. May 27 03:35:53.595746 sshd[7674]: Connection closed by 139.178.89.65 port 48754 May 27 03:35:53.596622 sshd-session[7647]: pam_unix(sshd:session): session closed for user core May 27 03:35:53.602002 systemd[1]: sshd@72-157.180.65.55:22-139.178.89.65:48754.service: Deactivated successfully. May 27 03:35:53.606076 systemd[1]: session-70.scope: Deactivated successfully. May 27 03:35:53.608922 systemd-logind[1551]: Session 70 logged out. Waiting for processes to exit. May 27 03:35:53.613052 systemd-logind[1551]: Removed session 70. 
May 27 03:35:55.020703 kubelet[2917]: E0527 03:35:55.020601 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-xwqrr" podUID="9c59cb9e-2eb7-4e20-b889-53cd0bb9e4ed" May 27 03:35:55.415626 containerd[1560]: time="2025-05-27T03:35:55.415545109Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6987d0fe81036d50ab8ef5c9659e8d4befdda41d53233178124e5de21b1dc1b4\" id:\"aa85c464737d73020465b7762bbcd8b185a048800e7cf59558d4d5209dd9ef6a\" pid:7701 exited_at:{seconds:1748316955 nanos:415016686}" May 27 03:35:57.317019 containerd[1560]: time="2025-05-27T03:35:57.316958227Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6987d0fe81036d50ab8ef5c9659e8d4befdda41d53233178124e5de21b1dc1b4\" id:\"285dec7cba09ac0fff8d2b3830f489fc152a5dbe214f8da9fb3522855d4f1423\" pid:7724 exited_at:{seconds:1748316957 nanos:316733515}" May 27 03:35:58.768007 systemd[1]: Started sshd@73-157.180.65.55:22-139.178.89.65:52964.service - OpenSSH per-connection server daemon (139.178.89.65:52964). 
May 27 03:35:59.023141 kubelet[2917]: E0527 03:35:59.022733 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-555bcbc6ff-596vx" podUID="20923581-35ae-477b-83e9-35d75acd3c66" May 27 03:35:59.754862 sshd[7734]: Accepted publickey for core from 139.178.89.65 port 52964 ssh2: RSA SHA256:TYEcoyDNyh3JcDi4HtBO5FTvj/3v8ZbDC5jXQ/LuvgI May 27 03:35:59.760849 sshd-session[7734]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:35:59.769428 systemd-logind[1551]: New session 71 of user core. May 27 03:35:59.778558 systemd[1]: Started session-71.scope - Session 71 of User core. May 27 03:36:01.238729 sshd[7736]: Connection closed by 139.178.89.65 port 52964 May 27 03:36:01.242697 sshd-session[7734]: pam_unix(sshd:session): session closed for user core May 27 03:36:01.251264 systemd[1]: sshd@73-157.180.65.55:22-139.178.89.65:52964.service: Deactivated successfully. May 27 03:36:01.253086 systemd[1]: session-71.scope: Deactivated successfully. May 27 03:36:01.254209 systemd-logind[1551]: Session 71 logged out. Waiting for processes to exit. May 27 03:36:01.255697 systemd-logind[1551]: Removed session 71. May 27 03:36:06.409725 systemd[1]: Started sshd@74-157.180.65.55:22-139.178.89.65:47198.service - OpenSSH per-connection server daemon (139.178.89.65:47198). 
May 27 03:36:07.021871 kubelet[2917]: E0527 03:36:07.021772 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-xwqrr" podUID="9c59cb9e-2eb7-4e20-b889-53cd0bb9e4ed" May 27 03:36:07.435288 sshd[7750]: Accepted publickey for core from 139.178.89.65 port 47198 ssh2: RSA SHA256:TYEcoyDNyh3JcDi4HtBO5FTvj/3v8ZbDC5jXQ/LuvgI May 27 03:36:07.438781 sshd-session[7750]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:36:07.445665 systemd-logind[1551]: New session 72 of user core. May 27 03:36:07.452513 systemd[1]: Started session-72.scope - Session 72 of User core. May 27 03:36:08.244754 sshd[7752]: Connection closed by 139.178.89.65 port 47198 May 27 03:36:08.245768 sshd-session[7750]: pam_unix(sshd:session): session closed for user core May 27 03:36:08.251676 systemd[1]: sshd@74-157.180.65.55:22-139.178.89.65:47198.service: Deactivated successfully. May 27 03:36:08.255787 systemd[1]: session-72.scope: Deactivated successfully. May 27 03:36:08.257820 systemd-logind[1551]: Session 72 logged out. Waiting for processes to exit. May 27 03:36:08.261586 systemd-logind[1551]: Removed session 72. 
May 27 03:36:10.034317 kubelet[2917]: E0527 03:36:10.033677 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-555bcbc6ff-596vx" podUID="20923581-35ae-477b-83e9-35d75acd3c66" May 27 03:36:13.423547 systemd[1]: Started sshd@75-157.180.65.55:22-139.178.89.65:37792.service - OpenSSH per-connection server daemon (139.178.89.65:37792). May 27 03:36:14.435399 sshd[7764]: Accepted publickey for core from 139.178.89.65 port 37792 ssh2: RSA SHA256:TYEcoyDNyh3JcDi4HtBO5FTvj/3v8ZbDC5jXQ/LuvgI May 27 03:36:14.441532 sshd-session[7764]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:36:14.447906 systemd-logind[1551]: New session 73 of user core. May 27 03:36:14.452569 systemd[1]: Started session-73.scope - Session 73 of User core. May 27 03:36:15.489214 sshd[7766]: Connection closed by 139.178.89.65 port 37792 May 27 03:36:15.490129 sshd-session[7764]: pam_unix(sshd:session): session closed for user core May 27 03:36:15.496260 systemd[1]: sshd@75-157.180.65.55:22-139.178.89.65:37792.service: Deactivated successfully. May 27 03:36:15.499071 systemd[1]: session-73.scope: Deactivated successfully. May 27 03:36:15.501429 systemd-logind[1551]: Session 73 logged out. Waiting for processes to exit. May 27 03:36:15.504344 systemd-logind[1551]: Removed session 73. 
May 27 03:36:20.021633 kubelet[2917]: E0527 03:36:20.021337 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-xwqrr" podUID="9c59cb9e-2eb7-4e20-b889-53cd0bb9e4ed" May 27 03:36:20.670212 systemd[1]: Started sshd@76-157.180.65.55:22-139.178.89.65:37800.service - OpenSSH per-connection server daemon (139.178.89.65:37800). May 27 03:36:20.674680 systemd[1]: Starting systemd-tmpfiles-clean.service - Cleanup of Temporary Directories... May 27 03:36:20.720035 systemd-tmpfiles[7778]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. May 27 03:36:20.720060 systemd-tmpfiles[7778]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. May 27 03:36:20.720865 systemd-tmpfiles[7778]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 27 03:36:20.721581 systemd-tmpfiles[7778]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 27 03:36:20.723239 systemd-tmpfiles[7778]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 27 03:36:20.723748 systemd-tmpfiles[7778]: ACLs are not supported, ignoring. May 27 03:36:20.723831 systemd-tmpfiles[7778]: ACLs are not supported, ignoring. May 27 03:36:20.729987 systemd-tmpfiles[7778]: Detected autofs mount point /boot during canonicalization of boot. May 27 03:36:20.730004 systemd-tmpfiles[7778]: Skipping /boot May 27 03:36:20.737971 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully. May 27 03:36:20.738559 systemd[1]: Finished systemd-tmpfiles-clean.service - Cleanup of Temporary Directories. May 27 03:36:20.745609 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully. 
May 27 03:36:21.663246 sshd[7777]: Accepted publickey for core from 139.178.89.65 port 37800 ssh2: RSA SHA256:TYEcoyDNyh3JcDi4HtBO5FTvj/3v8ZbDC5jXQ/LuvgI May 27 03:36:21.665770 sshd-session[7777]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:36:21.673940 systemd-logind[1551]: New session 74 of user core. May 27 03:36:21.678568 systemd[1]: Started session-74.scope - Session 74 of User core. May 27 03:36:21.906673 containerd[1560]: time="2025-05-27T03:36:21.906604412Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ca116aa036eb736891f311278a152a7e5a51b4c41f55c788135f885583d90f6b\" id:\"bd22109fe69c16e7cc5eada3a3543654b3e9a92c418b7006df67aca247367df7\" pid:7795 exited_at:{seconds:1748316981 nanos:905870033}" May 27 03:36:23.097562 sshd[7782]: Connection closed by 139.178.89.65 port 37800 May 27 03:36:23.101140 sshd-session[7777]: pam_unix(sshd:session): session closed for user core May 27 03:36:23.112908 systemd[1]: sshd@76-157.180.65.55:22-139.178.89.65:37800.service: Deactivated successfully. May 27 03:36:23.116914 systemd[1]: session-74.scope: Deactivated successfully. May 27 03:36:23.118083 systemd-logind[1551]: Session 74 logged out. Waiting for processes to exit. May 27 03:36:23.120519 systemd-logind[1551]: Removed session 74. 
May 27 03:36:24.035082 kubelet[2917]: E0527 03:36:24.034946 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-555bcbc6ff-596vx" podUID="20923581-35ae-477b-83e9-35d75acd3c66" May 27 03:36:27.303740 containerd[1560]: time="2025-05-27T03:36:27.303595132Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6987d0fe81036d50ab8ef5c9659e8d4befdda41d53233178124e5de21b1dc1b4\" id:\"d6ce31451be0aaa675424a201e8f2f1bafaeeec08695b7c64b6bd99bb7da7903\" pid:7828 exited_at:{seconds:1748316987 nanos:303270252}" May 27 03:36:28.272035 systemd[1]: Started sshd@77-157.180.65.55:22-139.178.89.65:36210.service - OpenSSH per-connection server daemon (139.178.89.65:36210). May 27 03:36:29.331276 sshd[7838]: Accepted publickey for core from 139.178.89.65 port 36210 ssh2: RSA SHA256:TYEcoyDNyh3JcDi4HtBO5FTvj/3v8ZbDC5jXQ/LuvgI May 27 03:36:29.333030 sshd-session[7838]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:36:29.338157 systemd-logind[1551]: New session 75 of user core. May 27 03:36:29.343533 systemd[1]: Started session-75.scope - Session 75 of User core. May 27 03:36:30.171919 sshd[7840]: Connection closed by 139.178.89.65 port 36210 May 27 03:36:30.172719 sshd-session[7838]: pam_unix(sshd:session): session closed for user core May 27 03:36:30.179902 systemd[1]: sshd@77-157.180.65.55:22-139.178.89.65:36210.service: Deactivated successfully. May 27 03:36:30.183948 systemd[1]: session-75.scope: Deactivated successfully. May 27 03:36:30.186939 systemd-logind[1551]: Session 75 logged out. Waiting for processes to exit. May 27 03:36:30.189440 systemd-logind[1551]: Removed session 75. 
May 27 03:36:34.020683 kubelet[2917]: E0527 03:36:34.020588 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-xwqrr" podUID="9c59cb9e-2eb7-4e20-b889-53cd0bb9e4ed" May 27 03:36:35.358790 systemd[1]: Started sshd@78-157.180.65.55:22-139.178.89.65:49054.service - OpenSSH per-connection server daemon (139.178.89.65:49054). May 27 03:36:36.373806 sshd[7854]: Accepted publickey for core from 139.178.89.65 port 49054 ssh2: RSA SHA256:TYEcoyDNyh3JcDi4HtBO5FTvj/3v8ZbDC5jXQ/LuvgI May 27 03:36:36.375714 sshd-session[7854]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:36:36.379886 systemd-logind[1551]: New session 76 of user core. May 27 03:36:36.386441 systemd[1]: Started session-76.scope - Session 76 of User core. May 27 03:36:37.022773 kubelet[2917]: E0527 03:36:37.022568 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-555bcbc6ff-596vx" podUID="20923581-35ae-477b-83e9-35d75acd3c66" May 27 03:36:37.257550 sshd[7856]: Connection closed by 139.178.89.65 port 49054 May 27 03:36:37.261510 sshd-session[7854]: pam_unix(sshd:session): session closed for user core May 27 03:36:37.267191 systemd-logind[1551]: Session 76 logged out. Waiting for processes to exit. May 27 03:36:37.267914 systemd[1]: sshd@78-157.180.65.55:22-139.178.89.65:49054.service: Deactivated successfully. May 27 03:36:37.270755 systemd[1]: session-76.scope: Deactivated successfully. 
May 27 03:36:37.274763 systemd-logind[1551]: Removed session 76. May 27 03:36:42.439691 systemd[1]: Started sshd@79-157.180.65.55:22-139.178.89.65:49058.service - OpenSSH per-connection server daemon (139.178.89.65:49058). May 27 03:36:43.454149 sshd[7869]: Accepted publickey for core from 139.178.89.65 port 49058 ssh2: RSA SHA256:TYEcoyDNyh3JcDi4HtBO5FTvj/3v8ZbDC5jXQ/LuvgI May 27 03:36:43.456042 sshd-session[7869]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:36:43.466417 systemd-logind[1551]: New session 77 of user core. May 27 03:36:43.472582 systemd[1]: Started session-77.scope - Session 77 of User core. May 27 03:36:44.217936 sshd[7871]: Connection closed by 139.178.89.65 port 49058 May 27 03:36:44.218745 sshd-session[7869]: pam_unix(sshd:session): session closed for user core May 27 03:36:44.223134 systemd-logind[1551]: Session 77 logged out. Waiting for processes to exit. May 27 03:36:44.223915 systemd[1]: sshd@79-157.180.65.55:22-139.178.89.65:49058.service: Deactivated successfully. May 27 03:36:44.226256 systemd[1]: session-77.scope: Deactivated successfully. May 27 03:36:44.228782 systemd-logind[1551]: Removed session 77. May 27 03:36:47.020598 kubelet[2917]: E0527 03:36:47.020506 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-xwqrr" podUID="9c59cb9e-2eb7-4e20-b889-53cd0bb9e4ed" May 27 03:36:49.394879 systemd[1]: Started sshd@80-157.180.65.55:22-139.178.89.65:33786.service - OpenSSH per-connection server daemon (139.178.89.65:33786). 
May 27 03:36:50.022394 kubelet[2917]: E0527 03:36:50.022192 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-555bcbc6ff-596vx" podUID="20923581-35ae-477b-83e9-35d75acd3c66" May 27 03:36:50.377070 sshd[7884]: Accepted publickey for core from 139.178.89.65 port 33786 ssh2: RSA SHA256:TYEcoyDNyh3JcDi4HtBO5FTvj/3v8ZbDC5jXQ/LuvgI May 27 03:36:50.379670 sshd-session[7884]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:36:50.389437 systemd-logind[1551]: New session 78 of user core. May 27 03:36:50.393606 systemd[1]: Started session-78.scope - Session 78 of User core. May 27 03:36:51.498518 sshd[7886]: Connection closed by 139.178.89.65 port 33786 May 27 03:36:51.499614 sshd-session[7884]: pam_unix(sshd:session): session closed for user core May 27 03:36:51.506673 systemd-logind[1551]: Session 78 logged out. Waiting for processes to exit. May 27 03:36:51.506766 systemd[1]: sshd@80-157.180.65.55:22-139.178.89.65:33786.service: Deactivated successfully. May 27 03:36:51.511047 systemd[1]: session-78.scope: Deactivated successfully. May 27 03:36:51.514498 systemd-logind[1551]: Removed session 78. 
May 27 03:36:51.941200 containerd[1560]: time="2025-05-27T03:36:51.941077086Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ca116aa036eb736891f311278a152a7e5a51b4c41f55c788135f885583d90f6b\" id:\"bfd6bd43925c192c741bb1b05294c5e4d41ab29f1ddf8b024640286e09e5c262\" pid:7909 exited_at:{seconds:1748317011 nanos:940489733}" May 27 03:36:55.408529 containerd[1560]: time="2025-05-27T03:36:55.408287819Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6987d0fe81036d50ab8ef5c9659e8d4befdda41d53233178124e5de21b1dc1b4\" id:\"0c1ea1b51e543aac51108bf078b3f6589da5f115328de563427e15a7f457639d\" pid:7936 exited_at:{seconds:1748317015 nanos:407789854}" May 27 03:36:56.671857 systemd[1]: Started sshd@81-157.180.65.55:22-139.178.89.65:44502.service - OpenSSH per-connection server daemon (139.178.89.65:44502). May 27 03:36:57.309622 containerd[1560]: time="2025-05-27T03:36:57.309560835Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6987d0fe81036d50ab8ef5c9659e8d4befdda41d53233178124e5de21b1dc1b4\" id:\"ab563d60d5770e0e123ce8e682f82f87f7fbabc06b792dbeffbd785d27a02588\" pid:7961 exited_at:{seconds:1748317017 nanos:308950880}" May 27 03:36:57.709835 sshd[7946]: Accepted publickey for core from 139.178.89.65 port 44502 ssh2: RSA SHA256:TYEcoyDNyh3JcDi4HtBO5FTvj/3v8ZbDC5jXQ/LuvgI May 27 03:36:57.712999 sshd-session[7946]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:36:57.720453 systemd-logind[1551]: New session 79 of user core. May 27 03:36:57.727525 systemd[1]: Started session-79.scope - Session 79 of User core. May 27 03:36:59.087095 sshd[7970]: Connection closed by 139.178.89.65 port 44502 May 27 03:36:59.088023 sshd-session[7946]: pam_unix(sshd:session): session closed for user core May 27 03:36:59.093459 systemd[1]: sshd@81-157.180.65.55:22-139.178.89.65:44502.service: Deactivated successfully. May 27 03:36:59.096953 systemd[1]: session-79.scope: Deactivated successfully. 
May 27 03:36:59.101481 systemd-logind[1551]: Session 79 logged out. Waiting for processes to exit. May 27 03:36:59.104398 systemd-logind[1551]: Removed session 79. May 27 03:37:01.022212 kubelet[2917]: E0527 03:37:01.022094 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-555bcbc6ff-596vx" podUID="20923581-35ae-477b-83e9-35d75acd3c66" May 27 03:37:02.020642 kubelet[2917]: E0527 03:37:02.020348 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-xwqrr" podUID="9c59cb9e-2eb7-4e20-b889-53cd0bb9e4ed" May 27 03:37:04.269799 systemd[1]: Started sshd@82-157.180.65.55:22-139.178.89.65:37824.service - OpenSSH per-connection server daemon (139.178.89.65:37824). May 27 03:37:05.269858 sshd[7984]: Accepted publickey for core from 139.178.89.65 port 37824 ssh2: RSA SHA256:TYEcoyDNyh3JcDi4HtBO5FTvj/3v8ZbDC5jXQ/LuvgI May 27 03:37:05.272939 sshd-session[7984]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:37:05.283183 systemd-logind[1551]: New session 80 of user core. May 27 03:37:05.288589 systemd[1]: Started session-80.scope - Session 80 of User core. May 27 03:37:06.103209 sshd[7986]: Connection closed by 139.178.89.65 port 37824 May 27 03:37:06.104612 sshd-session[7984]: pam_unix(sshd:session): session closed for user core May 27 03:37:06.110490 systemd[1]: sshd@82-157.180.65.55:22-139.178.89.65:37824.service: Deactivated successfully. 
May 27 03:37:06.112161 systemd[1]: session-80.scope: Deactivated successfully. May 27 03:37:06.113706 systemd-logind[1551]: Session 80 logged out. Waiting for processes to exit. May 27 03:37:06.115583 systemd-logind[1551]: Removed session 80. May 27 03:37:06.275123 systemd[1]: Started sshd@83-157.180.65.55:22-139.178.89.65:37840.service - OpenSSH per-connection server daemon (139.178.89.65:37840). May 27 03:37:07.272943 sshd[7998]: Accepted publickey for core from 139.178.89.65 port 37840 ssh2: RSA SHA256:TYEcoyDNyh3JcDi4HtBO5FTvj/3v8ZbDC5jXQ/LuvgI May 27 03:37:07.274048 sshd-session[7998]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:37:07.279477 systemd-logind[1551]: New session 81 of user core. May 27 03:37:07.285447 systemd[1]: Started session-81.scope - Session 81 of User core. May 27 03:37:08.238951 sshd[8000]: Connection closed by 139.178.89.65 port 37840 May 27 03:37:08.241956 sshd-session[7998]: pam_unix(sshd:session): session closed for user core May 27 03:37:08.248809 systemd-logind[1551]: Session 81 logged out. Waiting for processes to exit. May 27 03:37:08.250251 systemd[1]: sshd@83-157.180.65.55:22-139.178.89.65:37840.service: Deactivated successfully. May 27 03:37:08.254874 systemd[1]: session-81.scope: Deactivated successfully. May 27 03:37:08.260567 systemd-logind[1551]: Removed session 81. May 27 03:37:08.412243 systemd[1]: Started sshd@84-157.180.65.55:22-139.178.89.65:37844.service - OpenSSH per-connection server daemon (139.178.89.65:37844). May 27 03:37:09.452340 sshd[8010]: Accepted publickey for core from 139.178.89.65 port 37844 ssh2: RSA SHA256:TYEcoyDNyh3JcDi4HtBO5FTvj/3v8ZbDC5jXQ/LuvgI May 27 03:37:09.455091 sshd-session[8010]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:37:09.460989 systemd-logind[1551]: New session 82 of user core. May 27 03:37:09.466506 systemd[1]: Started session-82.scope - Session 82 of User core. 
May 27 03:37:12.719694 sshd[8013]: Connection closed by 139.178.89.65 port 37844 May 27 03:37:12.743695 sshd-session[8010]: pam_unix(sshd:session): session closed for user core May 27 03:37:12.767234 systemd[1]: sshd@84-157.180.65.55:22-139.178.89.65:37844.service: Deactivated successfully. May 27 03:37:12.769920 systemd[1]: session-82.scope: Deactivated successfully. May 27 03:37:12.770206 systemd[1]: session-82.scope: Consumed 605ms CPU time, 87.2M memory peak. May 27 03:37:12.771379 systemd-logind[1551]: Session 82 logged out. Waiting for processes to exit. May 27 03:37:12.774185 systemd-logind[1551]: Removed session 82. May 27 03:37:12.893584 systemd[1]: Started sshd@85-157.180.65.55:22-139.178.89.65:37850.service - OpenSSH per-connection server daemon (139.178.89.65:37850). May 27 03:37:13.187477 kubelet[2917]: E0527 03:37:13.187409 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-xwqrr" podUID="9c59cb9e-2eb7-4e20-b889-53cd0bb9e4ed" May 27 03:37:13.897603 sshd[8032]: Accepted publickey for core from 139.178.89.65 port 37850 ssh2: RSA SHA256:TYEcoyDNyh3JcDi4HtBO5FTvj/3v8ZbDC5jXQ/LuvgI May 27 03:37:13.900089 sshd-session[8032]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:37:13.908827 systemd-logind[1551]: New session 83 of user core. May 27 03:37:13.916673 systemd[1]: Started session-83.scope - Session 83 of User core. May 27 03:37:15.754888 sshd[8034]: Connection closed by 139.178.89.65 port 37850 May 27 03:37:15.755949 sshd-session[8032]: pam_unix(sshd:session): session closed for user core May 27 03:37:15.762273 systemd[1]: sshd@85-157.180.65.55:22-139.178.89.65:37850.service: Deactivated successfully. May 27 03:37:15.766956 systemd[1]: session-83.scope: Deactivated successfully. 
May 27 03:37:15.767248 systemd[1]: session-83.scope: Consumed 870ms CPU time, 68.4M memory peak. May 27 03:37:15.771553 systemd-logind[1551]: Session 83 logged out. Waiting for processes to exit. May 27 03:37:15.775294 systemd-logind[1551]: Removed session 83. May 27 03:37:15.931539 systemd[1]: Started sshd@86-157.180.65.55:22-139.178.89.65:43852.service - OpenSSH per-connection server daemon (139.178.89.65:43852). May 27 03:37:16.068275 kubelet[2917]: E0527 03:37:16.065810 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-555bcbc6ff-596vx" podUID="20923581-35ae-477b-83e9-35d75acd3c66" May 27 03:37:17.014084 sshd[8044]: Accepted publickey for core from 139.178.89.65 port 43852 ssh2: RSA SHA256:TYEcoyDNyh3JcDi4HtBO5FTvj/3v8ZbDC5jXQ/LuvgI May 27 03:37:17.017211 sshd-session[8044]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:37:17.022972 systemd-logind[1551]: New session 84 of user core. May 27 03:37:17.026498 systemd[1]: Started session-84.scope - Session 84 of User core. May 27 03:37:17.852323 sshd[8046]: Connection closed by 139.178.89.65 port 43852 May 27 03:37:17.852873 sshd-session[8044]: pam_unix(sshd:session): session closed for user core May 27 03:37:17.855895 systemd[1]: sshd@86-157.180.65.55:22-139.178.89.65:43852.service: Deactivated successfully. May 27 03:37:17.857595 systemd[1]: session-84.scope: Deactivated successfully. May 27 03:37:17.859028 systemd-logind[1551]: Session 84 logged out. Waiting for processes to exit. May 27 03:37:17.859909 systemd-logind[1551]: Removed session 84. 
May 27 03:37:22.619686 containerd[1560]: time="2025-05-27T03:37:22.619582962Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ca116aa036eb736891f311278a152a7e5a51b4c41f55c788135f885583d90f6b\" id:\"5d4312a68772a8a5cd5472fecf31921ea8057612abdb4e3cb4665eb1619d053b\" pid:8089 exited_at:{seconds:1748317042 nanos:527262284}" May 27 03:37:23.032084 systemd[1]: Started sshd@87-157.180.65.55:22-139.178.89.65:43868.service - OpenSSH per-connection server daemon (139.178.89.65:43868). May 27 03:37:24.054914 kubelet[2917]: E0527 03:37:24.053830 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-xwqrr" podUID="9c59cb9e-2eb7-4e20-b889-53cd0bb9e4ed" May 27 03:37:24.096007 sshd[8110]: Accepted publickey for core from 139.178.89.65 port 43868 ssh2: RSA SHA256:TYEcoyDNyh3JcDi4HtBO5FTvj/3v8ZbDC5jXQ/LuvgI May 27 03:37:24.099672 sshd-session[8110]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:37:24.109432 systemd-logind[1551]: New session 85 of user core. May 27 03:37:24.115571 systemd[1]: Started session-85.scope - Session 85 of User core. May 27 03:37:25.354872 sshd[8112]: Connection closed by 139.178.89.65 port 43868 May 27 03:37:25.362141 sshd-session[8110]: pam_unix(sshd:session): session closed for user core May 27 03:37:25.378116 systemd-logind[1551]: Session 85 logged out. Waiting for processes to exit. May 27 03:37:25.378358 systemd[1]: sshd@87-157.180.65.55:22-139.178.89.65:43868.service: Deactivated successfully. May 27 03:37:25.383280 systemd[1]: session-85.scope: Deactivated successfully. May 27 03:37:25.387110 systemd-logind[1551]: Removed session 85. 
May 27 03:37:27.342987 containerd[1560]: time="2025-05-27T03:37:27.333809305Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6987d0fe81036d50ab8ef5c9659e8d4befdda41d53233178124e5de21b1dc1b4\" id:\"036e06e443a44bb13e5328cecda9e5f25148fba4acf420f71a954dae0ebc588e\" pid:8136 exited_at:{seconds:1748317047 nanos:333204679}"
May 27 03:37:29.021977 kubelet[2917]: E0527 03:37:29.021846 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-555bcbc6ff-596vx" podUID="20923581-35ae-477b-83e9-35d75acd3c66"
May 27 03:37:30.524287 systemd[1]: Started sshd@88-157.180.65.55:22-139.178.89.65:38578.service - OpenSSH per-connection server daemon (139.178.89.65:38578).
May 27 03:37:31.537781 sshd[8146]: Accepted publickey for core from 139.178.89.65 port 38578 ssh2: RSA SHA256:TYEcoyDNyh3JcDi4HtBO5FTvj/3v8ZbDC5jXQ/LuvgI
May 27 03:37:31.541793 sshd-session[8146]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:37:31.551247 systemd-logind[1551]: New session 86 of user core.
May 27 03:37:31.559245 systemd[1]: Started session-86.scope - Session 86 of User core.
May 27 03:37:32.819775 sshd[8150]: Connection closed by 139.178.89.65 port 38578
May 27 03:37:32.823235 sshd-session[8146]: pam_unix(sshd:session): session closed for user core
May 27 03:37:32.845932 systemd[1]: sshd@88-157.180.65.55:22-139.178.89.65:38578.service: Deactivated successfully.
May 27 03:37:32.846412 systemd-logind[1551]: Session 86 logged out. Waiting for processes to exit.
May 27 03:37:32.850359 systemd[1]: session-86.scope: Deactivated successfully.
May 27 03:37:32.852845 systemd-logind[1551]: Removed session 86.
May 27 03:37:35.088014 kubelet[2917]: E0527 03:37:35.087936 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-xwqrr" podUID="9c59cb9e-2eb7-4e20-b889-53cd0bb9e4ed"
May 27 03:37:37.995852 systemd[1]: Started sshd@89-157.180.65.55:22-139.178.89.65:45690.service - OpenSSH per-connection server daemon (139.178.89.65:45690).
May 27 03:37:39.065272 sshd[8162]: Accepted publickey for core from 139.178.89.65 port 45690 ssh2: RSA SHA256:TYEcoyDNyh3JcDi4HtBO5FTvj/3v8ZbDC5jXQ/LuvgI
May 27 03:37:39.067183 sshd-session[8162]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:37:39.074373 systemd-logind[1551]: New session 87 of user core.
May 27 03:37:39.081430 systemd[1]: Started session-87.scope - Session 87 of User core.
May 27 03:37:40.026533 kubelet[2917]: E0527 03:37:40.026471 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-555bcbc6ff-596vx" podUID="20923581-35ae-477b-83e9-35d75acd3c66"
May 27 03:37:40.074019 sshd[8164]: Connection closed by 139.178.89.65 port 45690
May 27 03:37:40.077084 sshd-session[8162]: pam_unix(sshd:session): session closed for user core
May 27 03:37:40.080587 systemd-logind[1551]: Session 87 logged out. Waiting for processes to exit.
May 27 03:37:40.082628 systemd[1]: sshd@89-157.180.65.55:22-139.178.89.65:45690.service: Deactivated successfully.
May 27 03:37:40.086645 systemd[1]: session-87.scope: Deactivated successfully.
May 27 03:37:40.090588 systemd-logind[1551]: Removed session 87.
May 27 03:37:45.249076 systemd[1]: Started sshd@90-157.180.65.55:22-139.178.89.65:49568.service - OpenSSH per-connection server daemon (139.178.89.65:49568).
May 27 03:37:46.270560 sshd[8176]: Accepted publickey for core from 139.178.89.65 port 49568 ssh2: RSA SHA256:TYEcoyDNyh3JcDi4HtBO5FTvj/3v8ZbDC5jXQ/LuvgI
May 27 03:37:46.272953 sshd-session[8176]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:37:46.278181 systemd-logind[1551]: New session 88 of user core.
May 27 03:37:46.286528 systemd[1]: Started session-88.scope - Session 88 of User core.
May 27 03:37:47.021189 kubelet[2917]: E0527 03:37:47.020612 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-xwqrr" podUID="9c59cb9e-2eb7-4e20-b889-53cd0bb9e4ed"
May 27 03:37:47.420255 sshd[8178]: Connection closed by 139.178.89.65 port 49568
May 27 03:37:47.421212 sshd-session[8176]: pam_unix(sshd:session): session closed for user core
May 27 03:37:47.427228 systemd[1]: sshd@90-157.180.65.55:22-139.178.89.65:49568.service: Deactivated successfully.
May 27 03:37:47.431278 systemd[1]: session-88.scope: Deactivated successfully.
May 27 03:37:47.433428 systemd-logind[1551]: Session 88 logged out. Waiting for processes to exit.
May 27 03:37:47.435999 systemd-logind[1551]: Removed session 88.
May 27 03:37:52.335669 containerd[1560]: time="2025-05-27T03:37:52.335583133Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ca116aa036eb736891f311278a152a7e5a51b4c41f55c788135f885583d90f6b\" id:\"61eb3fef8d5f4a8a4fbe8628a41eb475fc4e8b5ecf36b22d271e486e81cf5a99\" pid:8201 exited_at:{seconds:1748317072 nanos:191030290}"
May 27 03:37:52.601554 systemd[1]: Started sshd@91-157.180.65.55:22-139.178.89.65:49574.service - OpenSSH per-connection server daemon (139.178.89.65:49574).
May 27 03:37:53.020967 kubelet[2917]: E0527 03:37:53.020836 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-555bcbc6ff-596vx" podUID="20923581-35ae-477b-83e9-35d75acd3c66"
May 27 03:37:53.682813 sshd[8216]: Accepted publickey for core from 139.178.89.65 port 49574 ssh2: RSA SHA256:TYEcoyDNyh3JcDi4HtBO5FTvj/3v8ZbDC5jXQ/LuvgI
May 27 03:37:53.687439 sshd-session[8216]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:37:53.695971 systemd-logind[1551]: New session 89 of user core.
May 27 03:37:53.702485 systemd[1]: Started session-89.scope - Session 89 of User core.
May 27 03:37:55.429696 containerd[1560]: time="2025-05-27T03:37:55.429354699Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6987d0fe81036d50ab8ef5c9659e8d4befdda41d53233178124e5de21b1dc1b4\" id:\"cf913191872dd9ffa7189c8c3288d68eda0dc78dc9d31a7de2645845125b6908\" pid:8240 exited_at:{seconds:1748317075 nanos:428644084}"
May 27 03:37:55.676680 sshd[8218]: Connection closed by 139.178.89.65 port 49574
May 27 03:37:55.680695 sshd-session[8216]: pam_unix(sshd:session): session closed for user core
May 27 03:37:55.691113 systemd[1]: sshd@91-157.180.65.55:22-139.178.89.65:49574.service: Deactivated successfully.
May 27 03:37:55.696056 systemd[1]: session-89.scope: Deactivated successfully.
May 27 03:37:55.706529 systemd-logind[1551]: Session 89 logged out. Waiting for processes to exit.
May 27 03:37:55.709690 systemd-logind[1551]: Removed session 89.
May 27 03:37:57.330549 containerd[1560]: time="2025-05-27T03:37:57.330496183Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6987d0fe81036d50ab8ef5c9659e8d4befdda41d53233178124e5de21b1dc1b4\" id:\"8223df6751bc5b414fb7fd34aaeffd63e304c145b38f995316a003942f723789\" pid:8265 exited_at:{seconds:1748317077 nanos:329824172}"
May 27 03:38:00.034085 kubelet[2917]: E0527 03:38:00.034003 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-xwqrr" podUID="9c59cb9e-2eb7-4e20-b889-53cd0bb9e4ed"
May 27 03:38:00.848003 systemd[1]: Started sshd@92-157.180.65.55:22-139.178.89.65:57392.service - OpenSSH per-connection server daemon (139.178.89.65:57392).
May 27 03:38:01.859650 sshd[8274]: Accepted publickey for core from 139.178.89.65 port 57392 ssh2: RSA SHA256:TYEcoyDNyh3JcDi4HtBO5FTvj/3v8ZbDC5jXQ/LuvgI
May 27 03:38:01.861897 sshd-session[8274]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:38:01.870004 systemd-logind[1551]: New session 90 of user core.
May 27 03:38:01.879587 systemd[1]: Started session-90.scope - Session 90 of User core.
May 27 03:38:02.870281 sshd[8278]: Connection closed by 139.178.89.65 port 57392
May 27 03:38:02.871412 sshd-session[8274]: pam_unix(sshd:session): session closed for user core
May 27 03:38:02.878661 systemd[1]: sshd@92-157.180.65.55:22-139.178.89.65:57392.service: Deactivated successfully.
May 27 03:38:02.881240 systemd[1]: session-90.scope: Deactivated successfully.
May 27 03:38:02.882968 systemd-logind[1551]: Session 90 logged out. Waiting for processes to exit.
May 27 03:38:02.885410 systemd-logind[1551]: Removed session 90.
May 27 03:38:06.023210 kubelet[2917]: E0527 03:38:06.023047 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-555bcbc6ff-596vx" podUID="20923581-35ae-477b-83e9-35d75acd3c66"
May 27 03:38:08.044694 systemd[1]: Started sshd@93-157.180.65.55:22-139.178.89.65:45844.service - OpenSSH per-connection server daemon (139.178.89.65:45844).
May 27 03:38:09.060487 sshd[8290]: Accepted publickey for core from 139.178.89.65 port 45844 ssh2: RSA SHA256:TYEcoyDNyh3JcDi4HtBO5FTvj/3v8ZbDC5jXQ/LuvgI
May 27 03:38:09.062836 sshd-session[8290]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:38:09.075381 systemd-logind[1551]: New session 91 of user core.
May 27 03:38:09.078673 systemd[1]: Started session-91.scope - Session 91 of User core.
May 27 03:38:10.071086 sshd[8292]: Connection closed by 139.178.89.65 port 45844
May 27 03:38:10.072935 sshd-session[8290]: pam_unix(sshd:session): session closed for user core
May 27 03:38:10.082242 systemd[1]: sshd@93-157.180.65.55:22-139.178.89.65:45844.service: Deactivated successfully.
May 27 03:38:10.085898 systemd[1]: session-91.scope: Deactivated successfully.
May 27 03:38:10.088388 systemd-logind[1551]: Session 91 logged out. Waiting for processes to exit.
May 27 03:38:10.091335 systemd-logind[1551]: Removed session 91.
May 27 03:38:11.020620 kubelet[2917]: E0527 03:38:11.020572 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-xwqrr" podUID="9c59cb9e-2eb7-4e20-b889-53cd0bb9e4ed"
May 27 03:38:15.244021 systemd[1]: Started sshd@94-157.180.65.55:22-139.178.89.65:49616.service - OpenSSH per-connection server daemon (139.178.89.65:49616).
May 27 03:38:16.266951 sshd[8304]: Accepted publickey for core from 139.178.89.65 port 49616 ssh2: RSA SHA256:TYEcoyDNyh3JcDi4HtBO5FTvj/3v8ZbDC5jXQ/LuvgI
May 27 03:38:16.269242 sshd-session[8304]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:38:16.276424 systemd-logind[1551]: New session 92 of user core.
May 27 03:38:16.282610 systemd[1]: Started session-92.scope - Session 92 of User core.
May 27 03:38:17.063411 sshd[8306]: Connection closed by 139.178.89.65 port 49616
May 27 03:38:17.064596 sshd-session[8304]: pam_unix(sshd:session): session closed for user core
May 27 03:38:17.069267 systemd-logind[1551]: Session 92 logged out. Waiting for processes to exit.
May 27 03:38:17.069424 systemd[1]: sshd@94-157.180.65.55:22-139.178.89.65:49616.service: Deactivated successfully.
May 27 03:38:17.071810 systemd[1]: session-92.scope: Deactivated successfully.
May 27 03:38:17.073781 systemd-logind[1551]: Removed session 92.
May 27 03:38:19.021860 kubelet[2917]: E0527 03:38:19.021793 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-555bcbc6ff-596vx" podUID="20923581-35ae-477b-83e9-35d75acd3c66"
May 27 03:38:22.048100 containerd[1560]: time="2025-05-27T03:38:22.048052973Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ca116aa036eb736891f311278a152a7e5a51b4c41f55c788135f885583d90f6b\" id:\"af4a627c6f49128945783ca9a464e0f3f8c16bab591a0ea33e4dd38256198853\" pid:8336 exited_at:{seconds:1748317102 nanos:47355223}"
May 27 03:38:22.248236 systemd[1]: Started sshd@95-157.180.65.55:22-139.178.89.65:49630.service - OpenSSH per-connection server daemon (139.178.89.65:49630).
May 27 03:38:23.298990 sshd[8349]: Accepted publickey for core from 139.178.89.65 port 49630 ssh2: RSA SHA256:TYEcoyDNyh3JcDi4HtBO5FTvj/3v8ZbDC5jXQ/LuvgI
May 27 03:38:23.304248 sshd-session[8349]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:38:23.312759 systemd-logind[1551]: New session 93 of user core.
May 27 03:38:23.322707 systemd[1]: Started session-93.scope - Session 93 of User core.
May 27 03:38:24.614214 sshd[8353]: Connection closed by 139.178.89.65 port 49630
May 27 03:38:24.615254 sshd-session[8349]: pam_unix(sshd:session): session closed for user core
May 27 03:38:24.624024 systemd-logind[1551]: Session 93 logged out. Waiting for processes to exit.
May 27 03:38:24.624686 systemd[1]: sshd@95-157.180.65.55:22-139.178.89.65:49630.service: Deactivated successfully.
May 27 03:38:24.627809 systemd[1]: session-93.scope: Deactivated successfully.
May 27 03:38:24.630299 systemd-logind[1551]: Removed session 93.
May 27 03:38:25.020745 kubelet[2917]: E0527 03:38:25.020642 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-xwqrr" podUID="9c59cb9e-2eb7-4e20-b889-53cd0bb9e4ed"
May 27 03:38:27.296029 containerd[1560]: time="2025-05-27T03:38:27.295879012Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6987d0fe81036d50ab8ef5c9659e8d4befdda41d53233178124e5de21b1dc1b4\" id:\"61093bd7cf431d0a09389d68f581fbfaa7ca170fdfd6271ed06e6b01ce722864\" pid:8377 exited_at:{seconds:1748317107 nanos:295180319}"
May 27 03:38:29.789553 systemd[1]: Started sshd@96-157.180.65.55:22-139.178.89.65:34156.service - OpenSSH per-connection server daemon (139.178.89.65:34156).
May 27 03:38:30.023247 kubelet[2917]: E0527 03:38:30.022691 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-555bcbc6ff-596vx" podUID="20923581-35ae-477b-83e9-35d75acd3c66"
May 27 03:38:30.838191 sshd[8388]: Accepted publickey for core from 139.178.89.65 port 34156 ssh2: RSA SHA256:TYEcoyDNyh3JcDi4HtBO5FTvj/3v8ZbDC5jXQ/LuvgI
May 27 03:38:30.841463 sshd-session[8388]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:38:30.851630 systemd-logind[1551]: New session 94 of user core.
May 27 03:38:30.857581 systemd[1]: Started session-94.scope - Session 94 of User core.
May 27 03:38:31.999567 sshd[8390]: Connection closed by 139.178.89.65 port 34156
May 27 03:38:32.000053 sshd-session[8388]: pam_unix(sshd:session): session closed for user core
May 27 03:38:32.008703 systemd[1]: sshd@96-157.180.65.55:22-139.178.89.65:34156.service: Deactivated successfully.
May 27 03:38:32.011660 systemd[1]: session-94.scope: Deactivated successfully.
May 27 03:38:32.018601 systemd-logind[1551]: Session 94 logged out. Waiting for processes to exit.
May 27 03:38:32.020765 systemd-logind[1551]: Removed session 94.
May 27 03:38:37.170137 systemd[1]: Started sshd@97-157.180.65.55:22-139.178.89.65:60258.service - OpenSSH per-connection server daemon (139.178.89.65:60258).
May 27 03:38:38.212176 sshd[8403]: Accepted publickey for core from 139.178.89.65 port 60258 ssh2: RSA SHA256:TYEcoyDNyh3JcDi4HtBO5FTvj/3v8ZbDC5jXQ/LuvgI
May 27 03:38:38.214664 sshd-session[8403]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:38:38.223396 systemd-logind[1551]: New session 95 of user core.
May 27 03:38:38.228609 systemd[1]: Started session-95.scope - Session 95 of User core.
May 27 03:38:39.123420 sshd[8405]: Connection closed by 139.178.89.65 port 60258
May 27 03:38:39.132035 sshd-session[8403]: pam_unix(sshd:session): session closed for user core
May 27 03:38:39.145236 systemd[1]: sshd@97-157.180.65.55:22-139.178.89.65:60258.service: Deactivated successfully.
May 27 03:38:39.149126 systemd[1]: session-95.scope: Deactivated successfully.
May 27 03:38:39.150773 systemd-logind[1551]: Session 95 logged out. Waiting for processes to exit.
May 27 03:38:39.153267 systemd-logind[1551]: Removed session 95.
May 27 03:38:40.021053 kubelet[2917]: E0527 03:38:40.020697 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-xwqrr" podUID="9c59cb9e-2eb7-4e20-b889-53cd0bb9e4ed"
May 27 03:38:41.021208 kubelet[2917]: E0527 03:38:41.021155 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-555bcbc6ff-596vx" podUID="20923581-35ae-477b-83e9-35d75acd3c66"
May 27 03:38:44.292728 systemd[1]: Started sshd@98-157.180.65.55:22-139.178.89.65:52672.service - OpenSSH per-connection server daemon (139.178.89.65:52672).
May 27 03:38:45.348485 sshd[8417]: Accepted publickey for core from 139.178.89.65 port 52672 ssh2: RSA SHA256:TYEcoyDNyh3JcDi4HtBO5FTvj/3v8ZbDC5jXQ/LuvgI
May 27 03:38:45.351486 sshd-session[8417]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:38:45.359601 systemd-logind[1551]: New session 96 of user core.
May 27 03:38:45.368611 systemd[1]: Started session-96.scope - Session 96 of User core.
May 27 03:38:46.770926 sshd[8419]: Connection closed by 139.178.89.65 port 52672
May 27 03:38:46.771879 sshd-session[8417]: pam_unix(sshd:session): session closed for user core
May 27 03:38:46.778059 systemd[1]: sshd@98-157.180.65.55:22-139.178.89.65:52672.service: Deactivated successfully.
May 27 03:38:46.782057 systemd[1]: session-96.scope: Deactivated successfully.
May 27 03:38:46.784001 systemd-logind[1551]: Session 96 logged out. Waiting for processes to exit.
May 27 03:38:46.786903 systemd-logind[1551]: Removed session 96.
May 27 03:38:51.922666 containerd[1560]: time="2025-05-27T03:38:51.922608510Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ca116aa036eb736891f311278a152a7e5a51b4c41f55c788135f885583d90f6b\" id:\"f7c4c428c21664542e77fd0a450e9272fa912a17620219e7a72036916ab0cf3c\" pid:8442 exited_at:{seconds:1748317131 nanos:922268052}"
May 27 03:38:51.938175 systemd[1]: Started sshd@99-157.180.65.55:22-139.178.89.65:52678.service - OpenSSH per-connection server daemon (139.178.89.65:52678).
May 27 03:38:52.021232 kubelet[2917]: E0527 03:38:52.021177 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-555bcbc6ff-596vx" podUID="20923581-35ae-477b-83e9-35d75acd3c66"
May 27 03:38:52.976502 sshd[8454]: Accepted publickey for core from 139.178.89.65 port 52678 ssh2: RSA SHA256:TYEcoyDNyh3JcDi4HtBO5FTvj/3v8ZbDC5jXQ/LuvgI
May 27 03:38:52.980382 sshd-session[8454]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:38:52.989989 systemd-logind[1551]: New session 97 of user core.
May 27 03:38:52.994651 systemd[1]: Started session-97.scope - Session 97 of User core.
May 27 03:38:54.711846 sshd[8456]: Connection closed by 139.178.89.65 port 52678
May 27 03:38:54.713936 sshd-session[8454]: pam_unix(sshd:session): session closed for user core
May 27 03:38:54.726354 systemd[1]: sshd@99-157.180.65.55:22-139.178.89.65:52678.service: Deactivated successfully.
May 27 03:38:54.728908 systemd[1]: session-97.scope: Deactivated successfully.
May 27 03:38:54.732862 systemd-logind[1551]: Session 97 logged out. Waiting for processes to exit.
May 27 03:38:54.734632 systemd-logind[1551]: Removed session 97.
May 27 03:38:55.020016 kubelet[2917]: E0527 03:38:55.019844 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-xwqrr" podUID="9c59cb9e-2eb7-4e20-b889-53cd0bb9e4ed"
May 27 03:38:55.460744 containerd[1560]: time="2025-05-27T03:38:55.460696273Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6987d0fe81036d50ab8ef5c9659e8d4befdda41d53233178124e5de21b1dc1b4\" id:\"e1f6e7fc9d5f195ad0e0398ea33ed7c3b63ce558dc61d969b5f77a368c5dbeec\" pid:8489 exited_at:{seconds:1748317135 nanos:458613191}"
May 27 03:38:57.287560 containerd[1560]: time="2025-05-27T03:38:57.287470911Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6987d0fe81036d50ab8ef5c9659e8d4befdda41d53233178124e5de21b1dc1b4\" id:\"5ec311d815621066208242d53e71f12ceda17defda3f4c22255db4ce42cffdc8\" pid:8511 exited_at:{seconds:1748317137 nanos:286560375}"
May 27 03:38:59.887635 systemd[1]: Started sshd@100-157.180.65.55:22-139.178.89.65:57050.service - OpenSSH per-connection server daemon (139.178.89.65:57050).
May 27 03:39:00.916583 sshd[8535]: Accepted publickey for core from 139.178.89.65 port 57050 ssh2: RSA SHA256:TYEcoyDNyh3JcDi4HtBO5FTvj/3v8ZbDC5jXQ/LuvgI
May 27 03:39:00.918923 sshd-session[8535]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:39:00.926136 systemd-logind[1551]: New session 98 of user core.
May 27 03:39:00.933665 systemd[1]: Started session-98.scope - Session 98 of User core.
May 27 03:39:01.999022 sshd[8537]: Connection closed by 139.178.89.65 port 57050
May 27 03:39:01.999845 sshd-session[8535]: pam_unix(sshd:session): session closed for user core
May 27 03:39:02.005831 systemd[1]: sshd@100-157.180.65.55:22-139.178.89.65:57050.service: Deactivated successfully.
May 27 03:39:02.016659 systemd[1]: session-98.scope: Deactivated successfully.
May 27 03:39:02.018644 systemd-logind[1551]: Session 98 logged out. Waiting for processes to exit.
May 27 03:39:02.022180 systemd-logind[1551]: Removed session 98.
May 27 03:39:05.022267 kubelet[2917]: E0527 03:39:05.022114 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-555bcbc6ff-596vx" podUID="20923581-35ae-477b-83e9-35d75acd3c66"
May 27 03:39:07.177121 systemd[1]: Started sshd@101-157.180.65.55:22-139.178.89.65:53718.service - OpenSSH per-connection server daemon (139.178.89.65:53718).
May 27 03:39:08.225959 sshd[8551]: Accepted publickey for core from 139.178.89.65 port 53718 ssh2: RSA SHA256:TYEcoyDNyh3JcDi4HtBO5FTvj/3v8ZbDC5jXQ/LuvgI
May 27 03:39:08.228546 sshd-session[8551]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:39:08.239434 systemd-logind[1551]: New session 99 of user core.
May 27 03:39:08.248656 systemd[1]: Started session-99.scope - Session 99 of User core.
May 27 03:39:09.044329 sshd[8553]: Connection closed by 139.178.89.65 port 53718
May 27 03:39:09.045222 sshd-session[8551]: pam_unix(sshd:session): session closed for user core
May 27 03:39:09.051951 systemd[1]: sshd@101-157.180.65.55:22-139.178.89.65:53718.service: Deactivated successfully.
May 27 03:39:09.056762 systemd[1]: session-99.scope: Deactivated successfully.
May 27 03:39:09.060662 systemd-logind[1551]: Session 99 logged out. Waiting for processes to exit.
May 27 03:39:09.063881 systemd-logind[1551]: Removed session 99.
May 27 03:39:10.021381 kubelet[2917]: E0527 03:39:10.020889 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-xwqrr" podUID="9c59cb9e-2eb7-4e20-b889-53cd0bb9e4ed"
May 27 03:39:14.212797 systemd[1]: Started sshd@102-157.180.65.55:22-139.178.89.65:41658.service - OpenSSH per-connection server daemon (139.178.89.65:41658).
May 27 03:39:15.213977 sshd[8566]: Accepted publickey for core from 139.178.89.65 port 41658 ssh2: RSA SHA256:TYEcoyDNyh3JcDi4HtBO5FTvj/3v8ZbDC5jXQ/LuvgI
May 27 03:39:15.217778 sshd-session[8566]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:39:15.232851 systemd-logind[1551]: New session 100 of user core.
May 27 03:39:15.239886 systemd[1]: Started session-100.scope - Session 100 of User core.
May 27 03:39:16.869804 sshd[8569]: Connection closed by 139.178.89.65 port 41658
May 27 03:39:16.873036 sshd-session[8566]: pam_unix(sshd:session): session closed for user core
May 27 03:39:16.883797 systemd[1]: sshd@102-157.180.65.55:22-139.178.89.65:41658.service: Deactivated successfully.
May 27 03:39:16.885270 systemd[1]: session-100.scope: Deactivated successfully.
May 27 03:39:16.886021 systemd-logind[1551]: Session 100 logged out. Waiting for processes to exit.
May 27 03:39:16.887342 systemd-logind[1551]: Removed session 100.
May 27 03:39:17.132662 kubelet[2917]: E0527 03:39:17.132297 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-555bcbc6ff-596vx" podUID="20923581-35ae-477b-83e9-35d75acd3c66"
May 27 03:39:22.033105 kubelet[2917]: E0527 03:39:22.032925 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-xwqrr" podUID="9c59cb9e-2eb7-4e20-b889-53cd0bb9e4ed"
May 27 03:39:22.046708 systemd[1]: Started sshd@103-157.180.65.55:22-139.178.89.65:41662.service - OpenSSH per-connection server daemon (139.178.89.65:41662).
May 27 03:39:22.079922 containerd[1560]: time="2025-05-27T03:39:22.069505235Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ca116aa036eb736891f311278a152a7e5a51b4c41f55c788135f885583d90f6b\" id:\"2080dd1af1b113fa6c2271f1ae4c79cd6e0122e1b227cd14c599c82905746191\" pid:8593 exited_at:{seconds:1748317162 nanos:68868662}"
May 27 03:39:23.089382 sshd[8605]: Accepted publickey for core from 139.178.89.65 port 41662 ssh2: RSA SHA256:TYEcoyDNyh3JcDi4HtBO5FTvj/3v8ZbDC5jXQ/LuvgI
May 27 03:39:23.092269 sshd-session[8605]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:39:23.100422 systemd-logind[1551]: New session 101 of user core.
May 27 03:39:23.107490 systemd[1]: Started session-101.scope - Session 101 of User core.
May 27 03:39:24.706033 sshd[8607]: Connection closed by 139.178.89.65 port 41662
May 27 03:39:24.706686 sshd-session[8605]: pam_unix(sshd:session): session closed for user core
May 27 03:39:24.716863 systemd[1]: sshd@103-157.180.65.55:22-139.178.89.65:41662.service: Deactivated successfully.
May 27 03:39:24.719212 systemd[1]: session-101.scope: Deactivated successfully.
May 27 03:39:24.720733 systemd-logind[1551]: Session 101 logged out. Waiting for processes to exit.
May 27 03:39:24.723057 systemd-logind[1551]: Removed session 101.
May 27 03:39:27.338491 containerd[1560]: time="2025-05-27T03:39:27.338437500Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6987d0fe81036d50ab8ef5c9659e8d4befdda41d53233178124e5de21b1dc1b4\" id:\"022ab0ca3836fee57dab2f2676fd3107f46602929944218ed86eb9cd268b59bc\" pid:8631 exited_at:{seconds:1748317167 nanos:338028934}"
May 27 03:39:29.878738 systemd[1]: Started sshd@104-157.180.65.55:22-139.178.89.65:59728.service - OpenSSH per-connection server daemon (139.178.89.65:59728).
May 27 03:39:30.023058 kubelet[2917]: E0527 03:39:30.022637 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-555bcbc6ff-596vx" podUID="20923581-35ae-477b-83e9-35d75acd3c66"
May 27 03:39:30.885550 sshd[8641]: Accepted publickey for core from 139.178.89.65 port 59728 ssh2: RSA SHA256:TYEcoyDNyh3JcDi4HtBO5FTvj/3v8ZbDC5jXQ/LuvgI
May 27 03:39:30.887858 sshd-session[8641]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:39:30.895377 systemd-logind[1551]: New session 102 of user core.
May 27 03:39:30.902465 systemd[1]: Started session-102.scope - Session 102 of User core.
May 27 03:39:32.241842 sshd[8643]: Connection closed by 139.178.89.65 port 59728
May 27 03:39:32.243105 sshd-session[8641]: pam_unix(sshd:session): session closed for user core
May 27 03:39:32.250142 systemd[1]: sshd@104-157.180.65.55:22-139.178.89.65:59728.service: Deactivated successfully.
May 27 03:39:32.253592 systemd[1]: session-102.scope: Deactivated successfully.
May 27 03:39:32.257108 systemd-logind[1551]: Session 102 logged out. Waiting for processes to exit.
May 27 03:39:32.261613 systemd-logind[1551]: Removed session 102.
May 27 03:39:33.020397 kubelet[2917]: E0527 03:39:33.020342 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-xwqrr" podUID="9c59cb9e-2eb7-4e20-b889-53cd0bb9e4ed"
May 27 03:39:37.414350 systemd[1]: Started sshd@105-157.180.65.55:22-139.178.89.65:50358.service - OpenSSH per-connection server daemon (139.178.89.65:50358).
May 27 03:39:38.444144 sshd[8657]: Accepted publickey for core from 139.178.89.65 port 50358 ssh2: RSA SHA256:TYEcoyDNyh3JcDi4HtBO5FTvj/3v8ZbDC5jXQ/LuvgI
May 27 03:39:38.446473 sshd-session[8657]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:39:38.453512 systemd-logind[1551]: New session 103 of user core.
May 27 03:39:38.462610 systemd[1]: Started session-103.scope - Session 103 of User core.
May 27 03:39:40.013321 sshd[8659]: Connection closed by 139.178.89.65 port 50358
May 27 03:39:40.015667 sshd-session[8657]: pam_unix(sshd:session): session closed for user core
May 27 03:39:40.025669 systemd[1]: sshd@105-157.180.65.55:22-139.178.89.65:50358.service: Deactivated successfully.
May 27 03:39:40.028858 systemd[1]: session-103.scope: Deactivated successfully.
May 27 03:39:40.032558 systemd-logind[1551]: Session 103 logged out. Waiting for processes to exit.
May 27 03:39:40.034860 systemd-logind[1551]: Removed session 103.
May 27 03:39:41.026658 kubelet[2917]: E0527 03:39:41.026585 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-555bcbc6ff-596vx" podUID="20923581-35ae-477b-83e9-35d75acd3c66"
May 27 03:39:44.027406 kubelet[2917]: E0527 03:39:44.027326 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-xwqrr" podUID="9c59cb9e-2eb7-4e20-b889-53cd0bb9e4ed"
May 27 03:39:45.191763 systemd[1]: Started sshd@106-157.180.65.55:22-139.178.89.65:52586.service - OpenSSH per-connection server daemon (139.178.89.65:52586).
May 27 03:39:46.219112 sshd[8671]: Accepted publickey for core from 139.178.89.65 port 52586 ssh2: RSA SHA256:TYEcoyDNyh3JcDi4HtBO5FTvj/3v8ZbDC5jXQ/LuvgI
May 27 03:39:46.221798 sshd-session[8671]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:39:46.228018 systemd-logind[1551]: New session 104 of user core.
May 27 03:39:46.234584 systemd[1]: Started session-104.scope - Session 104 of User core.
May 27 03:39:47.352064 sshd[8673]: Connection closed by 139.178.89.65 port 52586
May 27 03:39:47.354575 sshd-session[8671]: pam_unix(sshd:session): session closed for user core
May 27 03:39:47.360983 systemd[1]: sshd@106-157.180.65.55:22-139.178.89.65:52586.service: Deactivated successfully.
May 27 03:39:47.363651 systemd[1]: session-104.scope: Deactivated successfully.
May 27 03:39:47.367165 systemd-logind[1551]: Session 104 logged out. Waiting for processes to exit.
May 27 03:39:47.369560 systemd-logind[1551]: Removed session 104.
May 27 03:39:52.001884 containerd[1560]: time="2025-05-27T03:39:52.001824293Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ca116aa036eb736891f311278a152a7e5a51b4c41f55c788135f885583d90f6b\" id:\"77d8cf9a7da53e7067619acbd1beeb86fbf6e4dab1d08df200e09093f6425c25\" pid:8696 exited_at:{seconds:1748317192 nanos:1158374}"
May 27 03:39:52.524349 systemd[1]: Started sshd@107-157.180.65.55:22-139.178.89.65:52598.service - OpenSSH per-connection server daemon (139.178.89.65:52598).
May 27 03:39:53.571012 sshd[8709]: Accepted publickey for core from 139.178.89.65 port 52598 ssh2: RSA SHA256:TYEcoyDNyh3JcDi4HtBO5FTvj/3v8ZbDC5jXQ/LuvgI
May 27 03:39:53.573421 sshd-session[8709]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:39:53.581350 systemd-logind[1551]: New session 105 of user core.
May 27 03:39:53.588613 systemd[1]: Started session-105.scope - Session 105 of User core.
May 27 03:39:55.008820 sshd[8711]: Connection closed by 139.178.89.65 port 52598
May 27 03:39:55.009549 sshd-session[8709]: pam_unix(sshd:session): session closed for user core
May 27 03:39:55.016508 systemd-logind[1551]: Session 105 logged out. Waiting for processes to exit.
May 27 03:39:55.017421 systemd[1]: sshd@107-157.180.65.55:22-139.178.89.65:52598.service: Deactivated successfully.
May 27 03:39:55.021605 systemd[1]: session-105.scope: Deactivated successfully.
May 27 03:39:55.023600 kubelet[2917]: E0527 03:39:55.021614 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-xwqrr" podUID="9c59cb9e-2eb7-4e20-b889-53cd0bb9e4ed"
May 27 03:39:55.025041 systemd-logind[1551]: Removed session 105.
May 27 03:39:55.439520 containerd[1560]: time="2025-05-27T03:39:55.439478649Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6987d0fe81036d50ab8ef5c9659e8d4befdda41d53233178124e5de21b1dc1b4\" id:\"4e127439c2393e6f5cd8d03a70fcd10d4cbf18debce30310596d4a711546dbaf\" pid:8736 exited_at:{seconds:1748317195 nanos:438954626}" May 27 03:39:56.021902 kubelet[2917]: E0527 03:39:56.021484 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-555bcbc6ff-596vx" podUID="20923581-35ae-477b-83e9-35d75acd3c66" May 27 03:39:57.341781 containerd[1560]: time="2025-05-27T03:39:57.341679689Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6987d0fe81036d50ab8ef5c9659e8d4befdda41d53233178124e5de21b1dc1b4\" id:\"a49c2d67cc87529f9e084a1dd14b3625b89450051307d76b0bb075b40464343f\" pid:8757 exited_at:{seconds:1748317197 nanos:341260603}" May 27 03:40:00.187961 systemd[1]: Started sshd@108-157.180.65.55:22-139.178.89.65:51194.service - OpenSSH per-connection server daemon (139.178.89.65:51194). May 27 03:40:01.209621 sshd[8767]: Accepted publickey for core from 139.178.89.65 port 51194 ssh2: RSA SHA256:TYEcoyDNyh3JcDi4HtBO5FTvj/3v8ZbDC5jXQ/LuvgI May 27 03:40:01.213763 sshd-session[8767]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:40:01.223418 systemd-logind[1551]: New session 106 of user core. May 27 03:40:01.228620 systemd[1]: Started session-106.scope - Session 106 of User core. 
May 27 03:40:02.241974 sshd[8771]: Connection closed by 139.178.89.65 port 51194 May 27 03:40:02.242721 sshd-session[8767]: pam_unix(sshd:session): session closed for user core May 27 03:40:02.252165 systemd[1]: sshd@108-157.180.65.55:22-139.178.89.65:51194.service: Deactivated successfully. May 27 03:40:02.255624 systemd[1]: session-106.scope: Deactivated successfully. May 27 03:40:02.256771 systemd-logind[1551]: Session 106 logged out. Waiting for processes to exit. May 27 03:40:02.261075 systemd-logind[1551]: Removed session 106. May 27 03:40:07.416576 systemd[1]: Started sshd@109-157.180.65.55:22-139.178.89.65:38282.service - OpenSSH per-connection server daemon (139.178.89.65:38282). May 27 03:40:08.445035 sshd[8782]: Accepted publickey for core from 139.178.89.65 port 38282 ssh2: RSA SHA256:TYEcoyDNyh3JcDi4HtBO5FTvj/3v8ZbDC5jXQ/LuvgI May 27 03:40:08.446198 sshd-session[8782]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:40:08.451723 systemd-logind[1551]: New session 107 of user core. May 27 03:40:08.458449 systemd[1]: Started session-107.scope - Session 107 of User core. May 27 03:40:09.412722 sshd[8784]: Connection closed by 139.178.89.65 port 38282 May 27 03:40:09.413440 sshd-session[8782]: pam_unix(sshd:session): session closed for user core May 27 03:40:09.419083 systemd[1]: sshd@109-157.180.65.55:22-139.178.89.65:38282.service: Deactivated successfully. May 27 03:40:09.419140 systemd-logind[1551]: Session 107 logged out. Waiting for processes to exit. May 27 03:40:09.422962 systemd[1]: session-107.scope: Deactivated successfully. May 27 03:40:09.425409 systemd-logind[1551]: Removed session 107. 
May 27 03:40:10.022482 kubelet[2917]: E0527 03:40:10.021953 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-xwqrr" podUID="9c59cb9e-2eb7-4e20-b889-53cd0bb9e4ed" May 27 03:40:11.030651 kubelet[2917]: E0527 03:40:11.030413 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-555bcbc6ff-596vx" podUID="20923581-35ae-477b-83e9-35d75acd3c66" May 27 03:40:14.588734 systemd[1]: Started sshd@110-157.180.65.55:22-139.178.89.65:42982.service - OpenSSH per-connection server daemon (139.178.89.65:42982). May 27 03:40:15.633710 sshd[8796]: Accepted publickey for core from 139.178.89.65 port 42982 ssh2: RSA SHA256:TYEcoyDNyh3JcDi4HtBO5FTvj/3v8ZbDC5jXQ/LuvgI May 27 03:40:15.634948 sshd-session[8796]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:40:15.641407 systemd-logind[1551]: New session 108 of user core. May 27 03:40:15.648876 systemd[1]: Started session-108.scope - Session 108 of User core. May 27 03:40:16.582854 sshd[8798]: Connection closed by 139.178.89.65 port 42982 May 27 03:40:16.583617 sshd-session[8796]: pam_unix(sshd:session): session closed for user core May 27 03:40:16.587870 systemd[1]: sshd@110-157.180.65.55:22-139.178.89.65:42982.service: Deactivated successfully. May 27 03:40:16.590666 systemd[1]: session-108.scope: Deactivated successfully. May 27 03:40:16.594196 systemd-logind[1551]: Session 108 logged out. Waiting for processes to exit. 
May 27 03:40:16.595719 systemd-logind[1551]: Removed session 108. May 27 03:40:21.770740 systemd[1]: Started sshd@111-157.180.65.55:22-139.178.89.65:42996.service - OpenSSH per-connection server daemon (139.178.89.65:42996). May 27 03:40:22.087178 containerd[1560]: time="2025-05-27T03:40:22.087019123Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ca116aa036eb736891f311278a152a7e5a51b4c41f55c788135f885583d90f6b\" id:\"7c20031faa746810533a89621f271575fee848ea76d947c841a9b5a3ccd583e8\" pid:8823 exited_at:{seconds:1748317222 nanos:86562246}" May 27 03:40:22.886388 sshd[8810]: Accepted publickey for core from 139.178.89.65 port 42996 ssh2: RSA SHA256:TYEcoyDNyh3JcDi4HtBO5FTvj/3v8ZbDC5jXQ/LuvgI May 27 03:40:22.890716 sshd-session[8810]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:40:22.902442 systemd-logind[1551]: New session 109 of user core. May 27 03:40:22.909691 systemd[1]: Started session-109.scope - Session 109 of User core. 
May 27 03:40:23.021730 kubelet[2917]: E0527 03:40:23.021649 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-555bcbc6ff-596vx" podUID="20923581-35ae-477b-83e9-35d75acd3c66" May 27 03:40:24.035468 kubelet[2917]: E0527 03:40:24.034881 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-xwqrr" podUID="9c59cb9e-2eb7-4e20-b889-53cd0bb9e4ed" May 27 03:40:25.145350 sshd[8838]: Connection closed by 139.178.89.65 port 42996 May 27 03:40:25.148559 sshd-session[8810]: pam_unix(sshd:session): session closed for user core May 27 03:40:25.157188 systemd-logind[1551]: Session 109 logged out. Waiting for processes to exit. May 27 03:40:25.167842 systemd[1]: sshd@111-157.180.65.55:22-139.178.89.65:42996.service: Deactivated successfully. May 27 03:40:25.171599 systemd[1]: session-109.scope: Deactivated successfully. May 27 03:40:25.172551 systemd[1]: session-109.scope: Consumed 1.110s CPU time, 54.8M memory peak. May 27 03:40:25.177473 systemd-logind[1551]: Removed session 109. 
May 27 03:40:27.478376 containerd[1560]: time="2025-05-27T03:40:27.478252341Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6987d0fe81036d50ab8ef5c9659e8d4befdda41d53233178124e5de21b1dc1b4\" id:\"2069a91e37435bb020ccbb8e1dca38fecc510e60b05a1a3684d567983db27ef8\" pid:8861 exited_at:{seconds:1748317227 nanos:424603336}" May 27 03:40:30.320800 systemd[1]: Started sshd@112-157.180.65.55:22-139.178.89.65:33516.service - OpenSSH per-connection server daemon (139.178.89.65:33516). May 27 03:40:31.373156 sshd[8872]: Accepted publickey for core from 139.178.89.65 port 33516 ssh2: RSA SHA256:TYEcoyDNyh3JcDi4HtBO5FTvj/3v8ZbDC5jXQ/LuvgI May 27 03:40:31.375363 sshd-session[8872]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:40:31.382117 systemd-logind[1551]: New session 110 of user core. May 27 03:40:31.387506 systemd[1]: Started session-110.scope - Session 110 of User core. May 27 03:40:32.620844 sshd[8876]: Connection closed by 139.178.89.65 port 33516 May 27 03:40:32.621541 sshd-session[8872]: pam_unix(sshd:session): session closed for user core May 27 03:40:32.627762 systemd-logind[1551]: Session 110 logged out. Waiting for processes to exit. May 27 03:40:32.628271 systemd[1]: sshd@112-157.180.65.55:22-139.178.89.65:33516.service: Deactivated successfully. May 27 03:40:32.630668 systemd[1]: session-110.scope: Deactivated successfully. May 27 03:40:32.634077 systemd-logind[1551]: Removed session 110. 
May 27 03:40:35.149439 containerd[1560]: time="2025-05-27T03:40:35.149351178Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\""
May 27 03:40:35.874867 containerd[1560]: time="2025-05-27T03:40:35.874800185Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io
May 27 03:40:35.876064 containerd[1560]: time="2025-05-27T03:40:35.876029310Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden"
May 27 03:40:35.876237 containerd[1560]: time="2025-05-27T03:40:35.876191004Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86"
May 27 03:40:35.928045 kubelet[2917]: E0527 03:40:35.916762 2917 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0"
May 27 03:40:35.939410 kubelet[2917]: E0527 03:40:35.930904 2917 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0"
May 27 03:40:35.975390 kubelet[2917]: E0527 03:40:35.975251 2917 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:bee51492bca3428982f094867f4c4710,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6jzjr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-555bcbc6ff-596vx_calico-system(20923581-35ae-477b-83e9-35d75acd3c66): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError"
May 27 03:40:35.978196 containerd[1560]: time="2025-05-27T03:40:35.978130337Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\""
May 27 03:40:36.296880 containerd[1560]: time="2025-05-27T03:40:36.296802621Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io
May 27 03:40:36.298144 containerd[1560]: time="2025-05-27T03:40:36.298067453Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden"
May 27 03:40:36.298833 containerd[1560]: time="2025-05-27T03:40:36.298182789Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86"
May 27 03:40:36.298911 kubelet[2917]: E0527 03:40:36.298424 2917 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0"
May 27 03:40:36.298911 kubelet[2917]: E0527 03:40:36.298483 2917 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0"
May 27 03:40:36.298911 kubelet[2917]: E0527 03:40:36.298635 2917 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6jzjr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-555bcbc6ff-596vx_calico-system(20923581-35ae-477b-83e9-35d75acd3c66): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError"
May 27 03:40:36.300240 kubelet[2917]: E0527 03:40:36.300165 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-555bcbc6ff-596vx" podUID="20923581-35ae-477b-83e9-35d75acd3c66"
May 27 03:40:37.798113 systemd[1]: Started sshd@113-157.180.65.55:22-139.178.89.65:53710.service - OpenSSH per-connection server daemon (139.178.89.65:53710).
May 27 03:40:38.025915 kubelet[2917]: E0527 03:40:38.025492 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-xwqrr" podUID="9c59cb9e-2eb7-4e20-b889-53cd0bb9e4ed"
May 27 03:40:38.885911 sshd[8909]: Accepted publickey for core from 139.178.89.65 port 53710 ssh2: RSA SHA256:TYEcoyDNyh3JcDi4HtBO5FTvj/3v8ZbDC5jXQ/LuvgI
May 27 03:40:38.889570 sshd-session[8909]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:40:38.900227 systemd-logind[1551]: New session 111 of user core.
May 27 03:40:38.908629 systemd[1]: Started session-111.scope - Session 111 of User core.
May 27 03:40:40.470540 sshd[8911]: Connection closed by 139.178.89.65 port 53710
May 27 03:40:40.471023 sshd-session[8909]: pam_unix(sshd:session): session closed for user core
May 27 03:40:40.482972 systemd-logind[1551]: Session 111 logged out. Waiting for processes to exit.
May 27 03:40:40.484259 systemd[1]: sshd@113-157.180.65.55:22-139.178.89.65:53710.service: Deactivated successfully.
May 27 03:40:40.488286 systemd[1]: session-111.scope: Deactivated successfully.
May 27 03:40:40.491642 systemd-logind[1551]: Removed session 111.
May 27 03:40:45.649477 systemd[1]: Started sshd@114-157.180.65.55:22-139.178.89.65:34348.service - OpenSSH per-connection server daemon (139.178.89.65:34348).
May 27 03:40:46.680968 sshd[8923]: Accepted publickey for core from 139.178.89.65 port 34348 ssh2: RSA SHA256:TYEcoyDNyh3JcDi4HtBO5FTvj/3v8ZbDC5jXQ/LuvgI
May 27 03:40:46.684010 sshd-session[8923]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:40:46.693064 systemd-logind[1551]: New session 112 of user core.
May 27 03:40:46.697519 systemd[1]: Started session-112.scope - Session 112 of User core.
May 27 03:40:47.734565 sshd[8925]: Connection closed by 139.178.89.65 port 34348
May 27 03:40:47.735541 sshd-session[8923]: pam_unix(sshd:session): session closed for user core
May 27 03:40:47.739848 systemd-logind[1551]: Session 112 logged out. Waiting for processes to exit.
May 27 03:40:47.741743 systemd[1]: sshd@114-157.180.65.55:22-139.178.89.65:34348.service: Deactivated successfully.
May 27 03:40:47.744059 systemd[1]: session-112.scope: Deactivated successfully.
May 27 03:40:47.746416 systemd-logind[1551]: Removed session 112.
May 27 03:40:50.022469 containerd[1560]: time="2025-05-27T03:40:50.022260693Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\""
May 27 03:40:50.506886 containerd[1560]: time="2025-05-27T03:40:50.506771012Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io
May 27 03:40:50.508759 containerd[1560]: time="2025-05-27T03:40:50.508658532Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden"
May 27 03:40:50.508963 containerd[1560]: time="2025-05-27T03:40:50.508807901Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86"
May 27 03:40:50.509118 kubelet[2917]: E0527 03:40:50.509047 2917 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0"
May 27 03:40:50.510468 kubelet[2917]: E0527 03:40:50.509145 2917 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0"
May 27 03:40:50.510468 kubelet[2917]: E0527 03:40:50.510102 2917 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f7drb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-8f77d7b6c-xwqrr_calico-system(9c59cb9e-2eb7-4e20-b889-53cd0bb9e4ed): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError"
May 27 03:40:50.511537 kubelet[2917]: E0527 03:40:50.511444 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-8f77d7b6c-xwqrr" podUID="9c59cb9e-2eb7-4e20-b889-53cd0bb9e4ed"
May 27 03:40:51.021977 kubelet[2917]: E0527 03:40:51.021900 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-555bcbc6ff-596vx" podUID="20923581-35ae-477b-83e9-35d75acd3c66"
May 27 03:40:52.592810 containerd[1560]: time="2025-05-27T03:40:52.592736578Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ca116aa036eb736891f311278a152a7e5a51b4c41f55c788135f885583d90f6b\" id:\"4e8e5b847a64724ac5b7b6464c898eba47157069595af588dd2f13d664746bb2\" pid:8948 exited_at:{seconds:1748317252 nanos:447088022}"
May 27 03:40:52.922270 systemd[1]: Started sshd@115-157.180.65.55:22-139.178.89.65:34352.service - OpenSSH per-connection server daemon (139.178.89.65:34352).
May 27 03:40:53.972080 sshd[8962]: Accepted publickey for core from 139.178.89.65 port 34352 ssh2: RSA SHA256:TYEcoyDNyh3JcDi4HtBO5FTvj/3v8ZbDC5jXQ/LuvgI
May 27 03:40:53.974159 sshd-session[8962]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:40:53.981776 systemd-logind[1551]: New session 113 of user core.
May 27 03:40:53.992066 systemd[1]: Started session-113.scope - Session 113 of User core.
May 27 03:40:55.314979 sshd[8964]: Connection closed by 139.178.89.65 port 34352
May 27 03:40:55.324559 sshd-session[8962]: pam_unix(sshd:session): session closed for user core
May 27 03:40:55.339897 systemd[1]: sshd@115-157.180.65.55:22-139.178.89.65:34352.service: Deactivated successfully.
May 27 03:40:55.342045 systemd[1]: session-113.scope: Deactivated successfully.
May 27 03:40:55.345269 systemd-logind[1551]: Session 113 logged out. Waiting for processes to exit.
May 27 03:40:55.350116 systemd-logind[1551]: Removed session 113.
May 27 03:40:55.458775 containerd[1560]: time="2025-05-27T03:40:55.458739791Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6987d0fe81036d50ab8ef5c9659e8d4befdda41d53233178124e5de21b1dc1b4\" id:\"7080e2ba520ef42e1f3bb5d876f43e0d6120a2c87750d706975b381c23460306\" pid:8989 exited_at:{seconds:1748317255 nanos:458491626}" May 27 03:40:57.298163 containerd[1560]: time="2025-05-27T03:40:57.298085817Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6987d0fe81036d50ab8ef5c9659e8d4befdda41d53233178124e5de21b1dc1b4\" id:\"80801cbc5e97a4b728714c665018623903194e65db55ee20ce33234e5f018b1d\" pid:9011 exited_at:{seconds:1748317257 nanos:297533813}" May 27 03:41:00.491969 systemd[1]: Started sshd@116-157.180.65.55:22-139.178.89.65:59928.service - OpenSSH per-connection server daemon (139.178.89.65:59928). May 27 03:41:01.541607 sshd[9021]: Accepted publickey for core from 139.178.89.65 port 59928 ssh2: RSA SHA256:TYEcoyDNyh3JcDi4HtBO5FTvj/3v8ZbDC5jXQ/LuvgI May 27 03:41:01.545773 sshd-session[9021]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:41:01.554759 systemd-logind[1551]: New session 114 of user core. May 27 03:41:01.563569 systemd[1]: Started session-114.scope - Session 114 of User core. May 27 03:41:03.400713 sshd[9025]: Connection closed by 139.178.89.65 port 59928 May 27 03:41:03.402242 sshd-session[9021]: pam_unix(sshd:session): session closed for user core May 27 03:41:03.407059 systemd-logind[1551]: Session 114 logged out. Waiting for processes to exit. May 27 03:41:03.407775 systemd[1]: sshd@116-157.180.65.55:22-139.178.89.65:59928.service: Deactivated successfully. May 27 03:41:03.409868 systemd[1]: session-114.scope: Deactivated successfully. May 27 03:41:03.411464 systemd-logind[1551]: Removed session 114. 
May 27 03:41:04.060051 kubelet[2917]: E0527 03:41:04.059953 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-555bcbc6ff-596vx" podUID="20923581-35ae-477b-83e9-35d75acd3c66" May 27 03:41:05.024774 kubelet[2917]: E0527 03:41:05.024679 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-xwqrr" podUID="9c59cb9e-2eb7-4e20-b889-53cd0bb9e4ed" May 27 03:41:08.574825 systemd[1]: Started sshd@117-157.180.65.55:22-139.178.89.65:50490.service - OpenSSH per-connection server daemon (139.178.89.65:50490). May 27 03:41:09.625210 sshd[9036]: Accepted publickey for core from 139.178.89.65 port 50490 ssh2: RSA SHA256:TYEcoyDNyh3JcDi4HtBO5FTvj/3v8ZbDC5jXQ/LuvgI May 27 03:41:09.628991 sshd-session[9036]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:41:09.635705 systemd-logind[1551]: New session 115 of user core. May 27 03:41:09.640576 systemd[1]: Started session-115.scope - Session 115 of User core. May 27 03:41:10.791919 sshd[9038]: Connection closed by 139.178.89.65 port 50490 May 27 03:41:10.794977 sshd-session[9036]: pam_unix(sshd:session): session closed for user core May 27 03:41:10.799373 systemd[1]: sshd@117-157.180.65.55:22-139.178.89.65:50490.service: Deactivated successfully. May 27 03:41:10.801769 systemd[1]: session-115.scope: Deactivated successfully. May 27 03:41:10.803431 systemd-logind[1551]: Session 115 logged out. Waiting for processes to exit. 
May 27 03:41:10.805161 systemd-logind[1551]: Removed session 115. May 27 03:41:15.961523 systemd[1]: Started sshd@118-157.180.65.55:22-139.178.89.65:43840.service - OpenSSH per-connection server daemon (139.178.89.65:43840). May 27 03:41:16.023413 kubelet[2917]: E0527 03:41:16.023216 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-555bcbc6ff-596vx" podUID="20923581-35ae-477b-83e9-35d75acd3c66" May 27 03:41:16.958585 sshd[9050]: Accepted publickey for core from 139.178.89.65 port 43840 ssh2: RSA SHA256:TYEcoyDNyh3JcDi4HtBO5FTvj/3v8ZbDC5jXQ/LuvgI May 27 03:41:16.961390 sshd-session[9050]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:41:16.969134 systemd-logind[1551]: New session 116 of user core. May 27 03:41:16.975643 systemd[1]: Started session-116.scope - Session 116 of User core. May 27 03:41:17.881651 sshd[9052]: Connection closed by 139.178.89.65 port 43840 May 27 03:41:17.884995 sshd-session[9050]: pam_unix(sshd:session): session closed for user core May 27 03:41:17.894787 systemd[1]: sshd@118-157.180.65.55:22-139.178.89.65:43840.service: Deactivated successfully. May 27 03:41:17.897980 systemd[1]: session-116.scope: Deactivated successfully. May 27 03:41:17.899930 systemd-logind[1551]: Session 116 logged out. Waiting for processes to exit. May 27 03:41:17.902186 systemd-logind[1551]: Removed session 116. 
May 27 03:41:18.023209 kubelet[2917]: E0527 03:41:18.022436 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-xwqrr" podUID="9c59cb9e-2eb7-4e20-b889-53cd0bb9e4ed" May 27 03:41:22.107512 containerd[1560]: time="2025-05-27T03:41:22.099980891Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ca116aa036eb736891f311278a152a7e5a51b4c41f55c788135f885583d90f6b\" id:\"733af4e430dfc63db436659cfa8433bcf9f18d831829a636b97cf3f7fe30df64\" pid:9081 exited_at:{seconds:1748317282 nanos:41617364}" May 27 03:41:23.053491 systemd[1]: Started sshd@119-157.180.65.55:22-139.178.89.65:43854.service - OpenSSH per-connection server daemon (139.178.89.65:43854). May 27 03:41:24.147463 sshd[9093]: Accepted publickey for core from 139.178.89.65 port 43854 ssh2: RSA SHA256:TYEcoyDNyh3JcDi4HtBO5FTvj/3v8ZbDC5jXQ/LuvgI May 27 03:41:24.150852 sshd-session[9093]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:41:24.160213 systemd-logind[1551]: New session 117 of user core. May 27 03:41:24.165582 systemd[1]: Started session-117.scope - Session 117 of User core. May 27 03:41:25.749737 sshd[9097]: Connection closed by 139.178.89.65 port 43854 May 27 03:41:25.750594 sshd-session[9093]: pam_unix(sshd:session): session closed for user core May 27 03:41:25.756080 systemd[1]: sshd@119-157.180.65.55:22-139.178.89.65:43854.service: Deactivated successfully. May 27 03:41:25.756120 systemd-logind[1551]: Session 117 logged out. Waiting for processes to exit. May 27 03:41:25.758938 systemd[1]: session-117.scope: Deactivated successfully. May 27 03:41:25.762044 systemd-logind[1551]: Removed session 117. 
May 27 03:41:27.302626 containerd[1560]: time="2025-05-27T03:41:27.302552684Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6987d0fe81036d50ab8ef5c9659e8d4befdda41d53233178124e5de21b1dc1b4\" id:\"d642eacf4d163b330758d52da2d118a57cac3adb8aa5e13b47aa3cf9429293bf\" pid:9119 exited_at:{seconds:1748317287 nanos:302193831}" May 27 03:41:30.926613 systemd[1]: Started sshd@120-157.180.65.55:22-139.178.89.65:37760.service - OpenSSH per-connection server daemon (139.178.89.65:37760). May 27 03:41:31.022051 kubelet[2917]: E0527 03:41:31.021795 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-xwqrr" podUID="9c59cb9e-2eb7-4e20-b889-53cd0bb9e4ed" May 27 03:41:31.023445 kubelet[2917]: E0527 03:41:31.022151 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-555bcbc6ff-596vx" podUID="20923581-35ae-477b-83e9-35d75acd3c66" May 27 03:41:31.951580 sshd[9129]: Accepted publickey for core from 139.178.89.65 port 37760 ssh2: RSA SHA256:TYEcoyDNyh3JcDi4HtBO5FTvj/3v8ZbDC5jXQ/LuvgI May 27 03:41:31.961182 sshd-session[9129]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:41:31.972394 systemd-logind[1551]: New session 118 of user core. May 27 03:41:31.978541 systemd[1]: Started session-118.scope - Session 118 of User core. 
May 27 03:41:33.676448 sshd[9133]: Connection closed by 139.178.89.65 port 37760 May 27 03:41:33.681754 sshd-session[9129]: pam_unix(sshd:session): session closed for user core May 27 03:41:33.695778 systemd-logind[1551]: Session 118 logged out. Waiting for processes to exit. May 27 03:41:33.696646 systemd[1]: sshd@120-157.180.65.55:22-139.178.89.65:37760.service: Deactivated successfully. May 27 03:41:33.699976 systemd[1]: session-118.scope: Deactivated successfully. May 27 03:41:33.704499 systemd-logind[1551]: Removed session 118. May 27 03:41:38.851505 systemd[1]: Started sshd@121-157.180.65.55:22-139.178.89.65:49446.service - OpenSSH per-connection server daemon (139.178.89.65:49446). May 27 03:41:39.914012 sshd[9145]: Accepted publickey for core from 139.178.89.65 port 49446 ssh2: RSA SHA256:TYEcoyDNyh3JcDi4HtBO5FTvj/3v8ZbDC5jXQ/LuvgI May 27 03:41:39.916188 sshd-session[9145]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:41:39.925027 systemd-logind[1551]: New session 119 of user core. May 27 03:41:39.930468 systemd[1]: Started session-119.scope - Session 119 of User core. May 27 03:41:40.812433 sshd[9147]: Connection closed by 139.178.89.65 port 49446 May 27 03:41:40.821799 sshd-session[9145]: pam_unix(sshd:session): session closed for user core May 27 03:41:40.830979 systemd[1]: sshd@121-157.180.65.55:22-139.178.89.65:49446.service: Deactivated successfully. May 27 03:41:40.834088 systemd[1]: session-119.scope: Deactivated successfully. May 27 03:41:40.837191 systemd-logind[1551]: Session 119 logged out. Waiting for processes to exit. May 27 03:41:40.838929 systemd-logind[1551]: Removed session 119. 
May 27 03:41:46.021776 kubelet[2917]: E0527 03:41:46.021599 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-xwqrr" podUID="9c59cb9e-2eb7-4e20-b889-53cd0bb9e4ed" May 27 03:41:46.023842 kubelet[2917]: E0527 03:41:46.022505 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-555bcbc6ff-596vx" podUID="20923581-35ae-477b-83e9-35d75acd3c66" May 27 03:41:52.088149 containerd[1560]: time="2025-05-27T03:41:52.087995890Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ca116aa036eb736891f311278a152a7e5a51b4c41f55c788135f885583d90f6b\" id:\"8a655da1ab15e205f9c13795075cea93270c1c2e2f6f0e6ad0e0609d7c489e38\" pid:9171 exited_at:{seconds:1748317312 nanos:87577966}" May 27 03:41:55.387989 systemd[1]: cri-containerd-ee6cfaeb5878b0d01ca44d402aa7ee5bb813a128765c37e0540c88fc7087f270.scope: Deactivated successfully. May 27 03:41:55.391799 systemd[1]: cri-containerd-ee6cfaeb5878b0d01ca44d402aa7ee5bb813a128765c37e0540c88fc7087f270.scope: Consumed 39.286s CPU time, 127.4M memory peak, 110M read from disk. 
May 27 03:41:55.462790 containerd[1560]: time="2025-05-27T03:41:55.462754716Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6987d0fe81036d50ab8ef5c9659e8d4befdda41d53233178124e5de21b1dc1b4\" id:\"9445238e6daf408de4a98b06a716e6046b29965fb44cc25f8f465ea1d3535d45\" pid:9195 exit_status:1 exited_at:{seconds:1748317315 nanos:461957130}" May 27 03:41:55.516422 containerd[1560]: time="2025-05-27T03:41:55.516360536Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ee6cfaeb5878b0d01ca44d402aa7ee5bb813a128765c37e0540c88fc7087f270\" id:\"ee6cfaeb5878b0d01ca44d402aa7ee5bb813a128765c37e0540c88fc7087f270\" pid:3305 exit_status:1 exited_at:{seconds:1748317315 nanos:515872330}" May 27 03:41:55.529382 containerd[1560]: time="2025-05-27T03:41:55.529294023Z" level=info msg="received exit event container_id:\"ee6cfaeb5878b0d01ca44d402aa7ee5bb813a128765c37e0540c88fc7087f270\" id:\"ee6cfaeb5878b0d01ca44d402aa7ee5bb813a128765c37e0540c88fc7087f270\" pid:3305 exit_status:1 exited_at:{seconds:1748317315 nanos:515872330}" May 27 03:41:55.681351 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ee6cfaeb5878b0d01ca44d402aa7ee5bb813a128765c37e0540c88fc7087f270-rootfs.mount: Deactivated successfully. May 27 03:41:55.866448 kubelet[2917]: E0527 03:41:55.866378 2917 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:58242->10.0.0.2:2379: read: connection timed out" May 27 03:41:56.474501 systemd[1]: cri-containerd-33d7a427bc912c1d8108d17f503f8a08ba85d7bf96901a9d7952ed252cd738f6.scope: Deactivated successfully. May 27 03:41:56.474910 systemd[1]: cri-containerd-33d7a427bc912c1d8108d17f503f8a08ba85d7bf96901a9d7952ed252cd738f6.scope: Consumed 12.937s CPU time, 81M memory peak, 161.3M read from disk. 
May 27 03:41:56.564179 containerd[1560]: time="2025-05-27T03:41:56.564124613Z" level=info msg="received exit event container_id:\"33d7a427bc912c1d8108d17f503f8a08ba85d7bf96901a9d7952ed252cd738f6\" id:\"33d7a427bc912c1d8108d17f503f8a08ba85d7bf96901a9d7952ed252cd738f6\" pid:2775 exit_status:1 exited_at:{seconds:1748317316 nanos:525361311}" May 27 03:41:56.566454 containerd[1560]: time="2025-05-27T03:41:56.566419517Z" level=info msg="TaskExit event in podsandbox handler container_id:\"33d7a427bc912c1d8108d17f503f8a08ba85d7bf96901a9d7952ed252cd738f6\" id:\"33d7a427bc912c1d8108d17f503f8a08ba85d7bf96901a9d7952ed252cd738f6\" pid:2775 exit_status:1 exited_at:{seconds:1748317316 nanos:525361311}" May 27 03:41:56.603664 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-33d7a427bc912c1d8108d17f503f8a08ba85d7bf96901a9d7952ed252cd738f6-rootfs.mount: Deactivated successfully. May 27 03:41:56.716515 kubelet[2917]: I0527 03:41:56.716448 2917 scope.go:117] "RemoveContainer" containerID="03409ea003ada58d3a703b36bc2fc64ad8aa306c2b26978c26c58a234a807518" May 27 03:41:56.724898 kubelet[2917]: I0527 03:41:56.724590 2917 scope.go:117] "RemoveContainer" containerID="ee6cfaeb5878b0d01ca44d402aa7ee5bb813a128765c37e0540c88fc7087f270" May 27 03:41:56.917582 containerd[1560]: time="2025-05-27T03:41:56.916393450Z" level=info msg="CreateContainer within sandbox \"8de8d631426da3d149cb076c0e0725a41c3018fcd8af9972f84377638af2c79e\" for container &ContainerMetadata{Name:tigera-operator,Attempt:2,}" May 27 03:41:56.993711 containerd[1560]: time="2025-05-27T03:41:56.993149169Z" level=info msg="RemoveContainer for \"03409ea003ada58d3a703b36bc2fc64ad8aa306c2b26978c26c58a234a807518\"" May 27 03:41:57.066961 containerd[1560]: time="2025-05-27T03:41:57.066913718Z" level=info msg="RemoveContainer for \"03409ea003ada58d3a703b36bc2fc64ad8aa306c2b26978c26c58a234a807518\" returns successfully" May 27 03:41:57.082926 containerd[1560]: time="2025-05-27T03:41:57.081962503Z" level=info msg="Container 
beae9a3d54bd2d9e6a95acc81a9e7ec8e245e6890bb3cb314cde602794b03b87: CDI devices from CRI Config.CDIDevices: []" May 27 03:41:57.093150 containerd[1560]: time="2025-05-27T03:41:57.093121171Z" level=info msg="CreateContainer within sandbox \"8de8d631426da3d149cb076c0e0725a41c3018fcd8af9972f84377638af2c79e\" for &ContainerMetadata{Name:tigera-operator,Attempt:2,} returns container id \"beae9a3d54bd2d9e6a95acc81a9e7ec8e245e6890bb3cb314cde602794b03b87\"" May 27 03:41:57.103222 containerd[1560]: time="2025-05-27T03:41:57.103183324Z" level=info msg="StartContainer for \"beae9a3d54bd2d9e6a95acc81a9e7ec8e245e6890bb3cb314cde602794b03b87\"" May 27 03:41:57.104335 containerd[1560]: time="2025-05-27T03:41:57.104293846Z" level=info msg="connecting to shim beae9a3d54bd2d9e6a95acc81a9e7ec8e245e6890bb3cb314cde602794b03b87" address="unix:///run/containerd/s/74e43b4eccaf0a15d34e89e42ffd69f65ad8d465a8e94cec3172182594799752" protocol=ttrpc version=3 May 27 03:41:57.158750 systemd[1]: Started cri-containerd-beae9a3d54bd2d9e6a95acc81a9e7ec8e245e6890bb3cb314cde602794b03b87.scope - libcontainer container beae9a3d54bd2d9e6a95acc81a9e7ec8e245e6890bb3cb314cde602794b03b87. 
May 27 03:41:57.212336 containerd[1560]: time="2025-05-27T03:41:57.212269331Z" level=info msg="StartContainer for \"beae9a3d54bd2d9e6a95acc81a9e7ec8e245e6890bb3cb314cde602794b03b87\" returns successfully" May 27 03:41:57.287744 containerd[1560]: time="2025-05-27T03:41:57.287143210Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6987d0fe81036d50ab8ef5c9659e8d4befdda41d53233178124e5de21b1dc1b4\" id:\"c594894246ba0be1ac638b486bf876f4c3af11ee8744f88db2ef8ac7a3ce028d\" pid:9269 exit_status:1 exited_at:{seconds:1748317317 nanos:286443879}" May 27 03:41:57.694540 kubelet[2917]: I0527 03:41:57.694102 2917 scope.go:117] "RemoveContainer" containerID="33d7a427bc912c1d8108d17f503f8a08ba85d7bf96901a9d7952ed252cd738f6" May 27 03:41:57.696745 containerd[1560]: time="2025-05-27T03:41:57.696663188Z" level=info msg="CreateContainer within sandbox \"a3e3014fadbab106ddb47608e8bfd87d94c6c669c6c718507f314f1e3fb803fa\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" May 27 03:41:57.724827 containerd[1560]: time="2025-05-27T03:41:57.724744376Z" level=info msg="Container 2d85c02d18be4bb325c60c65e4e0d5f7a491494267f718bc0a4359d5dbd038b8: CDI devices from CRI Config.CDIDevices: []" May 27 03:41:57.725030 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1042146807.mount: Deactivated successfully. 
May 27 03:41:57.736315 containerd[1560]: time="2025-05-27T03:41:57.736208087Z" level=info msg="CreateContainer within sandbox \"a3e3014fadbab106ddb47608e8bfd87d94c6c669c6c718507f314f1e3fb803fa\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"2d85c02d18be4bb325c60c65e4e0d5f7a491494267f718bc0a4359d5dbd038b8\"" May 27 03:41:57.738182 containerd[1560]: time="2025-05-27T03:41:57.737276060Z" level=info msg="StartContainer for \"2d85c02d18be4bb325c60c65e4e0d5f7a491494267f718bc0a4359d5dbd038b8\"" May 27 03:41:57.740923 containerd[1560]: time="2025-05-27T03:41:57.740816139Z" level=info msg="connecting to shim 2d85c02d18be4bb325c60c65e4e0d5f7a491494267f718bc0a4359d5dbd038b8" address="unix:///run/containerd/s/f38018bb9e1c00c95dd49bfe4c5b31895ee6122aa316af913a54f424357c939f" protocol=ttrpc version=3 May 27 03:41:57.775625 systemd[1]: Started cri-containerd-2d85c02d18be4bb325c60c65e4e0d5f7a491494267f718bc0a4359d5dbd038b8.scope - libcontainer container 2d85c02d18be4bb325c60c65e4e0d5f7a491494267f718bc0a4359d5dbd038b8. 
May 27 03:41:57.860481 containerd[1560]: time="2025-05-27T03:41:57.860390078Z" level=info msg="StartContainer for \"2d85c02d18be4bb325c60c65e4e0d5f7a491494267f718bc0a4359d5dbd038b8\" returns successfully" May 27 03:41:58.025062 kubelet[2917]: E0527 03:41:58.024372 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-555bcbc6ff-596vx" podUID="20923581-35ae-477b-83e9-35d75acd3c66" May 27 03:42:01.020558 kubelet[2917]: E0527 03:42:01.020479 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-xwqrr" podUID="9c59cb9e-2eb7-4e20-b889-53cd0bb9e4ed" May 27 03:42:01.817028 systemd[1]: cri-containerd-389cdffc1ab0eedf86b2f748186cff29bb3b1b73855795f5c53aec09b414f52b.scope: Deactivated successfully. May 27 03:42:01.817467 systemd[1]: cri-containerd-389cdffc1ab0eedf86b2f748186cff29bb3b1b73855795f5c53aec09b414f52b.scope: Consumed 3.305s CPU time, 37M memory peak, 106.9M read from disk. 
May 27 03:42:01.826985 containerd[1560]: time="2025-05-27T03:42:01.826835278Z" level=info msg="TaskExit event in podsandbox handler container_id:\"389cdffc1ab0eedf86b2f748186cff29bb3b1b73855795f5c53aec09b414f52b\" id:\"389cdffc1ab0eedf86b2f748186cff29bb3b1b73855795f5c53aec09b414f52b\" pid:2749 exit_status:1 exited_at:{seconds:1748317321 nanos:823647879}" May 27 03:42:01.827630 containerd[1560]: time="2025-05-27T03:42:01.827028280Z" level=info msg="received exit event container_id:\"389cdffc1ab0eedf86b2f748186cff29bb3b1b73855795f5c53aec09b414f52b\" id:\"389cdffc1ab0eedf86b2f748186cff29bb3b1b73855795f5c53aec09b414f52b\" pid:2749 exit_status:1 exited_at:{seconds:1748317321 nanos:823647879}" May 27 03:42:01.846035 kubelet[2917]: E0527 03:42:01.837974 2917 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:58094->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4344-0-0-e-876c439243.18434554e5f1566f kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4344-0-0-e-876c439243,UID:790d5218cc4954efd5205153c6b2d4a4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4344-0-0-e-876c439243,},FirstTimestamp:2025-05-27 03:41:51.287252591 +0000 UTC m=+1077.399053259,LastTimestamp:2025-05-27 03:41:51.287252591 +0000 UTC m=+1077.399053259,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4344-0-0-e-876c439243,}" May 27 03:42:01.872529 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-389cdffc1ab0eedf86b2f748186cff29bb3b1b73855795f5c53aec09b414f52b-rootfs.mount: Deactivated successfully.