May 27 17:41:13.589529 kernel: Linux version 6.12.30-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Tue May 27 15:32:02 -00 2025
May 27 17:41:13.589560 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=daa3e2d55cc4a7ff0ec15aa9bb0c07df9999cb4e3041f3adad1b1101efdea101
May 27 17:41:13.589572 kernel: BIOS-provided physical RAM map:
May 27 17:41:13.589580 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved
May 27 17:41:13.589588 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable
May 27 17:41:13.589596 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved
May 27 17:41:13.589615 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable
May 27 17:41:13.589624 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved
May 27 17:41:13.589632 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bd326fff] usable
May 27 17:41:13.589641 kernel: BIOS-e820: [mem 0x00000000bd327000-0x00000000bd32efff] ACPI data
May 27 17:41:13.589649 kernel: BIOS-e820: [mem 0x00000000bd32f000-0x00000000bf8ecfff] usable
May 27 17:41:13.589657 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bfb6cfff] reserved
May 27 17:41:13.589666 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data
May 27 17:41:13.589674 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS
May 27 17:41:13.589687 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable
May 27 17:41:13.589696 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved
May 27 17:41:13.589705 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable
May 27 17:41:13.589714 kernel: NX (Execute Disable) protection: active
May 27 17:41:13.589723 kernel: APIC: Static calls initialized
May 27 17:41:13.589733 kernel: efi: EFI v2.7 by EDK II
May 27 17:41:13.589742 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9e8000 RNG=0xbfb73018 TPMEventLog=0xbd327018
May 27 17:41:13.589752 kernel: random: crng init done
May 27 17:41:13.589763 kernel: secureboot: Secure boot disabled
May 27 17:41:13.589772 kernel: SMBIOS 2.4 present.
May 27 17:41:13.589782 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 02/12/2025
May 27 17:41:13.589791 kernel: DMI: Memory slots populated: 1/1
May 27 17:41:13.589800 kernel: Hypervisor detected: KVM
May 27 17:41:13.589809 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
May 27 17:41:13.589818 kernel: kvm-clock: using sched offset of 14653243300 cycles
May 27 17:41:13.589828 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
May 27 17:41:13.589838 kernel: tsc: Detected 2299.998 MHz processor
May 27 17:41:13.589848 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 27 17:41:13.589862 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 27 17:41:13.589871 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000
May 27 17:41:13.589881 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs
May 27 17:41:13.589890 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 27 17:41:13.589900 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000
May 27 17:41:13.589909 kernel: Using GB pages for direct mapping
May 27 17:41:13.589919 kernel: ACPI: Early table checksum verification disabled
May 27 17:41:13.589929 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google)
May 27 17:41:13.589944 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013)
May 27 17:41:13.589954 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001)
May 27 17:41:13.589964 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001)
May 27 17:41:13.589974 kernel: ACPI: FACS 0x00000000BFBF2000 000040
May 27 17:41:13.589984 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20240322)
May 27 17:41:13.589994 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001)
May 27 17:41:13.590006 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001)
May 27 17:41:13.590016 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001)
May 27 17:41:13.590026 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001)
May 27 17:41:13.590037 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001)
May 27 17:41:13.590047 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3]
May 27 17:41:13.590057 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63]
May 27 17:41:13.590067 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f]
May 27 17:41:13.590077 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315]
May 27 17:41:13.590086 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033]
May 27 17:41:13.590099 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7]
May 27 17:41:13.590108 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075]
May 27 17:41:13.590118 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f]
May 27 17:41:13.590128 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027]
May 27 17:41:13.590138 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
May 27 17:41:13.590148 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff]
May 27 17:41:13.590158 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff]
May 27 17:41:13.590168 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00001000-0xbfffffff]
May 27 17:41:13.590179 kernel: NUMA: Node 0 [mem 0x00001000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00001000-0x21fffffff]
May 27 17:41:13.590191 kernel: NODE_DATA(0) allocated [mem 0x21fff6dc0-0x21fffdfff]
May 27 17:41:13.590201 kernel: Zone ranges:
May 27 17:41:13.590210 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 27 17:41:13.590220 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
May 27 17:41:13.590230 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff]
May 27 17:41:13.590240 kernel: Device empty
May 27 17:41:13.590250 kernel: Movable zone start for each node
May 27 17:41:13.590260 kernel: Early memory node ranges
May 27 17:41:13.590291 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff]
May 27 17:41:13.590310 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff]
May 27 17:41:13.590326 kernel: node 0: [mem 0x0000000000100000-0x00000000bd326fff]
May 27 17:41:13.590341 kernel: node 0: [mem 0x00000000bd32f000-0x00000000bf8ecfff]
May 27 17:41:13.590357 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff]
May 27 17:41:13.590374 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff]
May 27 17:41:13.590391 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff]
May 27 17:41:13.590409 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 27 17:41:13.590426 kernel: On node 0, zone DMA: 11 pages in unavailable ranges
May 27 17:41:13.590444 kernel: On node 0, zone DMA: 104 pages in unavailable ranges
May 27 17:41:13.590466 kernel: On node 0, zone DMA32: 8 pages in unavailable ranges
May 27 17:41:13.590483 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
May 27 17:41:13.590502 kernel: On node 0, zone Normal: 32 pages in unavailable ranges
May 27 17:41:13.590518 kernel: ACPI: PM-Timer IO Port: 0xb008
May 27 17:41:13.590535 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
May 27 17:41:13.590551 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
May 27 17:41:13.590568 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
May 27 17:41:13.590585 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 27 17:41:13.590615 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
May 27 17:41:13.590636 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
May 27 17:41:13.590652 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 27 17:41:13.590668 kernel: CPU topo: Max. logical packages: 1
May 27 17:41:13.590684 kernel: CPU topo: Max. logical dies: 1
May 27 17:41:13.590701 kernel: CPU topo: Max. dies per package: 1
May 27 17:41:13.590718 kernel: CPU topo: Max. threads per core: 2
May 27 17:41:13.590736 kernel: CPU topo: Num. cores per package: 1
May 27 17:41:13.590753 kernel: CPU topo: Num. threads per package: 2
May 27 17:41:13.590770 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
May 27 17:41:13.590913 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
May 27 17:41:13.590935 kernel: Booting paravirtualized kernel on KVM
May 27 17:41:13.590952 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 27 17:41:13.590970 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
May 27 17:41:13.590988 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
May 27 17:41:13.591004 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
May 27 17:41:13.591020 kernel: pcpu-alloc: [0] 0 1
May 27 17:41:13.591036 kernel: kvm-guest: PV spinlocks enabled
May 27 17:41:13.591053 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
May 27 17:41:13.591072 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=daa3e2d55cc4a7ff0ec15aa9bb0c07df9999cb4e3041f3adad1b1101efdea101
May 27 17:41:13.591094 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 27 17:41:13.591111 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
May 27 17:41:13.591128 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 27 17:41:13.591146 kernel: Fallback order for Node 0: 0
May 27 17:41:13.591163 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1965138
May 27 17:41:13.591180 kernel: Policy zone: Normal
May 27 17:41:13.591197 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 27 17:41:13.591215 kernel: software IO TLB: area num 2.
May 27 17:41:13.591247 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
May 27 17:41:13.591283 kernel: Kernel/User page tables isolation: enabled
May 27 17:41:13.591307 kernel: ftrace: allocating 40081 entries in 157 pages
May 27 17:41:13.591325 kernel: ftrace: allocated 157 pages with 5 groups
May 27 17:41:13.591343 kernel: Dynamic Preempt: voluntary
May 27 17:41:13.591361 kernel: rcu: Preemptible hierarchical RCU implementation.
May 27 17:41:13.591380 kernel: rcu: RCU event tracing is enabled.
May 27 17:41:13.591398 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
May 27 17:41:13.591421 kernel: Trampoline variant of Tasks RCU enabled.
May 27 17:41:13.591440 kernel: Rude variant of Tasks RCU enabled.
May 27 17:41:13.591459 kernel: Tracing variant of Tasks RCU enabled.
May 27 17:41:13.591477 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 27 17:41:13.591496 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
May 27 17:41:13.591515 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 27 17:41:13.591546 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 27 17:41:13.591566 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 27 17:41:13.591588 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
May 27 17:41:13.591614 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 27 17:41:13.591633 kernel: Console: colour dummy device 80x25
May 27 17:41:13.591651 kernel: printk: legacy console [ttyS0] enabled
May 27 17:41:13.591669 kernel: ACPI: Core revision 20240827
May 27 17:41:13.591687 kernel: APIC: Switch to symmetric I/O mode setup
May 27 17:41:13.591706 kernel: x2apic enabled
May 27 17:41:13.591725 kernel: APIC: Switched APIC routing to: physical x2apic
May 27 17:41:13.591743 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1
May 27 17:41:13.591761 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
May 27 17:41:13.591784 kernel: Calibrating delay loop (skipped) preset value.. 4599.99 BogoMIPS (lpj=2299998)
May 27 17:41:13.591802 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024
May 27 17:41:13.591821 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4
May 27 17:41:13.591839 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 27 17:41:13.591857 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall and VM exit
May 27 17:41:13.591876 kernel: Spectre V2 : Mitigation: IBRS
May 27 17:41:13.591894 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
May 27 17:41:13.591913 kernel: RETBleed: Mitigation: IBRS
May 27 17:41:13.591935 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
May 27 17:41:13.591953 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl
May 27 17:41:13.591971 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
May 27 17:41:13.591990 kernel: MDS: Mitigation: Clear CPU buffers
May 27 17:41:13.592008 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
May 27 17:41:13.592027 kernel: ITS: Mitigation: Aligned branch/return thunks
May 27 17:41:13.592046 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
May 27 17:41:13.592064 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
May 27 17:41:13.592082 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
May 27 17:41:13.592104 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
May 27 17:41:13.592122 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
May 27 17:41:13.592140 kernel: Freeing SMP alternatives memory: 32K
May 27 17:41:13.592159 kernel: pid_max: default: 32768 minimum: 301
May 27 17:41:13.592315 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
May 27 17:41:13.592334 kernel: landlock: Up and running.
May 27 17:41:13.592352 kernel: SELinux: Initializing.
May 27 17:41:13.592370 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
May 27 17:41:13.592389 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
May 27 17:41:13.592412 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0)
May 27 17:41:13.592431 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only.
May 27 17:41:13.592448 kernel: signal: max sigframe size: 1776
May 27 17:41:13.592467 kernel: rcu: Hierarchical SRCU implementation.
May 27 17:41:13.592486 kernel: rcu: Max phase no-delay instances is 400.
May 27 17:41:13.592504 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
May 27 17:41:13.592521 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
May 27 17:41:13.592539 kernel: smp: Bringing up secondary CPUs ...
May 27 17:41:13.592557 kernel: smpboot: x86: Booting SMP configuration:
May 27 17:41:13.592578 kernel: .... node #0, CPUs: #1
May 27 17:41:13.592596 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
May 27 17:41:13.592627 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
May 27 17:41:13.592645 kernel: smp: Brought up 1 node, 2 CPUs
May 27 17:41:13.592664 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS)
May 27 17:41:13.592682 kernel: Memory: 7564260K/7860552K available (14336K kernel code, 2430K rwdata, 9952K rodata, 54416K init, 2552K bss, 290712K reserved, 0K cma-reserved)
May 27 17:41:13.592700 kernel: devtmpfs: initialized
May 27 17:41:13.592718 kernel: x86/mm: Memory block size: 128MB
May 27 17:41:13.592739 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes)
May 27 17:41:13.592757 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 27 17:41:13.592775 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
May 27 17:41:13.592793 kernel: pinctrl core: initialized pinctrl subsystem
May 27 17:41:13.592811 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 27 17:41:13.592829 kernel: audit: initializing netlink subsys (disabled)
May 27 17:41:13.592847 kernel: audit: type=2000 audit(1748367669.403:1): state=initialized audit_enabled=0 res=1
May 27 17:41:13.592865 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 27 17:41:13.592883 kernel: thermal_sys: Registered thermal governor 'user_space'
May 27 17:41:13.592905 kernel: cpuidle: using governor menu
May 27 17:41:13.592923 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 27 17:41:13.592941 kernel: dca service started, version 1.12.1
May 27 17:41:13.592960 kernel: PCI: Using configuration type 1 for base access
May 27 17:41:13.592978 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 27 17:41:13.592997 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 27 17:41:13.593015 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
May 27 17:41:13.593033 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 27 17:41:13.593052 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
May 27 17:41:13.593073 kernel: ACPI: Added _OSI(Module Device)
May 27 17:41:13.593091 kernel: ACPI: Added _OSI(Processor Device)
May 27 17:41:13.593110 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 27 17:41:13.593128 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 27 17:41:13.593146 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
May 27 17:41:13.593163 kernel: ACPI: Interpreter enabled
May 27 17:41:13.593180 kernel: ACPI: PM: (supports S0 S3 S5)
May 27 17:41:13.593197 kernel: ACPI: Using IOAPIC for interrupt routing
May 27 17:41:13.593215 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 27 17:41:13.593235 kernel: PCI: Ignoring E820 reservations for host bridge windows
May 27 17:41:13.593252 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
May 27 17:41:13.593288 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 27 17:41:13.593546 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
May 27 17:41:13.593855 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
May 27 17:41:13.594043 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
May 27 17:41:13.594066 kernel: PCI host bridge to bus 0000:00
May 27 17:41:13.594243 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
May 27 17:41:13.594453 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
May 27 17:41:13.594639 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
May 27 17:41:13.594812 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window]
May 27 17:41:13.594982 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 27 17:41:13.595190 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
May 27 17:41:13.595422 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100 conventional PCI endpoint
May 27 17:41:13.595631 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
May 27 17:41:13.595820 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
May 27 17:41:13.596025 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000 conventional PCI endpoint
May 27 17:41:13.596221 kernel: pci 0000:00:03.0: BAR 0 [io 0xc040-0xc07f]
May 27 17:41:13.599008 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc0001000-0xc000107f]
May 27 17:41:13.599215 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
May 27 17:41:13.599432 kernel: pci 0000:00:04.0: BAR 0 [io 0xc000-0xc03f]
May 27 17:41:13.599619 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc0000000-0xc000007f]
May 27 17:41:13.599810 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
May 27 17:41:13.599988 kernel: pci 0000:00:05.0: BAR 0 [io 0xc080-0xc09f]
May 27 17:41:13.600165 kernel: pci 0000:00:05.0: BAR 1 [mem 0xc0002000-0xc000203f]
May 27 17:41:13.600187 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
May 27 17:41:13.600205 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
May 27 17:41:13.600227 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
May 27 17:41:13.600245 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
May 27 17:41:13.600263 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
May 27 17:41:13.600295 kernel: iommu: Default domain type: Translated
May 27 17:41:13.600312 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 27 17:41:13.600330 kernel: efivars: Registered efivars operations
May 27 17:41:13.600348 kernel: PCI: Using ACPI for IRQ routing
May 27 17:41:13.600367 kernel: PCI: pci_cache_line_size set to 64 bytes
May 27 17:41:13.600384 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff]
May 27 17:41:13.600405 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff]
May 27 17:41:13.600422 kernel: e820: reserve RAM buffer [mem 0xbd327000-0xbfffffff]
May 27 17:41:13.600439 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff]
May 27 17:41:13.600457 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff]
May 27 17:41:13.600473 kernel: vgaarb: loaded
May 27 17:41:13.600491 kernel: clocksource: Switched to clocksource kvm-clock
May 27 17:41:13.600508 kernel: VFS: Disk quotas dquot_6.6.0
May 27 17:41:13.600526 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 27 17:41:13.600544 kernel: pnp: PnP ACPI init
May 27 17:41:13.600565 kernel: pnp: PnP ACPI: found 7 devices
May 27 17:41:13.600582 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 27 17:41:13.600600 kernel: NET: Registered PF_INET protocol family
May 27 17:41:13.600624 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
May 27 17:41:13.600642 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
May 27 17:41:13.600660 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 27 17:41:13.600677 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 27 17:41:13.600695 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
May 27 17:41:13.600712 kernel: TCP: Hash tables configured (established 65536 bind 65536)
May 27 17:41:13.600733 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
May 27 17:41:13.600751 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
May 27 17:41:13.600768 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 27 17:41:13.600786 kernel: NET: Registered PF_XDP protocol family
May 27 17:41:13.600951 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
May 27 17:41:13.601114 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
May 27 17:41:13.601284 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
May 27 17:41:13.601445 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window]
May 27 17:41:13.601633 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
May 27 17:41:13.601656 kernel: PCI: CLS 0 bytes, default 64
May 27 17:41:13.601675 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
May 27 17:41:13.601693 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB)
May 27 17:41:13.601711 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
May 27 17:41:13.601729 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
May 27 17:41:13.601747 kernel: clocksource: Switched to clocksource tsc
May 27 17:41:13.601765 kernel: Initialise system trusted keyrings
May 27 17:41:13.601786 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
May 27 17:41:13.601803 kernel: Key type asymmetric registered
May 27 17:41:13.601820 kernel: Asymmetric key parser 'x509' registered
May 27 17:41:13.601838 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
May 27 17:41:13.601855 kernel: io scheduler mq-deadline registered
May 27 17:41:13.601873 kernel: io scheduler kyber registered
May 27 17:41:13.601890 kernel: io scheduler bfq registered
May 27 17:41:13.601908 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 27 17:41:13.601926 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
May 27 17:41:13.602105 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver
May 27 17:41:13.602127 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
May 27 17:41:13.602318 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver
May 27 17:41:13.602340 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
May 27 17:41:13.602512 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver
May 27 17:41:13.602534 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 27 17:41:13.602551 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 27 17:41:13.602569 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
May 27 17:41:13.602587 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A
May 27 17:41:13.602616 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A
May 27 17:41:13.602796 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0)
May 27 17:41:13.602825 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
May 27 17:41:13.602843 kernel: i8042: Warning: Keylock active
May 27 17:41:13.602860 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
May 27 17:41:13.602878 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
May 27 17:41:13.603059 kernel: rtc_cmos 00:00: RTC can wake from S4
May 27 17:41:13.603230 kernel: rtc_cmos 00:00: registered as rtc0
May 27 17:41:13.605456 kernel: rtc_cmos 00:00: setting system clock to 2025-05-27T17:41:12 UTC (1748367672)
May 27 17:41:13.605640 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
May 27 17:41:13.605663 kernel: intel_pstate: CPU model not supported
May 27 17:41:13.605682 kernel: pstore: Using crash dump compression: deflate
May 27 17:41:13.605699 kernel: pstore: Registered efi_pstore as persistent store backend
May 27 17:41:13.605717 kernel: NET: Registered PF_INET6 protocol family
May 27 17:41:13.605735 kernel: Segment Routing with IPv6
May 27 17:41:13.605753 kernel: In-situ OAM (IOAM) with IPv6
May 27 17:41:13.605775 kernel: NET: Registered PF_PACKET protocol family
May 27 17:41:13.605793 kernel: Key type dns_resolver registered
May 27 17:41:13.605810 kernel: IPI shorthand broadcast: enabled
May 27 17:41:13.605828 kernel: sched_clock: Marking stable (3459004123, 136354039)->(3604816215, -9458053)
May 27 17:41:13.605845 kernel: registered taskstats version 1
May 27 17:41:13.605862 kernel: Loading compiled-in X.509 certificates
May 27 17:41:13.605880 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.30-flatcar: 9507e5c390e18536b38d58c90da64baf0ac9837c'
May 27 17:41:13.605898 kernel: Demotion targets for Node 0: null
May 27 17:41:13.605916 kernel: Key type .fscrypt registered
May 27 17:41:13.605938 kernel: Key type fscrypt-provisioning registered
May 27 17:41:13.605956 kernel: ima: Allocated hash algorithm: sha1
May 27 17:41:13.605974 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
May 27 17:41:13.605993 kernel: ima: No architecture policies found
May 27 17:41:13.606011 kernel: clk: Disabling unused clocks
May 27 17:41:13.606030 kernel: Warning: unable to open an initial console.
May 27 17:41:13.606048 kernel: Freeing unused kernel image (initmem) memory: 54416K
May 27 17:41:13.606067 kernel: Write protecting the kernel read-only data: 24576k
May 27 17:41:13.606089 kernel: Freeing unused kernel image (rodata/data gap) memory: 288K
May 27 17:41:13.606107 kernel: Run /init as init process
May 27 17:41:13.606126 kernel: with arguments:
May 27 17:41:13.606144 kernel: /init
May 27 17:41:13.606160 kernel: with environment:
May 27 17:41:13.606178 kernel: HOME=/
May 27 17:41:13.606196 kernel: TERM=linux
May 27 17:41:13.606214 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 27 17:41:13.606234 systemd[1]: Successfully made /usr/ read-only.
May 27 17:41:13.606261 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 27 17:41:13.606296 systemd[1]: Detected virtualization google.
May 27 17:41:13.606315 systemd[1]: Detected architecture x86-64.
May 27 17:41:13.606334 systemd[1]: Running in initrd.
May 27 17:41:13.606352 systemd[1]: No hostname configured, using default hostname.
May 27 17:41:13.606372 systemd[1]: Hostname set to .
May 27 17:41:13.606391 systemd[1]: Initializing machine ID from random generator.
May 27 17:41:13.606415 systemd[1]: Queued start job for default target initrd.target.
May 27 17:41:13.606452 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 27 17:41:13.606475 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 27 17:41:13.606497 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 27 17:41:13.606516 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 27 17:41:13.606537 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 27 17:41:13.606562 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 27 17:41:13.606584 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 27 17:41:13.606610 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 27 17:41:13.606631 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 27 17:41:13.606651 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 27 17:41:13.606671 systemd[1]: Reached target paths.target - Path Units.
May 27 17:41:13.606691 systemd[1]: Reached target slices.target - Slice Units.
May 27 17:41:13.606715 systemd[1]: Reached target swap.target - Swaps.
May 27 17:41:13.606735 systemd[1]: Reached target timers.target - Timer Units.
May 27 17:41:13.606754 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 27 17:41:13.606775 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 27 17:41:13.606795 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 27 17:41:13.606815 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
May 27 17:41:13.606835 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 27 17:41:13.606855 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 27 17:41:13.606880 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 27 17:41:13.606900 systemd[1]: Reached target sockets.target - Socket Units.
May 27 17:41:13.606920 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 27 17:41:13.606941 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 27 17:41:13.606961 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 27 17:41:13.606981 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
May 27 17:41:13.607001 systemd[1]: Starting systemd-fsck-usr.service...
May 27 17:41:13.607021 systemd[1]: Starting systemd-journald.service - Journal Service...
May 27 17:41:13.607045 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 27 17:41:13.607098 systemd-journald[208]: Collecting audit messages is disabled.
May 27 17:41:13.607142 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 27 17:41:13.607163 systemd-journald[208]: Journal started
May 27 17:41:13.607207 systemd-journald[208]: Runtime Journal (/run/log/journal/1b0ce5680a2b4a748724226378f71d0f) is 8M, max 148.9M, 140.9M free.
May 27 17:41:13.627683 systemd[1]: Started systemd-journald.service - Journal Service.
May 27 17:41:13.628468 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 27 17:41:13.629112 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 27 17:41:13.629655 systemd[1]: Finished systemd-fsck-usr.service.
May 27 17:41:13.632628 systemd-modules-load[210]: Inserted module 'overlay'
May 27 17:41:13.633549 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 27 17:41:13.638422 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 27 17:41:13.669578 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 27 17:41:13.743641 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 27 17:41:13.743683 kernel: Bridge firewalling registered
May 27 17:41:13.674933 systemd-tmpfiles[220]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
May 27 17:41:13.705319 systemd-modules-load[210]: Inserted module 'br_netfilter'
May 27 17:41:13.764825 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 27 17:41:13.786681 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 27 17:41:13.803598 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 27 17:41:13.824724 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 27 17:41:13.843237 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 27 17:41:13.865344 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 27 17:41:13.910367 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 27 17:41:13.917452 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 27 17:41:13.921620 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 27 17:41:13.935584 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 27 17:41:13.958574 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 27 17:41:13.969198 systemd-resolved[235]: Positive Trust Anchors: May 27 17:41:13.969207 systemd-resolved[235]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 27 17:41:13.969250 systemd-resolved[235]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 27 17:41:13.972752 systemd-resolved[235]: Defaulting to hostname 'linux'. May 27 17:41:13.974059 systemd[1]: Started systemd-resolved.service - Network Name Resolution. 
May 27 17:41:14.078410 dracut-cmdline[247]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=daa3e2d55cc4a7ff0ec15aa9bb0c07df9999cb4e3041f3adad1b1101efdea101 May 27 17:41:13.982497 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 27 17:41:14.145304 kernel: SCSI subsystem initialized May 27 17:41:14.162310 kernel: Loading iSCSI transport class v2.0-870. May 27 17:41:14.178304 kernel: iscsi: registered transport (tcp) May 27 17:41:14.210153 kernel: iscsi: registered transport (qla4xxx) May 27 17:41:14.210216 kernel: QLogic iSCSI HBA Driver May 27 17:41:14.232524 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 27 17:41:14.266056 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 27 17:41:14.267911 systemd[1]: Reached target network-pre.target - Preparation for Network. May 27 17:41:14.341873 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 27 17:41:14.358855 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 27 17:41:14.432307 kernel: raid6: avx2x4 gen() 18295 MB/s May 27 17:41:14.453306 kernel: raid6: avx2x2 gen() 18298 MB/s May 27 17:41:14.479294 kernel: raid6: avx2x1 gen() 14361 MB/s May 27 17:41:14.479330 kernel: raid6: using algorithm avx2x2 gen() 18298 MB/s May 27 17:41:14.506360 kernel: raid6: .... 
xor() 18626 MB/s, rmw enabled May 27 17:41:14.506411 kernel: raid6: using avx2x2 recovery algorithm May 27 17:41:14.534296 kernel: xor: automatically using best checksumming function avx May 27 17:41:14.721310 kernel: Btrfs loaded, zoned=no, fsverity=no May 27 17:41:14.730180 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 27 17:41:14.741200 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 27 17:41:14.772524 systemd-udevd[455]: Using default interface naming scheme 'v255'. May 27 17:41:14.781524 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 27 17:41:14.792456 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 27 17:41:14.839312 dracut-pre-trigger[465]: rd.md=0: removing MD RAID activation May 27 17:41:14.870051 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 27 17:41:14.888791 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 27 17:41:14.993920 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 27 17:41:15.007448 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 27 17:41:15.101363 kernel: virtio_scsi virtio0: 1/0/0 default/read/poll queues May 27 17:41:15.101646 kernel: cryptd: max_cpu_qlen set to 1000 May 27 17:41:15.113533 kernel: scsi host0: Virtio SCSI HBA May 27 17:41:15.127535 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6 May 27 17:41:15.175291 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 May 27 17:41:15.191676 kernel: AES CTR mode by8 optimization enabled May 27 17:41:15.193427 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
May 27 17:41:15.230393 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB) May 27 17:41:15.230715 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks May 27 17:41:15.193798 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 27 17:41:15.244386 kernel: sd 0:0:1:0: [sda] Write Protect is off May 27 17:41:15.244668 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08 May 27 17:41:15.261299 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA May 27 17:41:15.264486 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 27 17:41:15.318404 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 27 17:41:15.318438 kernel: GPT:17805311 != 25165823 May 27 17:41:15.318462 kernel: GPT:Alternate GPT header not at the end of the disk. May 27 17:41:15.318484 kernel: GPT:17805311 != 25165823 May 27 17:41:15.318507 kernel: GPT: Use GNU Parted to correct GPT errors. May 27 17:41:15.318531 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 27 17:41:15.318556 kernel: sd 0:0:1:0: [sda] Attached SCSI disk May 27 17:41:15.300035 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 27 17:41:15.336281 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. May 27 17:41:15.395573 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM. May 27 17:41:15.396147 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 27 17:41:15.425638 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 27 17:41:15.465765 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. May 27 17:41:15.496849 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT. 
May 27 17:41:15.525455 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A. May 27 17:41:15.535368 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A. May 27 17:41:15.558414 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 27 17:41:15.578368 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 27 17:41:15.598356 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 27 17:41:15.616325 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 27 17:41:15.625394 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 27 17:41:15.664725 disk-uuid[609]: Primary Header is updated. May 27 17:41:15.664725 disk-uuid[609]: Secondary Entries is updated. May 27 17:41:15.664725 disk-uuid[609]: Secondary Header is updated. May 27 17:41:15.689453 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 27 17:41:15.679805 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 27 17:41:15.715314 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 27 17:41:16.732324 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 27 17:41:16.732408 disk-uuid[610]: The operation has completed successfully. May 27 17:41:16.807721 systemd[1]: disk-uuid.service: Deactivated successfully. May 27 17:41:16.807868 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 27 17:41:16.856857 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 27 17:41:16.888026 sh[631]: Success May 27 17:41:16.923795 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
May 27 17:41:16.923865 kernel: device-mapper: uevent: version 1.0.3 May 27 17:41:16.924326 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev May 27 17:41:16.950294 kernel: device-mapper: verity: sha256 using shash "sha256-avx2" May 27 17:41:17.023984 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 27 17:41:17.028372 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 27 17:41:17.057904 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. May 27 17:41:17.096790 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' May 27 17:41:17.096823 kernel: BTRFS: device fsid 7caef027-0915-4c01-a3d5-28eff70f7ebd devid 1 transid 39 /dev/mapper/usr (254:0) scanned by mount (643) May 27 17:41:17.114577 kernel: BTRFS info (device dm-0): first mount of filesystem 7caef027-0915-4c01-a3d5-28eff70f7ebd May 27 17:41:17.114626 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm May 27 17:41:17.114652 kernel: BTRFS info (device dm-0): using free-space-tree May 27 17:41:17.145035 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 27 17:41:17.145712 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. May 27 17:41:17.158654 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 27 17:41:17.159540 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 27 17:41:17.183184 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
May 27 17:41:17.239297 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 (8:6) scanned by mount (666) May 27 17:41:17.257125 kernel: BTRFS info (device sda6): first mount of filesystem be856aed-e34b-4b7b-be8a-0716b27db212 May 27 17:41:17.257174 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm May 27 17:41:17.257200 kernel: BTRFS info (device sda6): using free-space-tree May 27 17:41:17.282307 kernel: BTRFS info (device sda6): last unmount of filesystem be856aed-e34b-4b7b-be8a-0716b27db212 May 27 17:41:17.283689 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 27 17:41:17.293768 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 27 17:41:17.397036 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 27 17:41:17.423065 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 27 17:41:17.545743 systemd-networkd[812]: lo: Link UP May 27 17:41:17.545757 systemd-networkd[812]: lo: Gained carrier May 27 17:41:17.549647 systemd-networkd[812]: Enumeration completed May 27 17:41:17.555102 ignition[721]: Ignition 2.21.0 May 27 17:41:17.550394 systemd[1]: Started systemd-networkd.service - Network Configuration. May 27 17:41:17.555110 ignition[721]: Stage: fetch-offline May 27 17:41:17.551063 systemd-networkd[812]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 27 17:41:17.555145 ignition[721]: no configs at "/usr/lib/ignition/base.d" May 27 17:41:17.551071 systemd-networkd[812]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
May 27 17:41:17.555155 ignition[721]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" May 27 17:41:17.553620 systemd-networkd[812]: eth0: Link UP May 27 17:41:17.555241 ignition[721]: parsed url from cmdline: "" May 27 17:41:17.553627 systemd-networkd[812]: eth0: Gained carrier May 27 17:41:17.555245 ignition[721]: no config URL provided May 27 17:41:17.553643 systemd-networkd[812]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 27 17:41:17.555254 ignition[721]: reading system config file "/usr/lib/ignition/user.ign" May 27 17:41:17.568746 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 27 17:41:17.555282 ignition[721]: no config at "/usr/lib/ignition/user.ign" May 27 17:41:17.570363 systemd-networkd[812]: eth0: DHCPv4 address 10.128.0.9/32, gateway 10.128.0.1 acquired from 169.254.169.254 May 27 17:41:17.555293 ignition[721]: failed to fetch config: resource requires networking May 27 17:41:17.584994 systemd[1]: Reached target network.target - Network. May 27 17:41:17.555517 ignition[721]: Ignition finished successfully May 27 17:41:17.597425 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... May 27 17:41:17.645244 ignition[822]: Ignition 2.21.0 May 27 17:41:17.656187 unknown[822]: fetched base config from "system" May 27 17:41:17.645252 ignition[822]: Stage: fetch May 27 17:41:17.656195 unknown[822]: fetched base config from "system" May 27 17:41:17.645466 ignition[822]: no configs at "/usr/lib/ignition/base.d" May 27 17:41:17.656200 unknown[822]: fetched user config from "gcp" May 27 17:41:17.645478 ignition[822]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" May 27 17:41:17.659016 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). May 27 17:41:17.645588 ignition[822]: parsed url from cmdline: "" May 27 17:41:17.676085 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
May 27 17:41:17.645593 ignition[822]: no config URL provided May 27 17:41:17.720828 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 27 17:41:17.645600 ignition[822]: reading system config file "/usr/lib/ignition/user.ign" May 27 17:41:17.747114 systemd[1]: Starting ignition-disks.service - Ignition (disks)... May 27 17:41:17.645610 ignition[822]: no config at "/usr/lib/ignition/user.ign" May 27 17:41:17.778494 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 27 17:41:17.645646 ignition[822]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1 May 27 17:41:17.789518 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 27 17:41:17.649918 ignition[822]: GET result: OK May 27 17:41:17.806384 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 27 17:41:17.650139 ignition[822]: parsing config with SHA512: 7855fac070f3b83b426df613dc871e20fee16b33ac5b78a70076e5f1524fbb37daef2fa643b7539dae92a137abfcf5f36b74cd4bcdc84b6ea2c76bfb1c57e031 May 27 17:41:17.822389 systemd[1]: Reached target local-fs.target - Local File Systems. May 27 17:41:17.656790 ignition[822]: fetch: fetch complete May 27 17:41:17.838401 systemd[1]: Reached target sysinit.target - System Initialization. May 27 17:41:17.656797 ignition[822]: fetch: fetch passed May 27 17:41:17.851389 systemd[1]: Reached target basic.target - Basic System. May 27 17:41:17.656847 ignition[822]: Ignition finished successfully May 27 17:41:17.865458 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
May 27 17:41:17.713642 ignition[829]: Ignition 2.21.0 May 27 17:41:17.713650 ignition[829]: Stage: kargs May 27 17:41:17.713810 ignition[829]: no configs at "/usr/lib/ignition/base.d" May 27 17:41:17.713821 ignition[829]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" May 27 17:41:17.714898 ignition[829]: kargs: kargs passed May 27 17:41:17.714948 ignition[829]: Ignition finished successfully May 27 17:41:17.776491 ignition[836]: Ignition 2.21.0 May 27 17:41:17.776499 ignition[836]: Stage: disks May 27 17:41:17.776635 ignition[836]: no configs at "/usr/lib/ignition/base.d" May 27 17:41:17.776646 ignition[836]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" May 27 17:41:17.777459 ignition[836]: disks: disks passed May 27 17:41:17.777510 ignition[836]: Ignition finished successfully May 27 17:41:17.925756 systemd-fsck[844]: ROOT: clean, 15/1628000 files, 120826/1617920 blocks May 27 17:41:18.109146 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 27 17:41:18.110901 systemd[1]: Mounting sysroot.mount - /sysroot... May 27 17:41:18.392292 kernel: EXT4-fs (sda9): mounted filesystem bf93e767-f532-4480-b210-a196f7ac181e r/w with ordered data mode. Quota mode: none. May 27 17:41:18.393774 systemd[1]: Mounted sysroot.mount - /sysroot. May 27 17:41:18.394542 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 27 17:41:18.408280 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 27 17:41:18.438673 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... 
May 27 17:41:18.484420 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 (8:6) scanned by mount (852) May 27 17:41:18.484470 kernel: BTRFS info (device sda6): first mount of filesystem be856aed-e34b-4b7b-be8a-0716b27db212 May 27 17:41:18.484495 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm May 27 17:41:18.484518 kernel: BTRFS info (device sda6): using free-space-tree May 27 17:41:18.454013 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. May 27 17:41:18.454091 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 27 17:41:18.454130 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 27 17:41:18.501479 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 27 17:41:18.526646 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 27 17:41:18.551742 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 27 17:41:18.711017 systemd-networkd[812]: eth0: Gained IPv6LL May 27 17:41:18.773383 initrd-setup-root[876]: cut: /sysroot/etc/passwd: No such file or directory May 27 17:41:18.782829 initrd-setup-root[883]: cut: /sysroot/etc/group: No such file or directory May 27 17:41:18.793080 initrd-setup-root[890]: cut: /sysroot/etc/shadow: No such file or directory May 27 17:41:18.802402 initrd-setup-root[897]: cut: /sysroot/etc/gshadow: No such file or directory May 27 17:41:19.029111 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 27 17:41:19.040057 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 27 17:41:19.055253 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 27 17:41:19.079996 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
May 27 17:41:19.095423 kernel: BTRFS info (device sda6): last unmount of filesystem be856aed-e34b-4b7b-be8a-0716b27db212 May 27 17:41:19.125497 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 27 17:41:19.128865 ignition[964]: INFO : Ignition 2.21.0 May 27 17:41:19.147489 ignition[964]: INFO : Stage: mount May 27 17:41:19.147489 ignition[964]: INFO : no configs at "/usr/lib/ignition/base.d" May 27 17:41:19.147489 ignition[964]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" May 27 17:41:19.147489 ignition[964]: INFO : mount: mount passed May 27 17:41:19.147489 ignition[964]: INFO : Ignition finished successfully May 27 17:41:19.141677 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 27 17:41:19.158350 systemd[1]: Starting ignition-files.service - Ignition (files)... May 27 17:41:19.202026 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 27 17:41:19.253305 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 (8:6) scanned by mount (976) May 27 17:41:19.253368 kernel: BTRFS info (device sda6): first mount of filesystem be856aed-e34b-4b7b-be8a-0716b27db212 May 27 17:41:19.270509 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm May 27 17:41:19.270558 kernel: BTRFS info (device sda6): using free-space-tree May 27 17:41:19.282501 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
May 27 17:41:19.323662 ignition[993]: INFO : Ignition 2.21.0 May 27 17:41:19.323662 ignition[993]: INFO : Stage: files May 27 17:41:19.337381 ignition[993]: INFO : no configs at "/usr/lib/ignition/base.d" May 27 17:41:19.337381 ignition[993]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" May 27 17:41:19.337381 ignition[993]: DEBUG : files: compiled without relabeling support, skipping May 27 17:41:19.337381 ignition[993]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 27 17:41:19.337381 ignition[993]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 27 17:41:19.337381 ignition[993]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 27 17:41:19.337381 ignition[993]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 27 17:41:19.337381 ignition[993]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 27 17:41:19.337381 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" May 27 17:41:19.337381 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 May 27 17:41:19.331007 unknown[993]: wrote ssh authorized keys file for user: core May 27 17:41:19.460352 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 27 17:41:19.608865 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" May 27 17:41:19.608865 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" May 27 17:41:19.638373 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" May 27 
17:41:19.638373 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" May 27 17:41:19.638373 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" May 27 17:41:19.638373 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 27 17:41:19.638373 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 27 17:41:19.638373 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 27 17:41:19.638373 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 27 17:41:19.638373 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" May 27 17:41:19.638373 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 27 17:41:19.638373 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" May 27 17:41:19.638373 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" May 27 17:41:19.638373 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" May 27 17:41:19.638373 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET 
https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 May 27 17:41:39.661227 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK May 27 17:41:40.053432 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" May 27 17:41:40.053432 ignition[993]: INFO : files: op(b): [started] processing unit "prepare-helm.service" May 27 17:41:40.089424 ignition[993]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 27 17:41:40.089424 ignition[993]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 27 17:41:40.089424 ignition[993]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" May 27 17:41:40.089424 ignition[993]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" May 27 17:41:40.089424 ignition[993]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" May 27 17:41:40.089424 ignition[993]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" May 27 17:41:40.089424 ignition[993]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" May 27 17:41:40.089424 ignition[993]: INFO : files: files passed May 27 17:41:40.089424 ignition[993]: INFO : Ignition finished successfully May 27 17:41:40.061514 systemd[1]: Finished ignition-files.service - Ignition (files). May 27 17:41:40.080731 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 27 17:41:40.090538 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... 
May 27 17:41:40.138786 systemd[1]: ignition-quench.service: Deactivated successfully. May 27 17:41:40.289460 initrd-setup-root-after-ignition[1021]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 27 17:41:40.289460 initrd-setup-root-after-ignition[1021]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 27 17:41:40.138917 systemd[1]: Finished ignition-quench.service - Ignition (record completion). May 27 17:41:40.324483 initrd-setup-root-after-ignition[1025]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 27 17:41:40.172906 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 27 17:41:40.195852 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 27 17:41:40.219589 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 27 17:41:40.326198 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 27 17:41:40.326401 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 27 17:41:40.348295 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 27 17:41:40.366585 systemd[1]: Reached target initrd.target - Initrd Default Target. May 27 17:41:40.384650 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 27 17:41:40.385858 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 27 17:41:40.470477 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 27 17:41:40.491628 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 27 17:41:40.532740 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 27 17:41:40.533169 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. 
May 27 17:41:40.551883 systemd[1]: Stopped target timers.target - Timer Units.
May 27 17:41:40.570798 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 27 17:41:40.570998 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 27 17:41:40.611514 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 27 17:41:40.611903 systemd[1]: Stopped target basic.target - Basic System.
May 27 17:41:40.627773 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 27 17:41:40.641784 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 27 17:41:40.658782 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 27 17:41:40.676763 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
May 27 17:41:40.693766 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 27 17:41:40.710758 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 27 17:41:40.726838 systemd[1]: Stopped target sysinit.target - System Initialization.
May 27 17:41:40.746800 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 27 17:41:40.762775 systemd[1]: Stopped target swap.target - Swaps.
May 27 17:41:40.778773 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 27 17:41:40.778992 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 27 17:41:40.807822 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 27 17:41:40.816768 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 27 17:41:40.833739 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 27 17:41:40.833912 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 27 17:41:40.852770 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 27 17:41:40.852966 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 27 17:41:40.889754 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 27 17:41:40.890023 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 27 17:41:40.898792 systemd[1]: ignition-files.service: Deactivated successfully.
May 27 17:41:40.898972 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 27 17:41:40.972478 ignition[1046]: INFO : Ignition 2.21.0
May 27 17:41:40.972478 ignition[1046]: INFO : Stage: umount
May 27 17:41:40.972478 ignition[1046]: INFO : no configs at "/usr/lib/ignition/base.d"
May 27 17:41:40.972478 ignition[1046]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
May 27 17:41:40.972478 ignition[1046]: INFO : umount: umount passed
May 27 17:41:40.972478 ignition[1046]: INFO : Ignition finished successfully
May 27 17:41:40.918211 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 27 17:41:40.963771 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 27 17:41:40.980535 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 27 17:41:40.980785 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 27 17:41:41.019761 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 27 17:41:41.019950 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 27 17:41:41.046360 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 27 17:41:41.047769 systemd[1]: ignition-mount.service: Deactivated successfully.
May 27 17:41:41.047888 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 27 17:41:41.061040 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 27 17:41:41.061159 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 27 17:41:41.079511 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 27 17:41:41.079659 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 27 17:41:41.098093 systemd[1]: ignition-disks.service: Deactivated successfully.
May 27 17:41:41.098179 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 27 17:41:41.104702 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 27 17:41:41.104769 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 27 17:41:41.120702 systemd[1]: ignition-fetch.service: Deactivated successfully.
May 27 17:41:41.120771 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
May 27 17:41:41.136707 systemd[1]: Stopped target network.target - Network.
May 27 17:41:41.152686 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 27 17:41:41.152777 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 27 17:41:41.166723 systemd[1]: Stopped target paths.target - Path Units.
May 27 17:41:41.183643 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 27 17:41:41.188442 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 27 17:41:41.197608 systemd[1]: Stopped target slices.target - Slice Units.
May 27 17:41:41.214642 systemd[1]: Stopped target sockets.target - Socket Units.
May 27 17:41:41.228684 systemd[1]: iscsid.socket: Deactivated successfully.
May 27 17:41:41.228745 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 27 17:41:41.254627 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 27 17:41:41.254692 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 27 17:41:41.263713 systemd[1]: ignition-setup.service: Deactivated successfully.
May 27 17:41:41.263820 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 27 17:41:41.279721 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 27 17:41:41.279794 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 27 17:41:41.295696 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 27 17:41:41.295781 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 27 17:41:41.311859 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 27 17:41:41.336585 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 27 17:41:41.353989 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 27 17:41:41.354131 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 27 17:41:41.363803 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
May 27 17:41:41.364080 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 27 17:41:41.364219 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 27 17:41:41.379193 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
May 27 17:41:41.380602 systemd[1]: Stopped target network-pre.target - Preparation for Network.
May 27 17:41:41.394731 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 27 17:41:41.394808 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 27 17:41:41.411856 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 27 17:41:41.427668 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 27 17:41:41.427756 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 27 17:41:41.446801 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 27 17:41:41.446878 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 27 17:41:41.471774 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 27 17:41:41.471841 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 27 17:41:41.496664 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 27 17:41:41.496777 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 27 17:41:41.513752 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 27 17:41:41.531822 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 27 17:41:41.531954 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
May 27 17:41:41.952463 systemd-journald[208]: Received SIGTERM from PID 1 (systemd).
May 27 17:41:41.532510 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 27 17:41:41.532679 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 27 17:41:41.552000 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 27 17:41:41.552087 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 27 17:41:41.558631 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 27 17:41:41.558689 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 27 17:41:41.585576 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 27 17:41:41.585673 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 27 17:41:41.610697 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 27 17:41:41.610801 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 27 17:41:41.637732 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 27 17:41:41.637837 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 27 17:41:41.681667 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 27 17:41:41.699410 systemd[1]: systemd-network-generator.service: Deactivated successfully.
May 27 17:41:41.699536 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
May 27 17:41:41.716815 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 27 17:41:41.716938 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 27 17:41:41.754782 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
May 27 17:41:41.754859 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 27 17:41:41.773740 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 27 17:41:41.773829 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 27 17:41:41.792628 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 27 17:41:41.792712 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 27 17:41:41.803387 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
May 27 17:41:41.803459 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully.
May 27 17:41:41.803503 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
May 27 17:41:41.803548 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
May 27 17:41:41.804043 systemd[1]: network-cleanup.service: Deactivated successfully.
May 27 17:41:41.804171 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 27 17:41:41.818969 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 27 17:41:41.819105 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 27 17:41:41.838857 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 27 17:41:41.855684 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 27 17:41:41.894498 systemd[1]: Switching root.
May 27 17:41:42.275441 systemd-journald[208]: Journal stopped
May 27 17:41:44.823034 kernel: SELinux: policy capability network_peer_controls=1
May 27 17:41:44.823094 kernel: SELinux: policy capability open_perms=1
May 27 17:41:44.823115 kernel: SELinux: policy capability extended_socket_class=1
May 27 17:41:44.823133 kernel: SELinux: policy capability always_check_network=0
May 27 17:41:44.823151 kernel: SELinux: policy capability cgroup_seclabel=1
May 27 17:41:44.823168 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 27 17:41:44.823192 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 27 17:41:44.823210 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 27 17:41:44.823228 kernel: SELinux: policy capability userspace_initial_context=0
May 27 17:41:44.823246 kernel: audit: type=1403 audit(1748367702.652:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 27 17:41:44.823284 systemd[1]: Successfully loaded SELinux policy in 106.790ms.
May 27 17:41:44.823308 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 12.687ms.
May 27 17:41:44.823332 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 27 17:41:44.823358 systemd[1]: Detected virtualization google.
May 27 17:41:44.823381 systemd[1]: Detected architecture x86-64.
May 27 17:41:44.823402 systemd[1]: Detected first boot.
May 27 17:41:44.823424 systemd[1]: Initializing machine ID from random generator.
May 27 17:41:44.823444 zram_generator::config[1089]: No configuration found.
May 27 17:41:44.823470 kernel: Guest personality initialized and is inactive
May 27 17:41:44.823490 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
May 27 17:41:44.823509 kernel: Initialized host personality
May 27 17:41:44.823527 kernel: NET: Registered PF_VSOCK protocol family
May 27 17:41:44.823547 systemd[1]: Populated /etc with preset unit settings.
May 27 17:41:44.823569 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
May 27 17:41:44.823589 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 27 17:41:44.823613 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 27 17:41:44.823633 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 27 17:41:44.823653 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 27 17:41:44.823673 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 27 17:41:44.823694 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 27 17:41:44.823714 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 27 17:41:44.823745 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 27 17:41:44.823770 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 27 17:41:44.823794 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 27 17:41:44.823815 systemd[1]: Created slice user.slice - User and Session Slice.
May 27 17:41:44.823836 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 27 17:41:44.823857 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 27 17:41:44.823878 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 27 17:41:44.823899 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 27 17:41:44.823921 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 27 17:41:44.823949 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 27 17:41:44.823974 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
May 27 17:41:44.823995 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 27 17:41:44.824017 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 27 17:41:44.824038 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 27 17:41:44.824060 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 27 17:41:44.824081 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 27 17:41:44.824102 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 27 17:41:44.824128 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 27 17:41:44.824150 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 27 17:41:44.824171 systemd[1]: Reached target slices.target - Slice Units.
May 27 17:41:44.824193 systemd[1]: Reached target swap.target - Swaps.
May 27 17:41:44.824215 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 27 17:41:44.824236 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 27 17:41:44.824403 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
May 27 17:41:44.824433 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 27 17:41:44.824456 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 27 17:41:44.824479 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 27 17:41:44.824501 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 27 17:41:44.824521 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 27 17:41:44.824544 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 27 17:41:44.824569 systemd[1]: Mounting media.mount - External Media Directory...
May 27 17:41:44.824590 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 27 17:41:44.824613 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 27 17:41:44.824636 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 27 17:41:44.824657 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 27 17:41:44.824680 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 27 17:41:44.824701 systemd[1]: Reached target machines.target - Containers.
May 27 17:41:44.824724 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 27 17:41:44.824760 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 27 17:41:44.824783 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 27 17:41:44.824805 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 27 17:41:44.824827 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 27 17:41:44.824849 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 27 17:41:44.824871 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 27 17:41:44.824896 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 27 17:41:44.824918 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 27 17:41:44.824942 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 27 17:41:44.824967 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 27 17:41:44.824989 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 27 17:41:44.825011 kernel: fuse: init (API version 7.41)
May 27 17:41:44.825032 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 27 17:41:44.825054 systemd[1]: Stopped systemd-fsck-usr.service.
May 27 17:41:44.825076 kernel: ACPI: bus type drm_connector registered
May 27 17:41:44.825097 kernel: loop: module loaded
May 27 17:41:44.825119 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 27 17:41:44.825147 systemd[1]: Starting systemd-journald.service - Journal Service...
May 27 17:41:44.825171 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 27 17:41:44.825191 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 27 17:41:44.825212 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 27 17:41:44.825292 systemd-journald[1177]: Collecting audit messages is disabled.
May 27 17:41:44.825342 systemd-journald[1177]: Journal started
May 27 17:41:44.825383 systemd-journald[1177]: Runtime Journal (/run/log/journal/fbde7b6bd9b84bb7b5e406812a2d0864) is 8M, max 148.9M, 140.9M free.
May 27 17:41:43.614454 systemd[1]: Queued start job for default target multi-user.target.
May 27 17:41:43.634062 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
May 27 17:41:43.634782 systemd[1]: systemd-journald.service: Deactivated successfully.
May 27 17:41:44.848370 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
May 27 17:41:44.871826 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 27 17:41:44.887295 systemd[1]: verity-setup.service: Deactivated successfully.
May 27 17:41:44.896298 systemd[1]: Stopped verity-setup.service.
May 27 17:41:44.923324 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 27 17:41:44.935324 systemd[1]: Started systemd-journald.service - Journal Service.
May 27 17:41:44.944945 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 27 17:41:44.954638 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 27 17:41:44.963629 systemd[1]: Mounted media.mount - External Media Directory.
May 27 17:41:44.972613 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 27 17:41:44.981593 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 27 17:41:44.990601 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 27 17:41:44.999784 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 27 17:41:45.010862 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 27 17:41:45.021748 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 27 17:41:45.022054 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 27 17:41:45.032775 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 27 17:41:45.033055 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 27 17:41:45.044769 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 27 17:41:45.045044 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 27 17:41:45.053747 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 27 17:41:45.054012 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 27 17:41:45.064718 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 27 17:41:45.064982 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 27 17:41:45.073695 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 27 17:41:45.073975 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 27 17:41:45.082753 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 27 17:41:45.091854 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 27 17:41:45.102857 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 27 17:41:45.113803 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
May 27 17:41:45.124859 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 27 17:41:45.148260 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 27 17:41:45.159332 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 27 17:41:45.176390 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 27 17:41:45.185433 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 27 17:41:45.185654 systemd[1]: Reached target local-fs.target - Local File Systems.
May 27 17:41:45.195855 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
May 27 17:41:45.207875 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 27 17:41:45.216675 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 27 17:41:45.223580 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 27 17:41:45.235038 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 27 17:41:45.246476 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 27 17:41:45.249249 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 27 17:41:45.258437 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 27 17:41:45.261141 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 27 17:41:45.274581 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 27 17:41:45.277019 systemd-journald[1177]: Time spent on flushing to /var/log/journal/fbde7b6bd9b84bb7b5e406812a2d0864 is 53.798ms for 954 entries.
May 27 17:41:45.277019 systemd-journald[1177]: System Journal (/var/log/journal/fbde7b6bd9b84bb7b5e406812a2d0864) is 8M, max 584.8M, 576.8M free.
May 27 17:41:45.360624 systemd-journald[1177]: Received client request to flush runtime journal.
May 27 17:41:45.360687 kernel: loop0: detected capacity change from 0 to 146240
May 27 17:41:45.296576 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 27 17:41:45.310492 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 27 17:41:45.320671 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 27 17:41:45.331860 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 27 17:41:45.356827 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 27 17:41:45.372219 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
May 27 17:41:45.383082 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 27 17:41:45.394253 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 27 17:41:45.453213 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 27 17:41:45.455284 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
May 27 17:41:45.464291 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 27 17:41:45.477187 systemd-tmpfiles[1213]: ACLs are not supported, ignoring.
May 27 17:41:45.478540 systemd-tmpfiles[1213]: ACLs are not supported, ignoring.
May 27 17:41:45.489378 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 27 17:41:45.501373 kernel: loop1: detected capacity change from 0 to 224512
May 27 17:41:45.510555 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 27 17:41:45.568328 kernel: loop2: detected capacity change from 0 to 52072
May 27 17:41:45.602781 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 27 17:41:45.616259 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 27 17:41:45.634315 kernel: loop3: detected capacity change from 0 to 113872
May 27 17:41:45.707342 systemd-tmpfiles[1233]: ACLs are not supported, ignoring.
May 27 17:41:45.708288 systemd-tmpfiles[1233]: ACLs are not supported, ignoring.
May 27 17:41:45.725879 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 27 17:41:45.742293 kernel: loop4: detected capacity change from 0 to 146240
May 27 17:41:45.786296 kernel: loop5: detected capacity change from 0 to 224512
May 27 17:41:45.829578 kernel: loop6: detected capacity change from 0 to 52072
May 27 17:41:45.869311 kernel: loop7: detected capacity change from 0 to 113872
May 27 17:41:45.903204 (sd-merge)[1237]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-gce'.
May 27 17:41:45.904171 (sd-merge)[1237]: Merged extensions into '/usr'.
May 27 17:41:45.912956 systemd[1]: Reload requested from client PID 1212 ('systemd-sysext') (unit systemd-sysext.service)...
May 27 17:41:45.914495 systemd[1]: Reloading...
May 27 17:41:46.087297 zram_generator::config[1263]: No configuration found.
May 27 17:41:46.349694 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 27 17:41:46.471293 ldconfig[1207]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 27 17:41:46.585097 systemd[1]: Reloading finished in 669 ms.
May 27 17:41:46.612737 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 27 17:41:46.621949 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 27 17:41:46.644483 systemd[1]: Starting ensure-sysext.service...
May 27 17:41:46.655473 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 27 17:41:46.683311 systemd[1]: Reload requested from client PID 1303 ('systemctl') (unit ensure-sysext.service)...
May 27 17:41:46.683521 systemd[1]: Reloading...
May 27 17:41:46.722224 systemd-tmpfiles[1304]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
May 27 17:41:46.722315 systemd-tmpfiles[1304]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
May 27 17:41:46.722822 systemd-tmpfiles[1304]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 27 17:41:46.723370 systemd-tmpfiles[1304]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 27 17:41:46.725181 systemd-tmpfiles[1304]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 27 17:41:46.725826 systemd-tmpfiles[1304]: ACLs are not supported, ignoring.
May 27 17:41:46.725953 systemd-tmpfiles[1304]: ACLs are not supported, ignoring.
May 27 17:41:46.735756 systemd-tmpfiles[1304]: Detected autofs mount point /boot during canonicalization of boot.
May 27 17:41:46.737479 systemd-tmpfiles[1304]: Skipping /boot
May 27 17:41:46.785796 systemd-tmpfiles[1304]: Detected autofs mount point /boot during canonicalization of boot.
May 27 17:41:46.785821 systemd-tmpfiles[1304]: Skipping /boot
May 27 17:41:46.813298 zram_generator::config[1327]: No configuration found.
May 27 17:41:46.966110 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 27 17:41:47.079763 systemd[1]: Reloading finished in 395 ms.
May 27 17:41:47.096250 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 27 17:41:47.121505 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 27 17:41:47.140528 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 27 17:41:47.154081 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 27 17:41:47.169802 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 27 17:41:47.190549 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 27 17:41:47.202420 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 27 17:41:47.215090 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 27 17:41:47.232481 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 27 17:41:47.233058 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 27 17:41:47.237202 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 27 17:41:47.255426 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 27 17:41:47.270183 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 27 17:41:47.280572 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 27 17:41:47.281471 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 27 17:41:47.287130 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 27 17:41:47.296526 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 27 17:41:47.317024 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 27 17:41:47.317589 augenrules[1402]: No rules
May 27 17:41:47.318237 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 27 17:41:47.329915 systemd-udevd[1390]: Using default interface naming scheme 'v255'.
May 27 17:41:47.330633 systemd[1]: audit-rules.service: Deactivated successfully.
May 27 17:41:47.331309 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 27 17:41:47.342392 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 27 17:41:47.354102 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 27 17:41:47.354499 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 27 17:41:47.365916 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 27 17:41:47.366701 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 27 17:41:47.383904 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 27 17:41:47.396067 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 27 17:41:47.405391 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 27 17:41:47.416092 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 27 17:41:47.446634 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 27 17:41:47.446992 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 27 17:41:47.447334 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 27 17:41:47.447516 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 27 17:41:47.453394 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 27 17:41:47.462423 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 27 17:41:47.462698 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 27 17:41:47.466322 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 27 17:41:47.475402 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 27 17:41:47.475613 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 27 17:41:47.495838 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 27 17:41:47.502396 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 27 17:41:47.509682 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 27 17:41:47.514685 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 27 17:41:47.518665 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 27 17:41:47.543084 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 27 17:41:47.560616 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 27 17:41:47.574669 systemd[1]: Starting setup-oem.service - Setup OEM...
May 27 17:41:47.586536 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 27 17:41:47.587019 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 27 17:41:47.588413 systemd[1]: Reached target time-set.target - System Time Set.
May 27 17:41:47.597546 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 27 17:41:47.597718 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 27 17:41:47.606587 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 27 17:41:47.616629 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 27 17:41:47.622848 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 27 17:41:47.632642 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 27 17:41:47.632944 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 27 17:41:47.644672 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 27 17:41:47.646329 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 27 17:41:47.647037 augenrules[1439]: /sbin/augenrules: No change
May 27 17:41:47.657801 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 27 17:41:47.658684 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 27 17:41:47.700860 systemd[1]: Finished ensure-sysext.service.
May 27 17:41:47.709312 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 27 17:41:47.711424 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 27 17:41:47.721311 augenrules[1478]: No rules
May 27 17:41:47.730424 systemd[1]: audit-rules.service: Deactivated successfully.
May 27 17:41:47.731163 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 27 17:41:47.741015 systemd[1]: Finished setup-oem.service - Setup OEM.
May 27 17:41:47.747209 systemd-networkd[1436]: lo: Link UP
May 27 17:41:47.749599 systemd-networkd[1436]: lo: Gained carrier
May 27 17:41:47.752955 systemd-networkd[1436]: Enumeration completed
May 27 17:41:47.755739 systemd[1]: Starting oem-gce-enable-oslogin.service - Enable GCE OS Login...
May 27 17:41:47.765510 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 27 17:41:47.773614 systemd-resolved[1384]: Positive Trust Anchors:
May 27 17:41:47.774071 systemd-resolved[1384]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 27 17:41:47.774237 systemd-resolved[1384]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 27 17:41:47.777572 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
May 27 17:41:47.791424 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 27 17:41:47.803830 systemd-resolved[1384]: Defaulting to hostname 'linux'.
May 27 17:41:47.817043 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 27 17:41:47.826659 systemd[1]: Reached target network.target - Network.
May 27 17:41:47.834418 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 27 17:41:47.844459 systemd[1]: Reached target sysinit.target - System Initialization.
May 27 17:41:47.853582 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
May 27 17:41:47.864571 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
May 27 17:41:47.875412 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
May 27 17:41:47.885677 systemd[1]: Started logrotate.timer - Daily rotation of log files.
May 27 17:41:47.894632 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
May 27 17:41:47.905429 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
May 27 17:41:47.915464 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 27 17:41:47.915528 systemd[1]: Reached target paths.target - Path Units.
May 27 17:41:47.923436 systemd[1]: Reached target timers.target - Timer Units.
May 27 17:41:47.933355 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
May 27 17:41:47.946668 systemd[1]: Starting docker.socket - Docker Socket for the API...
May 27 17:41:47.960807 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
May 27 17:41:47.963030 systemd-networkd[1436]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 27 17:41:47.963042 systemd-networkd[1436]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 27 17:41:47.964816 systemd-networkd[1436]: eth0: Link UP
May 27 17:41:47.965549 systemd-networkd[1436]: eth0: Gained carrier
May 27 17:41:47.965586 systemd-networkd[1436]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 27 17:41:47.971677 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
May 27 17:41:47.980358 systemd-networkd[1436]: eth0: DHCPv4 address 10.128.0.9/32, gateway 10.128.0.1 acquired from 169.254.169.254
May 27 17:41:47.983442 systemd[1]: Reached target ssh-access.target - SSH Access Available.
May 27 17:41:47.993852 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
May 27 17:41:48.005805 systemd[1]: Finished oem-gce-enable-oslogin.service - Enable GCE OS Login.
May 27 17:41:48.017756 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
May 27 17:41:48.028680 systemd[1]: Listening on docker.socket - Docker Socket for the API.
May 27 17:41:48.045292 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4
May 27 17:41:48.054255 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
May 27 17:41:48.055326 systemd[1]: Condition check resulted in dev-tpmrm0.device - /dev/tpmrm0 being skipped.
May 27 17:41:48.068309 kernel: mousedev: PS/2 mouse device common for all mice
May 27 17:41:48.079299 kernel: ACPI: button: Power Button [PWRF]
May 27 17:41:48.098768 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input5
May 27 17:41:48.098898 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr
May 27 17:41:48.103202 systemd[1]: Reached target tpm2.target - Trusted Platform Module.
May 27 17:41:48.126406 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
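The networkd entries above show eth0 being matched by the catch-all /usr/lib/systemd/network/zz-default.network and picking up 10.128.0.9/32 via DHCP from the GCE metadata server. As a hedged sketch only (the file actually shipped by Flatcar may contain more directives), such a lowest-priority fallback unit looks roughly like:

```ini
# Illustrative sketch of a zz-default.network-style catch-all unit;
# not a verbatim copy of the file on this host.
[Match]
Name=*

[Network]
DHCP=yes
```

The "potentially unpredictable interface name" warning refers to matching on Name= patterns rather than stable attributes such as MACAddress= or Path=.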
May 27 17:41:48.135297 kernel: ACPI: button: Sleep Button [SLPF]
May 27 17:41:48.139461 systemd[1]: Reached target sockets.target - Socket Units.
May 27 17:41:48.148476 systemd[1]: Reached target basic.target - Basic System.
May 27 17:41:48.156872 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
May 27 17:41:48.156940 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
May 27 17:41:48.159094 systemd[1]: Starting containerd.service - containerd container runtime...
May 27 17:41:48.170225 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
May 27 17:41:48.181755 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
May 27 17:41:48.193480 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 27 17:41:48.211158 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
May 27 17:41:48.235586 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
May 27 17:41:48.235737 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
May 27 17:41:48.238633 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
May 27 17:41:48.249564 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
May 27 17:41:48.279444 systemd[1]: Started ntpd.service - Network Time Service.
May 27 17:41:48.296511 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
May 27 17:41:48.309954 oslogin_cache_refresh[1526]: Refreshing passwd entry cache
May 27 17:41:48.316493 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
May 27 17:41:48.324003 oslogin_cache_refresh[1526]: Failure getting users, quitting
May 27 17:41:48.324030 oslogin_cache_refresh[1526]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
May 27 17:41:48.324092 oslogin_cache_refresh[1526]: Refreshing group entry cache
May 27 17:41:48.325762 jq[1523]: false
May 27 17:41:48.329159 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
May 27 17:41:48.332462 oslogin_cache_refresh[1526]: Failure getting groups, quitting
May 27 17:41:48.332480 oslogin_cache_refresh[1526]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
May 27 17:41:48.351556 systemd[1]: Starting systemd-logind.service - User Login Management...
May 27 17:41:48.361797 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionSecurity=!tpm2).
May 27 17:41:48.364657 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
May 27 17:41:48.367412 systemd[1]: Starting update-engine.service - Update Engine...
May 27 17:41:48.378567 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
May 27 17:41:48.395063 kernel: EDAC MC: Ver: 3.0.0
May 27 17:41:48.403963 extend-filesystems[1525]: Found loop4
May 27 17:41:48.410496 extend-filesystems[1525]: Found loop5
May 27 17:41:48.410496 extend-filesystems[1525]: Found loop6
May 27 17:41:48.410496 extend-filesystems[1525]: Found loop7
May 27 17:41:48.410496 extend-filesystems[1525]: Found sda
May 27 17:41:48.410496 extend-filesystems[1525]: Found sda1
May 27 17:41:48.410496 extend-filesystems[1525]: Found sda2
May 27 17:41:48.410496 extend-filesystems[1525]: Found sda3
May 27 17:41:48.410496 extend-filesystems[1525]: Found usr
May 27 17:41:48.410496 extend-filesystems[1525]: Found sda4
May 27 17:41:48.410496 extend-filesystems[1525]: Found sda6
May 27 17:41:48.410496 extend-filesystems[1525]: Found sda7
May 27 17:41:48.410496 extend-filesystems[1525]: Found sda9
May 27 17:41:48.410496 extend-filesystems[1525]: Checking size of /dev/sda9
May 27 17:41:48.558463 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 2538491 blocks
May 27 17:41:48.558600 coreos-metadata[1513]: May 27 17:41:48.471 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/hostname: Attempt #1
May 27 17:41:48.558600 coreos-metadata[1513]: May 27 17:41:48.475 INFO Fetch successful
May 27 17:41:48.558600 coreos-metadata[1513]: May 27 17:41:48.475 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip: Attempt #1
May 27 17:41:48.558600 coreos-metadata[1513]: May 27 17:41:48.479 INFO Fetch successful
May 27 17:41:48.558600 coreos-metadata[1513]: May 27 17:41:48.479 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/ip: Attempt #1
May 27 17:41:48.558600 coreos-metadata[1513]: May 27 17:41:48.481 INFO Fetch successful
May 27 17:41:48.558600 coreos-metadata[1513]: May 27 17:41:48.481 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/machine-type: Attempt #1
May 27 17:41:48.558600 coreos-metadata[1513]: May 27 17:41:48.484 INFO Fetch successful
May 27 17:41:48.559034 extend-filesystems[1525]: Resized partition /dev/sda9
May 27 17:41:48.410662 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 27 17:41:48.570761 extend-filesystems[1561]: resize2fs 1.47.2 (1-Jan-2025)
May 27 17:41:48.420758 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 27 17:41:48.581139 jq[1550]: true
May 27 17:41:48.667452 kernel: EXT4-fs (sda9): resized filesystem to 2538491
May 27 17:41:48.421987 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
May 27 17:41:48.423787 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
May 27 17:41:48.668056 update_engine[1549]: I20250527 17:41:48.568722 1549 main.cc:92] Flatcar Update Engine starting
May 27 17:41:48.424147 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
May 27 17:41:48.443499 systemd[1]: motdgen.service: Deactivated successfully.
May 27 17:41:48.444011 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
May 27 17:41:48.689553 jq[1562]: true
May 27 17:41:48.689865 extend-filesystems[1561]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
May 27 17:41:48.689865 extend-filesystems[1561]: old_desc_blocks = 1, new_desc_blocks = 2
May 27 17:41:48.689865 extend-filesystems[1561]: The filesystem on /dev/sda9 is now 2538491 (4k) blocks long.
May 27 17:41:48.464877 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 27 17:41:48.732207 extend-filesystems[1525]: Resized filesystem in /dev/sda9
May 27 17:41:48.465204 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
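The resize figures above lend themselves to a quick sanity check: 1617920 and 2538491 are counts of 4 KiB ext4 blocks, so the root filesystem grew from roughly 6.2 GiB to about 9.7 GiB while mounted. A minimal sketch of that arithmetic (only the two block counts are taken from this log):

```python
# Sanity-check the ext4 online-resize figures reported by the kernel and
# resize2fs above; ext4 block counts here are in 4096-byte (4k) units.
BLOCK_SIZE = 4096

def blocks_to_bytes(blocks: int, block_size: int = BLOCK_SIZE) -> int:
    """Convert an ext4 block count to a size in bytes."""
    return blocks * block_size

old_bytes = blocks_to_bytes(1617920)   # filesystem size before resize
new_bytes = blocks_to_bytes(2538491)   # filesystem size after resize
print(f"{old_bytes / 2**30:.2f} GiB -> {new_bytes / 2**30:.2f} GiB")
```

This is what extend-filesystems.service automates on first boot: the disk is larger than the image, so sda9 is grown and resize2fs performs the on-line resize noted in the log.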
May 27 17:41:48.527726 (ntainerd)[1563]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
May 27 17:41:48.612239 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM.
May 27 17:41:48.631611 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 27 17:41:48.689054 systemd[1]: extend-filesystems.service: Deactivated successfully.
May 27 17:41:48.695254 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
May 27 17:41:48.747897 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
May 27 17:41:48.769714 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 27 17:41:48.800466 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
May 27 17:41:48.801574 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
May 27 17:41:48.865862 tar[1559]: linux-amd64/LICENSE
May 27 17:41:48.865862 tar[1559]: linux-amd64/helm
May 27 17:41:48.866918 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 27 17:41:48.903307 bash[1604]: Updated "/home/core/.ssh/authorized_keys"
May 27 17:41:48.911189 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
May 27 17:41:48.941097 systemd[1]: Starting sshkeys.service...
May 27 17:41:49.010417 dbus-daemon[1516]: [system] SELinux support is enabled
May 27 17:41:49.013440 dbus-daemon[1516]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1436 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
May 27 17:41:49.016376 update_engine[1549]: I20250527 17:41:49.014937 1549 update_check_scheduler.cc:74] Next update check in 10m37s
May 27 17:41:49.016155 systemd[1]: Started dbus.service - D-Bus System Message Bus.
May 27 17:41:49.028762 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 27 17:41:49.029359 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
May 27 17:41:49.040877 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 27 17:41:49.040913 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
May 27 17:41:49.085940 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
May 27 17:41:49.089118 dbus-daemon[1516]: [system] Successfully activated service 'org.freedesktop.systemd1'
May 27 17:41:49.101750 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
May 27 17:41:49.113694 systemd[1]: Started update-engine.service - Update Engine.
May 27 17:41:49.136280 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
May 27 17:41:49.147255 systemd[1]: Started locksmithd.service - Cluster reboot manager.
May 27 17:41:49.236174 ntpd[1537]: ntpd 4.2.8p17@1.4004-o Tue May 27 14:54:35 UTC 2025 (1): Starting
May 27 17:41:49.236213 ntpd[1537]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
May 27 17:41:49.236227 ntpd[1537]: ----------------------------------------------------
May 27 17:41:49.236240 ntpd[1537]: ntp-4 is maintained by Network Time Foundation,
May 27 17:41:49.236254 ntpd[1537]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
May 27 17:41:49.236281 ntpd[1537]: corporation. Support and training for ntp-4 are
May 27 17:41:49.238649 ntpd[1537]: available at https://www.nwtime.org/support
May 27 17:41:49.238664 ntpd[1537]: ----------------------------------------------------
May 27 17:41:49.254374 ntpd[1537]: proto: precision = 0.106 usec (-23)
May 27 17:41:49.256504 ntpd[1537]: basedate set to 2025-05-15
May 27 17:41:49.256532 ntpd[1537]: gps base set to 2025-05-18 (week 2367)
May 27 17:41:49.274529 ntpd[1537]: Listen and drop on 0 v6wildcard [::]:123
May 27 17:41:49.274602 ntpd[1537]: Listen and drop on 1 v4wildcard 0.0.0.0:123
May 27 17:41:49.274830 ntpd[1537]: Listen normally on 2 lo 127.0.0.1:123
May 27 17:41:49.274878 ntpd[1537]: Listen normally on 3 eth0 10.128.0.9:123
May 27 17:41:49.274933 ntpd[1537]: Listen normally on 4 lo [::1]:123
May 27 17:41:49.274988 ntpd[1537]: bind(21) AF_INET6 fe80::4001:aff:fe80:9%2#123 flags 0x11 failed: Cannot assign requested address
May 27 17:41:49.275016 ntpd[1537]: unable to create socket on eth0 (5) for fe80::4001:aff:fe80:9%2#123
May 27 17:41:49.275036 ntpd[1537]: failed to init interface for address fe80::4001:aff:fe80:9%2
May 27 17:41:49.275077 ntpd[1537]: Listening on routing socket on fd #21 for interface updates
May 27 17:41:49.309854 ntpd[1537]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
May 27 17:41:49.309901 ntpd[1537]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
May 27 17:41:49.366474 systemd-networkd[1436]: eth0: Gained IPv6LL
May 27 17:41:49.380001 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
May 27 17:41:49.380746 systemd[1]: Reached target network-online.target - Network is Online.
May 27 17:41:49.388510 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 27 17:41:49.391869 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
May 27 17:41:49.397064 systemd[1]: Starting oem-gce.service - GCE Linux Agent...
May 27 17:41:49.449762 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 27 17:41:49.475864 coreos-metadata[1615]: May 27 17:41:49.475 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys: Attempt #1
May 27 17:41:49.479253 coreos-metadata[1615]: May 27 17:41:49.479 INFO Fetch failed with 404: resource not found
May 27 17:41:49.484663 coreos-metadata[1615]: May 27 17:41:49.484 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys: Attempt #1
May 27 17:41:49.484663 coreos-metadata[1615]: May 27 17:41:49.484 INFO Fetch successful
May 27 17:41:49.484663 coreos-metadata[1615]: May 27 17:41:49.484 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/block-project-ssh-keys: Attempt #1
May 27 17:41:49.488407 coreos-metadata[1615]: May 27 17:41:49.487 INFO Fetch failed with 404: resource not found
May 27 17:41:49.488407 coreos-metadata[1615]: May 27 17:41:49.487 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/sshKeys: Attempt #1
May 27 17:41:49.488838 coreos-metadata[1615]: May 27 17:41:49.488 INFO Fetch failed with 404: resource not found
May 27 17:41:49.488838 coreos-metadata[1615]: May 27 17:41:49.488 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys: Attempt #1
May 27 17:41:49.497298 coreos-metadata[1615]: May 27 17:41:49.491 INFO Fetch successful
May 27 17:41:49.499608 unknown[1615]: wrote ssh authorized keys file for user: core
May 27 17:41:49.522669 init.sh[1628]: + '[' -e /etc/default/instance_configs.cfg.template ']'
May 27 17:41:49.526994 init.sh[1628]: + echo -e '[InstanceSetup]\nset_host_keys = false'
May 27 17:41:49.526994 init.sh[1628]: + /usr/bin/google_instance_setup
May 27 17:41:49.586122 update-ssh-keys[1634]: Updated "/home/core/.ssh/authorized_keys"
May 27 17:41:49.591975 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
May 27 17:41:49.613366 systemd[1]: Finished sshkeys.service.
May 27 17:41:49.635107 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
May 27 17:41:49.801593 containerd[1563]: time="2025-05-27T17:41:49Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
May 27 17:41:49.823205 containerd[1563]: time="2025-05-27T17:41:49.821802513Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4
May 27 17:41:49.828246 locksmithd[1618]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
May 27 17:41:49.913555 containerd[1563]: time="2025-05-27T17:41:49.911130402Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="14.966µs"
May 27 17:41:49.918196 containerd[1563]: time="2025-05-27T17:41:49.917601744Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
May 27 17:41:49.918196 containerd[1563]: time="2025-05-27T17:41:49.917661281Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
May 27 17:41:49.918196 containerd[1563]: time="2025-05-27T17:41:49.917890109Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
May 27 17:41:49.918196 containerd[1563]: time="2025-05-27T17:41:49.917928528Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
May 27 17:41:49.918196 containerd[1563]: time="2025-05-27T17:41:49.917966965Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
May 27 17:41:49.918196 containerd[1563]: time="2025-05-27T17:41:49.918056503Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
May 27 17:41:49.918196 containerd[1563]: time="2025-05-27T17:41:49.918073750Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
May 27 17:41:49.931684 containerd[1563]: time="2025-05-27T17:41:49.930901846Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
May 27 17:41:49.931684 containerd[1563]: time="2025-05-27T17:41:49.930997133Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
May 27 17:41:49.931684 containerd[1563]: time="2025-05-27T17:41:49.931025407Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
May 27 17:41:49.931684 containerd[1563]: time="2025-05-27T17:41:49.931042191Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
May 27 17:41:49.931684 containerd[1563]: time="2025-05-27T17:41:49.931202468Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
May 27 17:41:49.931684 containerd[1563]: time="2025-05-27T17:41:49.931561506Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
May 27 17:41:49.931684 containerd[1563]: time="2025-05-27T17:41:49.931613836Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
May 27 17:41:49.931684 containerd[1563]: time="2025-05-27T17:41:49.931633136Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
May 27 17:41:49.939292 containerd[1563]: time="2025-05-27T17:41:49.933389376Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
May 27 17:41:49.939292 containerd[1563]: time="2025-05-27T17:41:49.933798204Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
May 27 17:41:49.939292 containerd[1563]: time="2025-05-27T17:41:49.933900610Z" level=info msg="metadata content store policy set" policy=shared
May 27 17:41:49.954609 containerd[1563]: time="2025-05-27T17:41:49.953485067Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
May 27 17:41:49.954609 containerd[1563]: time="2025-05-27T17:41:49.953646004Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
May 27 17:41:49.954609 containerd[1563]: time="2025-05-27T17:41:49.953673526Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
May 27 17:41:49.954609 containerd[1563]: time="2025-05-27T17:41:49.953693149Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
May 27 17:41:49.954609 containerd[1563]: time="2025-05-27T17:41:49.953719559Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
May 27 17:41:49.954609 containerd[1563]: time="2025-05-27T17:41:49.953742611Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
May 27 17:41:49.954609 containerd[1563]: time="2025-05-27T17:41:49.953773592Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
May 27 17:41:49.954609 containerd[1563]: time="2025-05-27T17:41:49.953795278Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
May 27 17:41:49.954609 containerd[1563]: time="2025-05-27T17:41:49.953815147Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
May 27 17:41:49.954609 containerd[1563]: time="2025-05-27T17:41:49.953832433Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
May 27 17:41:49.954609 containerd[1563]: time="2025-05-27T17:41:49.953848083Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
May 27 17:41:49.954609 containerd[1563]: time="2025-05-27T17:41:49.953868928Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
May 27 17:41:49.954609 containerd[1563]: time="2025-05-27T17:41:49.954055700Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
May 27 17:41:49.954609 containerd[1563]: time="2025-05-27T17:41:49.954094954Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
May 27 17:41:49.955217 containerd[1563]: time="2025-05-27T17:41:49.954133042Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
May 27 17:41:49.955217 containerd[1563]: time="2025-05-27T17:41:49.954154147Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
May 27 17:41:49.955217 containerd[1563]: time="2025-05-27T17:41:49.954173361Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
May 27 17:41:49.955217 containerd[1563]: time="2025-05-27T17:41:49.954190661Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
May 27 17:41:49.955217 containerd[1563]: time="2025-05-27T17:41:49.954210480Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
May 27 17:41:49.955217 containerd[1563]: time="2025-05-27T17:41:49.954228026Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
May 27 17:41:49.955217 containerd[1563]: time="2025-05-27T17:41:49.954260975Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
May 27 17:41:49.955217 containerd[1563]: time="2025-05-27T17:41:49.954318298Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
May 27 17:41:49.955217 containerd[1563]: time="2025-05-27T17:41:49.954338058Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
May 27 17:41:49.955217 containerd[1563]: time="2025-05-27T17:41:49.954461615Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
May 27 17:41:49.955217 containerd[1563]: time="2025-05-27T17:41:49.954490566Z" level=info msg="Start snapshots syncer"
May 27 17:41:49.955217 containerd[1563]: time="2025-05-27T17:41:49.954549070Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
May 27 17:41:49.965142 containerd[1563]: time="2025-05-27T17:41:49.956262665Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
May 27 17:41:49.965142 containerd[1563]: time="2025-05-27T17:41:49.960446219Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
May 27 17:41:49.965476 containerd[1563]: time="2025-05-27T17:41:49.960588482Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
May 27 17:41:49.965476 containerd[1563]: time="2025-05-27T17:41:49.960947388Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
May 27 17:41:49.965476 containerd[1563]: time="2025-05-27T17:41:49.960985306Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
May 27 17:41:49.965476 containerd[1563]: time="2025-05-27T17:41:49.961006911Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
May 27 17:41:49.965476 containerd[1563]: time="2025-05-27T17:41:49.961027695Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
May 27 17:41:49.965476 containerd[1563]: time="2025-05-27T17:41:49.961047385Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
May 27 17:41:49.965476 containerd[1563]: time="2025-05-27T17:41:49.961076195Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
May 27 17:41:49.965476 containerd[1563]: time="2025-05-27T17:41:49.961096289Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
May 27 17:41:49.965476 containerd[1563]: time="2025-05-27T17:41:49.961133391Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
May 27 17:41:49.965476 containerd[1563]: time="2025-05-27T17:41:49.961151425Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
May 27 17:41:49.965476 containerd[1563]: time="2025-05-27T17:41:49.961169302Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
May 27 17:41:49.965476 containerd[1563]: time="2025-05-27T17:41:49.961248718Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
May 27 17:41:49.965476 containerd[1563]: time="2025-05-27T17:41:49.961288951Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
May 27 17:41:49.965476 containerd[1563]: time="2025-05-27T17:41:49.961308977Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
May 27 17:41:49.966016 containerd[1563]: time="2025-05-27T17:41:49.961324447Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
May 27 17:41:49.966016 containerd[1563]: time="2025-05-27T17:41:49.961339436Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
May 27 17:41:49.966016 containerd[1563]: time="2025-05-27T17:41:49.961354908Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
May 27 17:41:49.966016 containerd[1563]: time="2025-05-27T17:41:49.961371653Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
May 27 17:41:49.966016 containerd[1563]: time="2025-05-27T17:41:49.961397408Z" level=info msg="runtime interface created"
May 27 17:41:49.966016 containerd[1563]: time="2025-05-27T17:41:49.961406723Z" level=info msg="created NRI interface"
May 27 17:41:49.966016 containerd[1563]: time="2025-05-27T17:41:49.961422055Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
May 27 17:41:49.966016 containerd[1563]: time="2025-05-27T17:41:49.961451746Z" level=info msg="Connect containerd service"
May 27 17:41:49.966016 containerd[1563]: time="2025-05-27T17:41:49.961491002Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
May 27 17:41:49.981716 containerd[1563]: time="2025-05-27T17:41:49.978664023Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 27 17:41:50.031011 systemd-logind[1546]: Watching system buttons on /dev/input/event2 (Power Button)
May 27 17:41:50.031054 systemd-logind[1546]: Watching system buttons on /dev/input/event3 (Sleep Button)
May 27 17:41:50.031088 systemd-logind[1546]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
May 27 17:41:50.031438 systemd-logind[1546]: New seat seat0.
May 27 17:41:50.036009 systemd[1]: Started systemd-logind.service - User Login Management.
May 27 17:41:50.202649 sshd_keygen[1556]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
May 27 17:41:50.289889 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
May 27 17:41:50.292044 dbus-daemon[1516]: [system] Successfully activated service 'org.freedesktop.hostname1'
May 27 17:41:50.295648 dbus-daemon[1516]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=1617 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
May 27 17:41:50.310644 systemd[1]: Starting polkit.service - Authorization Manager...
May 27 17:41:50.410026 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
May 27 17:41:50.425676 systemd[1]: Starting issuegen.service - Generate /run/issue...
May 27 17:41:50.436691 systemd[1]: Started sshd@0-10.128.0.9:22-139.178.68.195:41060.service - OpenSSH per-connection server daemon (139.178.68.195:41060).
May 27 17:41:50.505943 systemd[1]: issuegen.service: Deactivated successfully.
May 27 17:41:50.506349 systemd[1]: Finished issuegen.service - Generate /run/issue.
May 27 17:41:50.521677 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
May 27 17:41:50.601520 containerd[1563]: time="2025-05-27T17:41:50.599025079Z" level=info msg="Start subscribing containerd event"
May 27 17:41:50.601520 containerd[1563]: time="2025-05-27T17:41:50.599099091Z" level=info msg="Start recovering state"
May 27 17:41:50.601520 containerd[1563]: time="2025-05-27T17:41:50.599251489Z" level=info msg="Start event monitor"
May 27 17:41:50.601520 containerd[1563]: time="2025-05-27T17:41:50.599297283Z" level=info msg="Start cni network conf syncer for default"
May 27 17:41:50.601520 containerd[1563]: time="2025-05-27T17:41:50.599311330Z" level=info msg="Start streaming server"
May 27 17:41:50.601520 containerd[1563]: time="2025-05-27T17:41:50.599328138Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
May 27 17:41:50.601520 containerd[1563]: time="2025-05-27T17:41:50.599342633Z" level=info msg="runtime interface starting up..."
May 27 17:41:50.601520 containerd[1563]: time="2025-05-27T17:41:50.599352678Z" level=info msg="starting plugins..."
May 27 17:41:50.601520 containerd[1563]: time="2025-05-27T17:41:50.599373482Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
May 27 17:41:50.601366 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
May 27 17:41:50.607769 containerd[1563]: time="2025-05-27T17:41:50.606425130Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
May 27 17:41:50.616450 containerd[1563]: time="2025-05-27T17:41:50.608054038Z" level=info msg=serving... address=/run/containerd/containerd.sock
May 27 17:41:50.625436 containerd[1563]: time="2025-05-27T17:41:50.625388432Z" level=info msg="containerd successfully booted in 0.828230s"
May 27 17:41:50.632500 systemd[1]: Started getty@tty1.service - Getty on tty1.
May 27 17:41:50.648481 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
May 27 17:41:50.657724 systemd[1]: Reached target getty.target - Login Prompts.
May 27 17:41:50.666042 systemd[1]: Started containerd.service - containerd container runtime.
May 27 17:41:50.801304 polkitd[1666]: Started polkitd version 126
May 27 17:41:50.820042 polkitd[1666]: Loading rules from directory /etc/polkit-1/rules.d
May 27 17:41:50.821943 polkitd[1666]: Loading rules from directory /run/polkit-1/rules.d
May 27 17:41:50.822018 polkitd[1666]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4)
May 27 17:41:50.822629 polkitd[1666]: Loading rules from directory /usr/local/share/polkit-1/rules.d
May 27 17:41:50.822666 polkitd[1666]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4)
May 27 17:41:50.822725 polkitd[1666]: Loading rules from directory /usr/share/polkit-1/rules.d
May 27 17:41:50.824597 polkitd[1666]: Finished loading, compiling and executing 2 rules
May 27 17:41:50.825075 systemd[1]: Started polkit.service - Authorization Manager.
May 27 17:41:50.826043 dbus-daemon[1516]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
May 27 17:41:50.827191 polkitd[1666]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
May 27 17:41:50.868445 systemd-hostnamed[1617]: Hostname set to (transient)
May 27 17:41:50.871870 systemd-resolved[1384]: System hostname changed to 'ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal'.
May 27 17:41:51.042305 tar[1559]: linux-amd64/README.md
May 27 17:41:51.056393 sshd[1670]: Accepted publickey for core from 139.178.68.195 port 41060 ssh2: RSA SHA256:7uv6hSrfUOFE2GLtdKQWjXT5BnA3UXN539X7K8JeYEc
May 27 17:41:51.061336 sshd-session[1670]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:41:51.072354 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
May 27 17:41:51.090926 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
May 27 17:41:51.105313 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
May 27 17:41:51.139069 systemd-logind[1546]: New session 1 of user core.
May 27 17:41:51.154496 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
May 27 17:41:51.171926 systemd[1]: Starting user@500.service - User Manager for UID 500...
May 27 17:41:51.177452 instance-setup[1635]: INFO Running google_set_multiqueue.
May 27 17:41:51.207355 instance-setup[1635]: INFO Set channels for eth0 to 2.
May 27 17:41:51.209211 (systemd)[1703]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
May 27 17:41:51.215478 instance-setup[1635]: INFO Setting /proc/irq/31/smp_affinity_list to 0 for device virtio1.
May 27 17:41:51.216511 systemd-logind[1546]: New session c1 of user core.
May 27 17:41:51.219059 instance-setup[1635]: INFO /proc/irq/31/smp_affinity_list: real affinity 0
May 27 17:41:51.219140 instance-setup[1635]: INFO Setting /proc/irq/32/smp_affinity_list to 0 for device virtio1.
May 27 17:41:51.222146 instance-setup[1635]: INFO /proc/irq/32/smp_affinity_list: real affinity 0
May 27 17:41:51.223546 instance-setup[1635]: INFO Setting /proc/irq/33/smp_affinity_list to 1 for device virtio1.
May 27 17:41:51.227314 instance-setup[1635]: INFO /proc/irq/33/smp_affinity_list: real affinity 1
May 27 17:41:51.227530 instance-setup[1635]: INFO Setting /proc/irq/34/smp_affinity_list to 1 for device virtio1.
May 27 17:41:51.228855 instance-setup[1635]: INFO /proc/irq/34/smp_affinity_list: real affinity 1
May 27 17:41:51.240538 instance-setup[1635]: INFO /usr/sbin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type
May 27 17:41:51.245714 instance-setup[1635]: INFO /usr/sbin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type
May 27 17:41:51.247708 instance-setup[1635]: INFO Queue 0 XPS=1 for /sys/class/net/eth0/queues/tx-0/xps_cpus
May 27 17:41:51.247794 instance-setup[1635]: INFO Queue 1 XPS=2 for /sys/class/net/eth0/queues/tx-1/xps_cpus
May 27 17:41:51.280507 init.sh[1628]: + /usr/bin/google_metadata_script_runner --script-type startup
May 27 17:41:51.472818 startup-script[1732]: INFO Starting startup scripts.
May 27 17:41:51.479765 startup-script[1732]: INFO No startup scripts found in metadata.
May 27 17:41:51.479861 startup-script[1732]: INFO Finished running startup scripts.
May 27 17:41:51.505902 init.sh[1628]: + trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM
May 27 17:41:51.506092 init.sh[1628]: + daemon_pids=()
May 27 17:41:51.506239 init.sh[1628]: + for d in accounts clock_skew network
May 27 17:41:51.506625 init.sh[1628]: + daemon_pids+=($!)
May 27 17:41:51.506935 init.sh[1735]: + /usr/bin/google_accounts_daemon
May 27 17:41:51.508186 init.sh[1628]: + for d in accounts clock_skew network
May 27 17:41:51.508186 init.sh[1628]: + daemon_pids+=($!)
May 27 17:41:51.508186 init.sh[1628]: + for d in accounts clock_skew network
May 27 17:41:51.508186 init.sh[1628]: + daemon_pids+=($!)
May 27 17:41:51.508186 init.sh[1628]: + NOTIFY_SOCKET=/run/systemd/notify
May 27 17:41:51.508186 init.sh[1628]: + /usr/bin/systemd-notify --ready
May 27 17:41:51.509199 init.sh[1736]: + /usr/bin/google_clock_skew_daemon
May 27 17:41:51.511362 init.sh[1737]: + /usr/bin/google_network_daemon
May 27 17:41:51.535493 systemd[1]: Started oem-gce.service - GCE Linux Agent.
May 27 17:41:51.552460 init.sh[1628]: + wait -n 1735 1736 1737
May 27 17:41:51.628134 systemd[1703]: Queued start job for default target default.target.
May 27 17:41:51.633946 systemd[1703]: Created slice app.slice - User Application Slice.
May 27 17:41:51.634824 systemd[1703]: Reached target paths.target - Paths.
May 27 17:41:51.635040 systemd[1703]: Reached target timers.target - Timers.
May 27 17:41:51.638418 systemd[1703]: Starting dbus.socket - D-Bus User Message Bus Socket...
May 27 17:41:51.684096 systemd[1703]: Listening on dbus.socket - D-Bus User Message Bus Socket.
May 27 17:41:51.684206 systemd[1703]: Reached target sockets.target - Sockets.
May 27 17:41:51.684330 systemd[1703]: Reached target basic.target - Basic System.
May 27 17:41:51.684411 systemd[1703]: Reached target default.target - Main User Target.
May 27 17:41:51.684465 systemd[1703]: Startup finished in 451ms.
May 27 17:41:51.684901 systemd[1]: Started user@500.service - User Manager for UID 500.
May 27 17:41:51.701539 systemd[1]: Started session-1.scope - Session 1 of User core.
May 27 17:41:51.987956 systemd[1]: Started sshd@1-10.128.0.9:22-139.178.68.195:41066.service - OpenSSH per-connection server daemon (139.178.68.195:41066).
May 27 17:41:52.044246 google-clock-skew[1736]: INFO Starting Google Clock Skew daemon.
May 27 17:41:52.047785 google-networking[1737]: INFO Starting Google Networking daemon.
May 27 17:41:52.061900 google-clock-skew[1736]: INFO Clock drift token has changed: 0.
May 27 17:41:52.094061 groupadd[1751]: group added to /etc/group: name=google-sudoers, GID=1000
May 27 17:41:52.098640 groupadd[1751]: group added to /etc/gshadow: name=google-sudoers
May 27 17:41:52.144988 groupadd[1751]: new group: name=google-sudoers, GID=1000
May 27 17:41:52.178763 google-accounts[1735]: INFO Starting Google Accounts daemon.
May 27 17:41:52.185148 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 27 17:41:52.193493 google-accounts[1735]: WARNING OS Login not installed.
May 27 17:41:52.195409 google-accounts[1735]: INFO Creating a new user account for 0.
May 27 17:41:52.196486 systemd[1]: Reached target multi-user.target - Multi-User System.
May 27 17:41:52.201865 init.sh[1768]: useradd: invalid user name '0': use --badname to ignore
May 27 17:41:52.202045 google-accounts[1735]: WARNING Could not create user 0. Command '['useradd', '-m', '-s', '/bin/bash', '-p', '*', '0']' returned non-zero exit status 3..
May 27 17:41:52.206512 systemd[1]: Startup finished in 3.680s (kernel) + 29.775s (initrd) + 9.650s (userspace) = 43.105s.
May 27 17:41:52.210247 (kubelet)[1766]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 27 17:41:52.240351 ntpd[1537]: Listen normally on 6 eth0 [fe80::4001:aff:fe80:9%2]:123
May 27 17:41:52.240730 ntpd[1537]: 27 May 17:41:52 ntpd[1537]: Listen normally on 6 eth0 [fe80::4001:aff:fe80:9%2]:123
May 27 17:41:52.394041 sshd[1750]: Accepted publickey for core from 139.178.68.195 port 41066 ssh2: RSA SHA256:7uv6hSrfUOFE2GLtdKQWjXT5BnA3UXN539X7K8JeYEc
May 27 17:41:52.396179 sshd-session[1750]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:41:52.406791 systemd-logind[1546]: New session 2 of user core.
May 27 17:41:52.412518 systemd[1]: Started session-2.scope - Session 2 of User core.
May 27 17:41:52.647888 sshd[1778]: Connection closed by 139.178.68.195 port 41066
May 27 17:41:52.648526 sshd-session[1750]: pam_unix(sshd:session): session closed for user core
May 27 17:41:52.655500 systemd-logind[1546]: Session 2 logged out. Waiting for processes to exit.
May 27 17:41:52.656944 systemd[1]: sshd@1-10.128.0.9:22-139.178.68.195:41066.service: Deactivated successfully.
May 27 17:41:52.659909 systemd[1]: session-2.scope: Deactivated successfully.
May 27 17:41:52.663130 systemd-logind[1546]: Removed session 2.
May 27 17:41:52.710612 systemd[1]: Started sshd@2-10.128.0.9:22-139.178.68.195:41074.service - OpenSSH per-connection server daemon (139.178.68.195:41074).
May 27 17:41:53.000922 systemd-resolved[1384]: Clock change detected. Flushing caches.
May 27 17:41:53.002378 google-clock-skew[1736]: INFO Synced system time with hardware clock.
May 27 17:41:53.209119 kubelet[1766]: E0527 17:41:53.209043 1766 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 27 17:41:53.212685 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 27 17:41:53.212927 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 27 17:41:53.213492 systemd[1]: kubelet.service: Consumed 1.307s CPU time, 265.6M memory peak.
May 27 17:41:53.227533 sshd[1785]: Accepted publickey for core from 139.178.68.195 port 41074 ssh2: RSA SHA256:7uv6hSrfUOFE2GLtdKQWjXT5BnA3UXN539X7K8JeYEc
May 27 17:41:53.229222 sshd-session[1785]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:41:53.236929 systemd-logind[1546]: New session 3 of user core.
May 27 17:41:53.239484 systemd[1]: Started session-3.scope - Session 3 of User core.
May 27 17:41:53.476546 sshd[1789]: Connection closed by 139.178.68.195 port 41074
May 27 17:41:53.477870 sshd-session[1785]: pam_unix(sshd:session): session closed for user core
May 27 17:41:53.483476 systemd[1]: sshd@2-10.128.0.9:22-139.178.68.195:41074.service: Deactivated successfully.
May 27 17:41:53.485868 systemd[1]: session-3.scope: Deactivated successfully.
May 27 17:41:53.487002 systemd-logind[1546]: Session 3 logged out. Waiting for processes to exit.
May 27 17:41:53.489375 systemd-logind[1546]: Removed session 3.
May 27 17:41:53.539627 systemd[1]: Started sshd@3-10.128.0.9:22-139.178.68.195:41082.service - OpenSSH per-connection server daemon (139.178.68.195:41082).
May 27 17:41:53.919173 sshd[1795]: Accepted publickey for core from 139.178.68.195 port 41082 ssh2: RSA SHA256:7uv6hSrfUOFE2GLtdKQWjXT5BnA3UXN539X7K8JeYEc
May 27 17:41:53.920920 sshd-session[1795]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:41:53.928348 systemd-logind[1546]: New session 4 of user core.
May 27 17:41:53.937548 systemd[1]: Started session-4.scope - Session 4 of User core.
May 27 17:41:54.170376 sshd[1797]: Connection closed by 139.178.68.195 port 41082
May 27 17:41:54.171484 sshd-session[1795]: pam_unix(sshd:session): session closed for user core
May 27 17:41:54.176992 systemd[1]: sshd@3-10.128.0.9:22-139.178.68.195:41082.service: Deactivated successfully.
May 27 17:41:54.179565 systemd[1]: session-4.scope: Deactivated successfully.
May 27 17:41:54.180874 systemd-logind[1546]: Session 4 logged out. Waiting for processes to exit.
May 27 17:41:54.182780 systemd-logind[1546]: Removed session 4.
May 27 17:41:54.233750 systemd[1]: Started sshd@4-10.128.0.9:22-139.178.68.195:35454.service - OpenSSH per-connection server daemon (139.178.68.195:35454).
May 27 17:41:54.608012 sshd[1804]: Accepted publickey for core from 139.178.68.195 port 35454 ssh2: RSA SHA256:7uv6hSrfUOFE2GLtdKQWjXT5BnA3UXN539X7K8JeYEc
May 27 17:41:54.609776 sshd-session[1804]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:41:54.617325 systemd-logind[1546]: New session 5 of user core.
May 27 17:41:54.623495 systemd[1]: Started session-5.scope - Session 5 of User core.
May 27 17:41:54.829199 sudo[1807]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
May 27 17:41:54.829703 sudo[1807]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 27 17:41:54.848192 sudo[1807]: pam_unix(sudo:session): session closed for user root
May 27 17:41:54.900641 sshd[1806]: Connection closed by 139.178.68.195 port 35454
May 27 17:41:54.902265 sshd-session[1804]: pam_unix(sshd:session): session closed for user core
May 27 17:41:54.908151 systemd[1]: sshd@4-10.128.0.9:22-139.178.68.195:35454.service: Deactivated successfully.
May 27 17:41:54.910796 systemd[1]: session-5.scope: Deactivated successfully.
May 27 17:41:54.912074 systemd-logind[1546]: Session 5 logged out. Waiting for processes to exit.
May 27 17:41:54.914994 systemd-logind[1546]: Removed session 5.
May 27 17:41:54.972140 systemd[1]: Started sshd@5-10.128.0.9:22-139.178.68.195:35468.service - OpenSSH per-connection server daemon (139.178.68.195:35468).
May 27 17:41:55.335011 sshd[1813]: Accepted publickey for core from 139.178.68.195 port 35468 ssh2: RSA SHA256:7uv6hSrfUOFE2GLtdKQWjXT5BnA3UXN539X7K8JeYEc
May 27 17:41:55.336844 sshd-session[1813]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:41:55.344514 systemd-logind[1546]: New session 6 of user core.
May 27 17:41:55.350501 systemd[1]: Started session-6.scope - Session 6 of User core.
May 27 17:41:55.543087 sudo[1817]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
May 27 17:41:55.543592 sudo[1817]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 27 17:41:55.550636 sudo[1817]: pam_unix(sudo:session): session closed for user root
May 27 17:41:55.564376 sudo[1816]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
May 27 17:41:55.564843 sudo[1816]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 27 17:41:55.577611 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 27 17:41:55.631608 augenrules[1839]: No rules
May 27 17:41:55.633224 systemd[1]: audit-rules.service: Deactivated successfully.
May 27 17:41:55.633659 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 27 17:41:55.635410 sudo[1816]: pam_unix(sudo:session): session closed for user root
May 27 17:41:55.688469 sshd[1815]: Connection closed by 139.178.68.195 port 35468
May 27 17:41:55.689342 sshd-session[1813]: pam_unix(sshd:session): session closed for user core
May 27 17:41:55.694051 systemd[1]: sshd@5-10.128.0.9:22-139.178.68.195:35468.service: Deactivated successfully.
May 27 17:41:55.696491 systemd[1]: session-6.scope: Deactivated successfully.
May 27 17:41:55.699256 systemd-logind[1546]: Session 6 logged out. Waiting for processes to exit.
May 27 17:41:55.700887 systemd-logind[1546]: Removed session 6.
May 27 17:41:55.750456 systemd[1]: Started sshd@6-10.128.0.9:22-139.178.68.195:35480.service - OpenSSH per-connection server daemon (139.178.68.195:35480).
May 27 17:41:56.101607 sshd[1848]: Accepted publickey for core from 139.178.68.195 port 35480 ssh2: RSA SHA256:7uv6hSrfUOFE2GLtdKQWjXT5BnA3UXN539X7K8JeYEc May 27 17:41:56.103594 sshd-session[1848]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:41:56.111033 systemd-logind[1546]: New session 7 of user core. May 27 17:41:56.114486 systemd[1]: Started session-7.scope - Session 7 of User core. May 27 17:41:56.307263 sudo[1851]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 27 17:41:56.307782 sudo[1851]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 27 17:41:56.797144 systemd[1]: Starting docker.service - Docker Application Container Engine... May 27 17:41:56.819904 (dockerd)[1869]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 27 17:41:57.134416 dockerd[1869]: time="2025-05-27T17:41:57.134342055Z" level=info msg="Starting up" May 27 17:41:57.136418 dockerd[1869]: time="2025-05-27T17:41:57.136379667Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" May 27 17:41:57.273712 dockerd[1869]: time="2025-05-27T17:41:57.273654828Z" level=info msg="Loading containers: start." May 27 17:41:57.293453 kernel: Initializing XFRM netlink socket May 27 17:41:57.622725 systemd-networkd[1436]: docker0: Link UP May 27 17:41:57.629001 dockerd[1869]: time="2025-05-27T17:41:57.628947586Z" level=info msg="Loading containers: done." 
May 27 17:41:57.650564 dockerd[1869]: time="2025-05-27T17:41:57.650507003Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 27 17:41:57.650789 dockerd[1869]: time="2025-05-27T17:41:57.650646506Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 May 27 17:41:57.650852 dockerd[1869]: time="2025-05-27T17:41:57.650794726Z" level=info msg="Initializing buildkit" May 27 17:41:57.651959 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck18306168-merged.mount: Deactivated successfully. May 27 17:41:57.683208 dockerd[1869]: time="2025-05-27T17:41:57.683136686Z" level=info msg="Completed buildkit initialization" May 27 17:41:57.692863 dockerd[1869]: time="2025-05-27T17:41:57.692786615Z" level=info msg="Daemon has completed initialization" May 27 17:41:57.693088 systemd[1]: Started docker.service - Docker Application Container Engine. May 27 17:41:57.693485 dockerd[1869]: time="2025-05-27T17:41:57.693072455Z" level=info msg="API listen on /run/docker.sock" May 27 17:41:58.577774 containerd[1563]: time="2025-05-27T17:41:58.577706589Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.5\"" May 27 17:41:59.029450 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount63390481.mount: Deactivated successfully. 
May 27 17:42:00.511409 containerd[1563]: time="2025-05-27T17:42:00.511343194Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:42:00.512828 containerd[1563]: time="2025-05-27T17:42:00.512763730Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.5: active requests=0, bytes read=28804439"
May 27 17:42:00.514189 containerd[1563]: time="2025-05-27T17:42:00.514117901Z" level=info msg="ImageCreate event name:\"sha256:495c5ce47cf7c8b58655ef50d0f0a9b43c5ae18492059dc9af4c9aacae82a5a4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:42:00.517440 containerd[1563]: time="2025-05-27T17:42:00.517378098Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0bee1bf751fe06009678c0cde7545443ba3a8d2edf71cea4c69cbb5774b9bf47\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:42:00.519028 containerd[1563]: time="2025-05-27T17:42:00.518789996Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.5\" with image id \"sha256:495c5ce47cf7c8b58655ef50d0f0a9b43c5ae18492059dc9af4c9aacae82a5a4\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0bee1bf751fe06009678c0cde7545443ba3a8d2edf71cea4c69cbb5774b9bf47\", size \"28794611\" in 1.941026958s"
May 27 17:42:00.519028 containerd[1563]: time="2025-05-27T17:42:00.518840721Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.5\" returns image reference \"sha256:495c5ce47cf7c8b58655ef50d0f0a9b43c5ae18492059dc9af4c9aacae82a5a4\""
May 27 17:42:00.519981 containerd[1563]: time="2025-05-27T17:42:00.519933094Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.5\""
May 27 17:42:01.992788 containerd[1563]: time="2025-05-27T17:42:01.992716602Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:42:01.994262 containerd[1563]: time="2025-05-27T17:42:01.994218825Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.5: active requests=0, bytes read=24784457"
May 27 17:42:01.995461 containerd[1563]: time="2025-05-27T17:42:01.995384742Z" level=info msg="ImageCreate event name:\"sha256:85dcaf69f000132c34fa34452e0fd8444bdf360b593fe06b1103680f6ecc7e00\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:42:02.000356 containerd[1563]: time="2025-05-27T17:42:02.000314420Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:79bcf2f5e614c336c02dcea9dfcdf485d7297aed6a21239a99c87f7164f9baca\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:42:02.001762 containerd[1563]: time="2025-05-27T17:42:02.001603962Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.5\" with image id \"sha256:85dcaf69f000132c34fa34452e0fd8444bdf360b593fe06b1103680f6ecc7e00\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:79bcf2f5e614c336c02dcea9dfcdf485d7297aed6a21239a99c87f7164f9baca\", size \"26384363\" in 1.481608873s"
May 27 17:42:02.001762 containerd[1563]: time="2025-05-27T17:42:02.001648921Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.5\" returns image reference \"sha256:85dcaf69f000132c34fa34452e0fd8444bdf360b593fe06b1103680f6ecc7e00\""
May 27 17:42:02.002456 containerd[1563]: time="2025-05-27T17:42:02.002381347Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.5\""
May 27 17:42:03.273331 containerd[1563]: time="2025-05-27T17:42:03.273254854Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:42:03.274872 containerd[1563]: time="2025-05-27T17:42:03.274809933Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.5: active requests=0, bytes read=19177979"
May 27 17:42:03.276228 containerd[1563]: time="2025-05-27T17:42:03.276157470Z" level=info msg="ImageCreate event name:\"sha256:2729fb488407e634105c62238a45a599db1692680526e20844060a7a8197b45a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:42:03.279636 containerd[1563]: time="2025-05-27T17:42:03.279572863Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f0f39d8b9808c407cacb3a46a5a9ce4d4a4a7cf3b674ba4bd221f5bc90051d2a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:42:03.281312 containerd[1563]: time="2025-05-27T17:42:03.280883128Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.5\" with image id \"sha256:2729fb488407e634105c62238a45a599db1692680526e20844060a7a8197b45a\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f0f39d8b9808c407cacb3a46a5a9ce4d4a4a7cf3b674ba4bd221f5bc90051d2a\", size \"20777921\" in 1.278451015s"
May 27 17:42:03.281312 containerd[1563]: time="2025-05-27T17:42:03.280927971Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.5\" returns image reference \"sha256:2729fb488407e634105c62238a45a599db1692680526e20844060a7a8197b45a\""
May 27 17:42:03.281791 containerd[1563]: time="2025-05-27T17:42:03.281737733Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.5\""
May 27 17:42:03.334345 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
May 27 17:42:03.337087 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 27 17:42:03.695392 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 27 17:42:03.707043 (kubelet)[2140]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 27 17:42:03.762559 kubelet[2140]: E0527 17:42:03.762484 2140 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 27 17:42:03.766887 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 27 17:42:03.767146 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 27 17:42:03.767732 systemd[1]: kubelet.service: Consumed 218ms CPU time, 110M memory peak.
May 27 17:42:04.841988 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1209606832.mount: Deactivated successfully.
May 27 17:42:05.493372 containerd[1563]: time="2025-05-27T17:42:05.493299629Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:42:05.494633 containerd[1563]: time="2025-05-27T17:42:05.494579069Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.5: active requests=0, bytes read=30894767"
May 27 17:42:05.495985 containerd[1563]: time="2025-05-27T17:42:05.495909158Z" level=info msg="ImageCreate event name:\"sha256:f532b7356fac4d7c4e4f6763bb5a15a43e3bb740c9fb26c85b906a4d971f2363\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:42:05.498716 containerd[1563]: time="2025-05-27T17:42:05.498652114Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:9dc6553459c3319525ba4090a780db1a133d5dee68c08e07f9b9d6ba83b42a0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:42:05.499655 containerd[1563]: time="2025-05-27T17:42:05.499475013Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.5\" with image id \"sha256:f532b7356fac4d7c4e4f6763bb5a15a43e3bb740c9fb26c85b906a4d971f2363\", repo tag \"registry.k8s.io/kube-proxy:v1.32.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:9dc6553459c3319525ba4090a780db1a133d5dee68c08e07f9b9d6ba83b42a0b\", size \"30891891\" in 2.217688577s"
May 27 17:42:05.499655 containerd[1563]: time="2025-05-27T17:42:05.499520393Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.5\" returns image reference \"sha256:f532b7356fac4d7c4e4f6763bb5a15a43e3bb740c9fb26c85b906a4d971f2363\""
May 27 17:42:05.500400 containerd[1563]: time="2025-05-27T17:42:05.500369457Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
May 27 17:42:05.877964 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2504033022.mount: Deactivated successfully.
May 27 17:42:06.976619 containerd[1563]: time="2025-05-27T17:42:06.976543823Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:42:06.978138 containerd[1563]: time="2025-05-27T17:42:06.978093702Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18571883"
May 27 17:42:06.979188 containerd[1563]: time="2025-05-27T17:42:06.979102907Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:42:06.982597 containerd[1563]: time="2025-05-27T17:42:06.982520160Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:42:06.984093 containerd[1563]: time="2025-05-27T17:42:06.983933292Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.483523411s"
May 27 17:42:06.984093 containerd[1563]: time="2025-05-27T17:42:06.983978452Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
May 27 17:42:06.985088 containerd[1563]: time="2025-05-27T17:42:06.984813409Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
May 27 17:42:07.379067 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3948306392.mount: Deactivated successfully.
May 27 17:42:07.384925 containerd[1563]: time="2025-05-27T17:42:07.384860621Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 27 17:42:07.385895 containerd[1563]: time="2025-05-27T17:42:07.385833242Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=322072"
May 27 17:42:07.387125 containerd[1563]: time="2025-05-27T17:42:07.387060092Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 27 17:42:07.389757 containerd[1563]: time="2025-05-27T17:42:07.389691816Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 27 17:42:07.391234 containerd[1563]: time="2025-05-27T17:42:07.390717702Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 405.866283ms"
May 27 17:42:07.391234 containerd[1563]: time="2025-05-27T17:42:07.390760915Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
May 27 17:42:07.391946 containerd[1563]: time="2025-05-27T17:42:07.391622541Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
May 27 17:42:07.805201 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount967945494.mount: Deactivated successfully.
May 27 17:42:09.964758 containerd[1563]: time="2025-05-27T17:42:09.964682285Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:42:09.966292 containerd[1563]: time="2025-05-27T17:42:09.966216026Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57557924"
May 27 17:42:09.967755 containerd[1563]: time="2025-05-27T17:42:09.967716672Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:42:09.971180 containerd[1563]: time="2025-05-27T17:42:09.971116387Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:42:09.972708 containerd[1563]: time="2025-05-27T17:42:09.972530004Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.580867512s"
May 27 17:42:09.972708 containerd[1563]: time="2025-05-27T17:42:09.972576548Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\""
May 27 17:42:13.834292 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
May 27 17:42:13.839401 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 27 17:42:14.220493 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 27 17:42:14.233859 (kubelet)[2296]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 27 17:42:14.304567 kubelet[2296]: E0527 17:42:14.304479 2296 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 27 17:42:14.311116 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 27 17:42:14.311602 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 27 17:42:14.312200 systemd[1]: kubelet.service: Consumed 249ms CPU time, 108.3M memory peak.
May 27 17:42:14.804120 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 27 17:42:14.804520 systemd[1]: kubelet.service: Consumed 249ms CPU time, 108.3M memory peak.
May 27 17:42:14.807749 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 27 17:42:14.858987 systemd[1]: Reload requested from client PID 2311 ('systemctl') (unit session-7.scope)...
May 27 17:42:14.859014 systemd[1]: Reloading...
May 27 17:42:15.023350 zram_generator::config[2352]: No configuration found.
May 27 17:42:15.162196 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 27 17:42:15.327900 systemd[1]: Reloading finished in 467 ms.
May 27 17:42:15.416074 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
May 27 17:42:15.416217 systemd[1]: kubelet.service: Failed with result 'signal'.
May 27 17:42:15.416627 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 27 17:42:15.416698 systemd[1]: kubelet.service: Consumed 163ms CPU time, 98.3M memory peak.
May 27 17:42:15.420136 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 27 17:42:15.710153 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 27 17:42:15.722947 (kubelet)[2408]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 27 17:42:15.778710 kubelet[2408]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 27 17:42:15.778710 kubelet[2408]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
May 27 17:42:15.778710 kubelet[2408]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 27 17:42:15.779333 kubelet[2408]: I0527 17:42:15.778784 2408 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 27 17:42:16.741166 kubelet[2408]: I0527 17:42:16.741120 2408 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
May 27 17:42:16.742546 kubelet[2408]: I0527 17:42:16.741386 2408 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 27 17:42:16.742546 kubelet[2408]: I0527 17:42:16.742100 2408 server.go:954] "Client rotation is on, will bootstrap in background"
May 27 17:42:16.781490 kubelet[2408]: E0527 17:42:16.781437 2408 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.128.0.9:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.128.0.9:6443: connect: connection refused" logger="UnhandledError"
May 27 17:42:16.783028 kubelet[2408]: I0527 17:42:16.782993 2408 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 27 17:42:16.792772 kubelet[2408]: I0527 17:42:16.792733 2408 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
May 27 17:42:16.796882 kubelet[2408]: I0527 17:42:16.796857 2408 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 27 17:42:16.799628 kubelet[2408]: I0527 17:42:16.799557 2408 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 27 17:42:16.799889 kubelet[2408]: I0527 17:42:16.799613 2408 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
May 27 17:42:16.800076 kubelet[2408]: I0527 17:42:16.799891 2408 topology_manager.go:138] "Creating topology manager with none policy"
May 27 17:42:16.800076 kubelet[2408]: I0527 17:42:16.799912 2408 container_manager_linux.go:304] "Creating device plugin manager"
May 27 17:42:16.800173 kubelet[2408]: I0527 17:42:16.800085 2408 state_mem.go:36] "Initialized new in-memory state store"
May 27 17:42:16.806395 kubelet[2408]: I0527 17:42:16.806356 2408 kubelet.go:446] "Attempting to sync node with API server"
May 27 17:42:16.809291 kubelet[2408]: I0527 17:42:16.809241 2408 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
May 27 17:42:16.809834 kubelet[2408]: I0527 17:42:16.809469 2408 kubelet.go:352] "Adding apiserver pod source"
May 27 17:42:16.809834 kubelet[2408]: I0527 17:42:16.809494 2408 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 27 17:42:16.811695 kubelet[2408]: W0527 17:42:16.811614 2408 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.128.0.9:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.9:6443: connect: connection refused
May 27 17:42:16.811796 kubelet[2408]: E0527 17:42:16.811693 2408 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.128.0.9:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal&limit=500&resourceVersion=0\": dial tcp 10.128.0.9:6443: connect: connection refused" logger="UnhandledError"
May 27 17:42:16.814117 kubelet[2408]: I0527 17:42:16.814095 2408 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
May 27 17:42:16.814870 kubelet[2408]: I0527 17:42:16.814847 2408 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 27 17:42:16.816097 kubelet[2408]: W0527 17:42:16.816071 2408 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
May 27 17:42:16.820242 kubelet[2408]: I0527 17:42:16.819743 2408 watchdog_linux.go:99] "Systemd watchdog is not enabled"
May 27 17:42:16.820242 kubelet[2408]: I0527 17:42:16.819804 2408 server.go:1287] "Started kubelet"
May 27 17:42:16.820242 kubelet[2408]: W0527 17:42:16.819991 2408 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.128.0.9:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.128.0.9:6443: connect: connection refused
May 27 17:42:16.820242 kubelet[2408]: E0527 17:42:16.820073 2408 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.128.0.9:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.9:6443: connect: connection refused" logger="UnhandledError"
May 27 17:42:16.837100 kubelet[2408]: I0527 17:42:16.836495 2408 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 27 17:42:16.837100 kubelet[2408]: E0527 17:42:16.834506 2408 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.9:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.9:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal.1843733181e57130 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal,UID:ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal,},FirstTimestamp:2025-05-27 17:42:16.819773744 +0000 UTC m=+1.091132616,LastTimestamp:2025-05-27 17:42:16.819773744 +0000 UTC m=+1.091132616,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal,}"
May 27 17:42:16.837825 kubelet[2408]: I0527 17:42:16.837775 2408 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
May 27 17:42:16.847420 kubelet[2408]: I0527 17:42:16.847392 2408 server.go:479] "Adding debug handlers to kubelet server"
May 27 17:42:16.848749 kubelet[2408]: I0527 17:42:16.838707 2408 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
May 27 17:42:16.848861 kubelet[2408]: I0527 17:42:16.838010 2408 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 27 17:42:16.849120 kubelet[2408]: I0527 17:42:16.849091 2408 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 27 17:42:16.849210 kubelet[2408]: E0527 17:42:16.841054 2408 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal\" not found"
May 27 17:42:16.849210 kubelet[2408]: W0527 17:42:16.844064 2408 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.128.0.9:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.9:6443: connect: connection refused
May 27 17:42:16.849356 kubelet[2408]: E0527 17:42:16.849221 2408 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.128.0.9:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.9:6443: connect: connection refused" logger="UnhandledError"
May 27 17:42:16.849356 kubelet[2408]: E0527 17:42:16.845720 2408 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 27 17:42:16.849356 kubelet[2408]: I0527 17:42:16.840847 2408 volume_manager.go:297] "Starting Kubelet Volume Manager"
May 27 17:42:16.849692 kubelet[2408]: I0527 17:42:16.840863 2408 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
May 27 17:42:16.849692 kubelet[2408]: E0527 17:42:16.844176 2408 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.9:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.9:6443: connect: connection refused" interval="200ms"
May 27 17:42:16.849803 kubelet[2408]: I0527 17:42:16.845138 2408 factory.go:221] Registration of the systemd container factory successfully
May 27 17:42:16.850366 kubelet[2408]: I0527 17:42:16.849805 2408 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 27 17:42:16.850366 kubelet[2408]: I0527 17:42:16.850142 2408 reconciler.go:26] "Reconciler: start to sync state"
May 27 17:42:16.852591 kubelet[2408]: I0527 17:42:16.852554 2408 factory.go:221] Registration of the containerd container factory successfully
May 27 17:42:16.869974 kubelet[2408]: I0527 17:42:16.868416 2408 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 27 17:42:16.870391 kubelet[2408]: I0527 17:42:16.870359 2408 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
May 27 17:42:16.870391 kubelet[2408]: I0527 17:42:16.870395 2408 status_manager.go:227] "Starting to sync pod status with apiserver"
May 27 17:42:16.870564 kubelet[2408]: I0527 17:42:16.870424 2408 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
May 27 17:42:16.870564 kubelet[2408]: I0527 17:42:16.870436 2408 kubelet.go:2382] "Starting kubelet main sync loop"
May 27 17:42:16.870564 kubelet[2408]: E0527 17:42:16.870508 2408 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 27 17:42:16.878199 kubelet[2408]: W0527 17:42:16.878132 2408 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.128.0.9:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.9:6443: connect: connection refused
May 27 17:42:16.878362 kubelet[2408]: E0527 17:42:16.878217 2408 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.128.0.9:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.9:6443: connect: connection refused" logger="UnhandledError"
May 27 17:42:16.893700 kubelet[2408]: I0527 17:42:16.893660 2408 cpu_manager.go:221] "Starting CPU manager" policy="none"
May 27 17:42:16.893700 kubelet[2408]: I0527 17:42:16.893690 2408 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
May 27 17:42:16.893919 kubelet[2408]: I0527 17:42:16.893717 2408 state_mem.go:36] "Initialized new in-memory state store"
May 27 17:42:16.896270 kubelet[2408]: I0527 17:42:16.896225 2408 policy_none.go:49] "None policy: Start"
May 27 17:42:16.896270 kubelet[2408]: I0527 17:42:16.896254 2408 memory_manager.go:186] "Starting memorymanager" policy="None"
May 27 17:42:16.896452 kubelet[2408]: I0527 17:42:16.896271 2408 state_mem.go:35] "Initializing new in-memory state store"
May 27 17:42:16.904744 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
May 27 17:42:16.920670 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
May 27 17:42:16.926002 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
May 27 17:42:16.939898 kubelet[2408]: I0527 17:42:16.939430 2408 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
May 27 17:42:16.940060 kubelet[2408]: I0527 17:42:16.940033 2408 eviction_manager.go:189] "Eviction manager: starting control loop"
May 27 17:42:16.940122 kubelet[2408]: I0527 17:42:16.940063 2408 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 27 17:42:16.940484 kubelet[2408]: I0527 17:42:16.940458 2408 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 27 17:42:16.943002 kubelet[2408]: E0527 17:42:16.942979 2408 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
May 27 17:42:16.943233 kubelet[2408]: E0527 17:42:16.943201 2408 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal\" not found"
May 27 17:42:16.992859 systemd[1]: Created slice kubepods-burstable-pod90dcfa5818e014744ee381bc8e6a61a8.slice - libcontainer container kubepods-burstable-pod90dcfa5818e014744ee381bc8e6a61a8.slice.
May 27 17:42:16.999009 kubelet[2408]: W0527 17:42:16.998912 2408 helpers.go:245] readString: Failed to read "/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod90dcfa5818e014744ee381bc8e6a61a8.slice/cpuset.cpus.effective": read /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod90dcfa5818e014744ee381bc8e6a61a8.slice/cpuset.cpus.effective: no such device
May 27 17:42:17.005402 kubelet[2408]: E0527 17:42:17.005265 2408 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal\" not found" node="ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal"
May 27 17:42:17.012262 systemd[1]: Created slice kubepods-burstable-pode1b42a64dd7bdf2994a9f4657e3c85ef.slice - libcontainer container kubepods-burstable-pode1b42a64dd7bdf2994a9f4657e3c85ef.slice.
May 27 17:42:17.024343 kubelet[2408]: E0527 17:42:17.024045 2408 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal\" not found" node="ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal"
May 27 17:42:17.028508 systemd[1]: Created slice kubepods-burstable-pod806befc10d530544be7c50c02f4a2f89.slice - libcontainer container kubepods-burstable-pod806befc10d530544be7c50c02f4a2f89.slice.
May 27 17:42:17.031296 kubelet[2408]: E0527 17:42:17.031240 2408 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal\" not found" node="ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal"
May 27 17:42:17.046174 kubelet[2408]: I0527 17:42:17.046138 2408 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal"
May 27 17:42:17.047426 kubelet[2408]: E0527 17:42:17.046654 2408 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.9:6443/api/v1/nodes\": dial tcp 10.128.0.9:6443: connect: connection refused" node="ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal"
May 27 17:42:17.050751 kubelet[2408]: E0527 17:42:17.050699 2408 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.9:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.9:6443: connect: connection refused" interval="400ms"
May 27 17:42:17.051833 kubelet[2408]: I0527 17:42:17.051790 2408 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/806befc10d530544be7c50c02f4a2f89-kubeconfig\") pod \"kube-scheduler-ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal\" (UID: \"806befc10d530544be7c50c02f4a2f89\") " pod="kube-system/kube-scheduler-ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal"
May 27 17:42:17.051833 kubelet[2408]: I0527 17:42:17.051840 2408 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/90dcfa5818e014744ee381bc8e6a61a8-k8s-certs\") pod \"kube-apiserver-ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal\" (UID: \"90dcfa5818e014744ee381bc8e6a61a8\") " pod="kube-system/kube-apiserver-ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal"
May 27 17:42:17.051987 kubelet[2408]: I0527 17:42:17.051873 2408 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e1b42a64dd7bdf2994a9f4657e3c85ef-flexvolume-dir\") pod \"kube-controller-manager-ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal\" (UID: \"e1b42a64dd7bdf2994a9f4657e3c85ef\") " pod="kube-system/kube-controller-manager-ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal"
May 27 17:42:17.051987 kubelet[2408]: I0527 17:42:17.051903 2408 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e1b42a64dd7bdf2994a9f4657e3c85ef-k8s-certs\") pod \"kube-controller-manager-ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal\" (UID: \"e1b42a64dd7bdf2994a9f4657e3c85ef\") " pod="kube-system/kube-controller-manager-ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal"
May 27 17:42:17.051987 kubelet[2408]: I0527 17:42:17.051934 2408 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e1b42a64dd7bdf2994a9f4657e3c85ef-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal\" (UID: \"e1b42a64dd7bdf2994a9f4657e3c85ef\") " pod="kube-system/kube-controller-manager-ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal"
May 27 17:42:17.051987 kubelet[2408]: I0527 17:42:17.051963 2408 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/90dcfa5818e014744ee381bc8e6a61a8-ca-certs\") pod \"kube-apiserver-ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal\" (UID: \"90dcfa5818e014744ee381bc8e6a61a8\") " pod="kube-system/kube-apiserver-ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal"
May 27 17:42:17.052224 kubelet[2408]: I0527 17:42:17.051992 2408 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/90dcfa5818e014744ee381bc8e6a61a8-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal\" (UID: \"90dcfa5818e014744ee381bc8e6a61a8\") " pod="kube-system/kube-apiserver-ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal"
May 27 17:42:17.052224 kubelet[2408]: I0527 17:42:17.052021 2408 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e1b42a64dd7bdf2994a9f4657e3c85ef-ca-certs\") pod \"kube-controller-manager-ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal\" (UID: \"e1b42a64dd7bdf2994a9f4657e3c85ef\") " pod="kube-system/kube-controller-manager-ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal"
May 27 17:42:17.052224 kubelet[2408]: I0527 17:42:17.052050 2408 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e1b42a64dd7bdf2994a9f4657e3c85ef-kubeconfig\") pod \"kube-controller-manager-ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal\" (UID: \"e1b42a64dd7bdf2994a9f4657e3c85ef\") " pod="kube-system/kube-controller-manager-ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal"
May 27 17:42:17.255402 kubelet[2408]: I0527 17:42:17.254064 2408 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal"
May 27 17:42:17.255402 kubelet[2408]: E0527 17:42:17.254878 2408 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.9:6443/api/v1/nodes\": dial tcp 10.128.0.9:6443: connect: connection refused" node="ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal"
May 27 17:42:17.307700 containerd[1563]: time="2025-05-27T17:42:17.307607135Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal,Uid:90dcfa5818e014744ee381bc8e6a61a8,Namespace:kube-system,Attempt:0,}"
May 27 17:42:17.327851 containerd[1563]: time="2025-05-27T17:42:17.327548172Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal,Uid:e1b42a64dd7bdf2994a9f4657e3c85ef,Namespace:kube-system,Attempt:0,}"
May 27 17:42:17.334039 containerd[1563]: time="2025-05-27T17:42:17.334002101Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal,Uid:806befc10d530544be7c50c02f4a2f89,Namespace:kube-system,Attempt:0,}"
May 27 17:42:17.343708 containerd[1563]: time="2025-05-27T17:42:17.343644799Z" level=info msg="connecting to shim 11730f7ed105de45b5e2e235f989acc31a10a00242076f5284f57686570cda8d" address="unix:///run/containerd/s/66032d75089ae8570694ef3055afbb8dc099b404233a0a6774b0dcc32268495f" namespace=k8s.io protocol=ttrpc version=3
May 27 17:42:17.396093 containerd[1563]: time="2025-05-27T17:42:17.396030237Z" level=info msg="connecting to shim 2ba3db7da6e37af335d32a92ee86a3e9c0c3aea26d7bae2841912ce7920e4e6a" address="unix:///run/containerd/s/86cd937c1dafc05b09ce95d6b4d7a464560c01bf937912d9e114f18883967b47" namespace=k8s.io protocol=ttrpc version=3
May 27 17:42:17.398772 containerd[1563]: time="2025-05-27T17:42:17.398729345Z" level=info msg="connecting to shim cddb02b4e958be014c9b2939fd791d63159c30fd545093f72ccde454fb88b627" address="unix:///run/containerd/s/e03fce8f6d257197cfc78c416919d3e89b22319ef7280f46ccdb68f94b492535" namespace=k8s.io protocol=ttrpc version=3
May 27 17:42:17.442529 systemd[1]: Started cri-containerd-11730f7ed105de45b5e2e235f989acc31a10a00242076f5284f57686570cda8d.scope - libcontainer container 11730f7ed105de45b5e2e235f989acc31a10a00242076f5284f57686570cda8d.
May 27 17:42:17.453177 kubelet[2408]: E0527 17:42:17.453119 2408 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.9:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.9:6443: connect: connection refused" interval="800ms"
May 27 17:42:17.466503 systemd[1]: Started cri-containerd-2ba3db7da6e37af335d32a92ee86a3e9c0c3aea26d7bae2841912ce7920e4e6a.scope - libcontainer container 2ba3db7da6e37af335d32a92ee86a3e9c0c3aea26d7bae2841912ce7920e4e6a.
May 27 17:42:17.484525 systemd[1]: Started cri-containerd-cddb02b4e958be014c9b2939fd791d63159c30fd545093f72ccde454fb88b627.scope - libcontainer container cddb02b4e958be014c9b2939fd791d63159c30fd545093f72ccde454fb88b627.
May 27 17:42:17.570077 containerd[1563]: time="2025-05-27T17:42:17.569453681Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal,Uid:90dcfa5818e014744ee381bc8e6a61a8,Namespace:kube-system,Attempt:0,} returns sandbox id \"11730f7ed105de45b5e2e235f989acc31a10a00242076f5284f57686570cda8d\""
May 27 17:42:17.574764 kubelet[2408]: E0527 17:42:17.574330 2408 kubelet_pods.go:555] "Hostname for pod was too long, truncated it" podName="kube-apiserver-ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-apiserver-ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-21291"
May 27 17:42:17.579363 containerd[1563]: time="2025-05-27T17:42:17.579263818Z" level=info msg="CreateContainer within sandbox \"11730f7ed105de45b5e2e235f989acc31a10a00242076f5284f57686570cda8d\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
May 27 17:42:17.610571 containerd[1563]: time="2025-05-27T17:42:17.610509404Z" level=info msg="Container d561642c7b73b095599e4ff8fca062c1982d889bde30fd97a1a94f8ea000eaf4: CDI devices from CRI Config.CDIDevices: []"
May 27 17:42:17.615301 containerd[1563]: time="2025-05-27T17:42:17.615187620Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal,Uid:806befc10d530544be7c50c02f4a2f89,Namespace:kube-system,Attempt:0,} returns sandbox id \"cddb02b4e958be014c9b2939fd791d63159c30fd545093f72ccde454fb88b627\""
May 27 17:42:17.619900 kubelet[2408]: E0527 17:42:17.619734 2408 kubelet_pods.go:555] "Hostname for pod was too long, truncated it" podName="kube-scheduler-ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-scheduler-ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-21291"
May 27 17:42:17.620351 containerd[1563]: time="2025-05-27T17:42:17.620311628Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal,Uid:e1b42a64dd7bdf2994a9f4657e3c85ef,Namespace:kube-system,Attempt:0,} returns sandbox id \"2ba3db7da6e37af335d32a92ee86a3e9c0c3aea26d7bae2841912ce7920e4e6a\""
May 27 17:42:17.621825 kubelet[2408]: E0527 17:42:17.621796 2408 kubelet_pods.go:555] "Hostname for pod was too long, truncated it" podName="kube-controller-manager-ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-controller-manager-ci-4344-0-0-0d28c620010305ab17c8.c.flat"
May 27 17:42:17.622571 containerd[1563]: time="2025-05-27T17:42:17.622522583Z" level=info msg="CreateContainer within sandbox \"cddb02b4e958be014c9b2939fd791d63159c30fd545093f72ccde454fb88b627\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
May 27 17:42:17.626339 containerd[1563]: time="2025-05-27T17:42:17.626161894Z" level=info msg="CreateContainer within sandbox \"2ba3db7da6e37af335d32a92ee86a3e9c0c3aea26d7bae2841912ce7920e4e6a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
May 27 17:42:17.632026 containerd[1563]: time="2025-05-27T17:42:17.631963555Z" level=info msg="Container 664a45b2c6bd2e7709d803510154d678e752f3e449f0e2a46bb758b9f376a791: CDI devices from CRI Config.CDIDevices: []"
May 27 17:42:17.639671 containerd[1563]: time="2025-05-27T17:42:17.639263498Z" level=info msg="CreateContainer within sandbox \"11730f7ed105de45b5e2e235f989acc31a10a00242076f5284f57686570cda8d\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"d561642c7b73b095599e4ff8fca062c1982d889bde30fd97a1a94f8ea000eaf4\""
May 27 17:42:17.640441 containerd[1563]: time="2025-05-27T17:42:17.640392221Z" level=info msg="StartContainer for \"d561642c7b73b095599e4ff8fca062c1982d889bde30fd97a1a94f8ea000eaf4\""
May 27 17:42:17.642648 containerd[1563]: time="2025-05-27T17:42:17.642598122Z" level=info msg="connecting to shim d561642c7b73b095599e4ff8fca062c1982d889bde30fd97a1a94f8ea000eaf4" address="unix:///run/containerd/s/66032d75089ae8570694ef3055afbb8dc099b404233a0a6774b0dcc32268495f" protocol=ttrpc version=3
May 27 17:42:17.649165 containerd[1563]: time="2025-05-27T17:42:17.649118682Z" level=info msg="CreateContainer within sandbox \"cddb02b4e958be014c9b2939fd791d63159c30fd545093f72ccde454fb88b627\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"664a45b2c6bd2e7709d803510154d678e752f3e449f0e2a46bb758b9f376a791\""
May 27 17:42:17.651295 containerd[1563]: time="2025-05-27T17:42:17.650823659Z" level=info msg="StartContainer for \"664a45b2c6bd2e7709d803510154d678e752f3e449f0e2a46bb758b9f376a791\""
May 27 17:42:17.653925 containerd[1563]: time="2025-05-27T17:42:17.653895031Z" level=info msg="Container 90bf8a9800052a420b41f9c69117b7c52c53f80c155c67481b3a7ca810eb41e4: CDI devices from CRI Config.CDIDevices: []"
May 27 17:42:17.654671 containerd[1563]: time="2025-05-27T17:42:17.654639817Z" level=info msg="connecting to shim 664a45b2c6bd2e7709d803510154d678e752f3e449f0e2a46bb758b9f376a791" address="unix:///run/containerd/s/e03fce8f6d257197cfc78c416919d3e89b22319ef7280f46ccdb68f94b492535" protocol=ttrpc version=3
May 27 17:42:17.661629 kubelet[2408]: I0527 17:42:17.661591 2408 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal"
May 27 17:42:17.662163 kubelet[2408]: E0527 17:42:17.662123 2408 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.9:6443/api/v1/nodes\": dial tcp 10.128.0.9:6443: connect: connection refused" node="ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal"
May 27 17:42:17.671315 containerd[1563]: time="2025-05-27T17:42:17.670899726Z" level=info msg="CreateContainer within sandbox \"2ba3db7da6e37af335d32a92ee86a3e9c0c3aea26d7bae2841912ce7920e4e6a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"90bf8a9800052a420b41f9c69117b7c52c53f80c155c67481b3a7ca810eb41e4\""
May 27 17:42:17.673083 containerd[1563]: time="2025-05-27T17:42:17.671926260Z" level=info msg="StartContainer for \"90bf8a9800052a420b41f9c69117b7c52c53f80c155c67481b3a7ca810eb41e4\""
May 27 17:42:17.680525 systemd[1]: Started cri-containerd-d561642c7b73b095599e4ff8fca062c1982d889bde30fd97a1a94f8ea000eaf4.scope - libcontainer container d561642c7b73b095599e4ff8fca062c1982d889bde30fd97a1a94f8ea000eaf4.
May 27 17:42:17.684322 containerd[1563]: time="2025-05-27T17:42:17.684225320Z" level=info msg="connecting to shim 90bf8a9800052a420b41f9c69117b7c52c53f80c155c67481b3a7ca810eb41e4" address="unix:///run/containerd/s/86cd937c1dafc05b09ce95d6b4d7a464560c01bf937912d9e114f18883967b47" protocol=ttrpc version=3
May 27 17:42:17.697610 systemd[1]: Started cri-containerd-664a45b2c6bd2e7709d803510154d678e752f3e449f0e2a46bb758b9f376a791.scope - libcontainer container 664a45b2c6bd2e7709d803510154d678e752f3e449f0e2a46bb758b9f376a791.
May 27 17:42:17.731757 kubelet[2408]: W0527 17:42:17.731619 2408 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.128.0.9:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.9:6443: connect: connection refused
May 27 17:42:17.731757 kubelet[2408]: E0527 17:42:17.731718 2408 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.128.0.9:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.9:6443: connect: connection refused" logger="UnhandledError"
May 27 17:42:17.733674 systemd[1]: Started cri-containerd-90bf8a9800052a420b41f9c69117b7c52c53f80c155c67481b3a7ca810eb41e4.scope - libcontainer container 90bf8a9800052a420b41f9c69117b7c52c53f80c155c67481b3a7ca810eb41e4.
May 27 17:42:17.755332 kubelet[2408]: W0527 17:42:17.754951 2408 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.128.0.9:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.9:6443: connect: connection refused
May 27 17:42:17.755772 kubelet[2408]: E0527 17:42:17.755704 2408 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.128.0.9:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal&limit=500&resourceVersion=0\": dial tcp 10.128.0.9:6443: connect: connection refused" logger="UnhandledError"
May 27 17:42:17.812852 containerd[1563]: time="2025-05-27T17:42:17.812809117Z" level=info msg="StartContainer for \"d561642c7b73b095599e4ff8fca062c1982d889bde30fd97a1a94f8ea000eaf4\" returns successfully"
May 27 17:42:17.871762 containerd[1563]: time="2025-05-27T17:42:17.871272442Z" level=info msg="StartContainer for \"664a45b2c6bd2e7709d803510154d678e752f3e449f0e2a46bb758b9f376a791\" returns successfully"
May 27 17:42:17.895142 containerd[1563]: time="2025-05-27T17:42:17.894979946Z" level=info msg="StartContainer for \"90bf8a9800052a420b41f9c69117b7c52c53f80c155c67481b3a7ca810eb41e4\" returns successfully"
May 27 17:42:17.909693 kubelet[2408]: E0527 17:42:17.909447 2408 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal\" not found" node="ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal"
May 27 17:42:17.914047 kubelet[2408]: E0527 17:42:17.913931 2408 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal\" not found" node="ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal"
May 27 17:42:17.940317 kubelet[2408]: W0527 17:42:17.939543 2408 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.128.0.9:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.9:6443: connect: connection refused
May 27 17:42:17.940317 kubelet[2408]: E0527 17:42:17.939640 2408 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.128.0.9:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.9:6443: connect: connection refused" logger="UnhandledError"
May 27 17:42:18.469543 kubelet[2408]: I0527 17:42:18.469501 2408 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal"
May 27 17:42:18.920992 kubelet[2408]: E0527 17:42:18.920946 2408 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal\" not found" node="ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal"
May 27 17:42:18.921590 kubelet[2408]: E0527 17:42:18.921520 2408 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal\" not found" node="ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal"
May 27 17:42:18.923374 kubelet[2408]: E0527 17:42:18.923334 2408 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal\" not found" node="ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal"
May 27 17:42:19.921134 kubelet[2408]: E0527 17:42:19.919856 2408 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal\" not found" node="ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal"
May 27 17:42:20.690701 kubelet[2408]: E0527 17:42:20.690461 2408 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal\" not found" node="ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal"
May 27 17:42:20.716728 kubelet[2408]: E0527 17:42:20.716678 2408 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal\" not found" node="ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal"
May 27 17:42:20.792761 kubelet[2408]: E0527 17:42:20.792626 2408 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal.1843733181e57130 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal,UID:ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal,},FirstTimestamp:2025-05-27 17:42:16.819773744 +0000 UTC m=+1.091132616,LastTimestamp:2025-05-27 17:42:16.819773744 +0000 UTC m=+1.091132616,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal,}"
May 27 17:42:20.833446 kubelet[2408]: I0527 17:42:20.833381 2408 kubelet_node_status.go:78] "Successfully registered node" node="ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal"
May 27 17:42:20.833446 kubelet[2408]: E0527 17:42:20.833454 2408 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal\": node \"ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal\" not found"
May 27 17:42:20.833701 kubelet[2408]: I0527 17:42:20.833629 2408 apiserver.go:52] "Watching apiserver"
May 27 17:42:20.842389 kubelet[2408]: I0527 17:42:20.842348 2408 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal"
May 27 17:42:20.850268 kubelet[2408]: I0527 17:42:20.850228 2408 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
May 27 17:42:20.900715 kubelet[2408]: E0527 17:42:20.900671 2408 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal"
May 27 17:42:20.900715 kubelet[2408]: I0527 17:42:20.900712 2408 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal"
May 27 17:42:20.911409 kubelet[2408]: E0527 17:42:20.911351 2408 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal"
May 27 17:42:20.911409 kubelet[2408]: I0527 17:42:20.911407 2408 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal"
May 27 17:42:20.927307 kubelet[2408]: E0527 17:42:20.926906 2408 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal"
May 27 17:42:21.049772 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
May 27 17:42:21.639414 kubelet[2408]: I0527 17:42:21.639130 2408 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal"
May 27 17:42:21.648465 kubelet[2408]: W0527 17:42:21.648415 2408 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots]
May 27 17:42:22.696074 systemd[1]: Reload requested from client PID 2675 ('systemctl') (unit session-7.scope)...
May 27 17:42:22.696095 systemd[1]: Reloading...
May 27 17:42:22.858324 zram_generator::config[2722]: No configuration found.
May 27 17:42:22.971616 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 27 17:42:23.167100 systemd[1]: Reloading finished in 470 ms.
May 27 17:42:23.214434 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
May 27 17:42:23.228079 systemd[1]: kubelet.service: Deactivated successfully.
May 27 17:42:23.228479 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 27 17:42:23.228573 systemd[1]: kubelet.service: Consumed 1.638s CPU time, 129.5M memory peak.
May 27 17:42:23.232525 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 27 17:42:23.527586 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 27 17:42:23.542956 (kubelet)[2767]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 27 17:42:23.614109 kubelet[2767]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 27 17:42:23.614109 kubelet[2767]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
May 27 17:42:23.614109 kubelet[2767]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 27 17:42:23.614745 kubelet[2767]: I0527 17:42:23.614174 2767 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 27 17:42:23.625657 kubelet[2767]: I0527 17:42:23.625507 2767 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
May 27 17:42:23.625657 kubelet[2767]: I0527 17:42:23.625540 2767 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 27 17:42:23.625936 kubelet[2767]: I0527 17:42:23.625912 2767 server.go:954] "Client rotation is on, will bootstrap in background"
May 27 17:42:23.627396 kubelet[2767]: I0527 17:42:23.627355 2767 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
May 27 17:42:23.630255 kubelet[2767]: I0527 17:42:23.630199 2767 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 27 17:42:23.636406 kubelet[2767]: I0527 17:42:23.636292 2767 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
May 27 17:42:23.640521 kubelet[2767]: I0527 17:42:23.640487 2767 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 27 17:42:23.640895 kubelet[2767]: I0527 17:42:23.640839 2767 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 27 17:42:23.641131 kubelet[2767]: I0527 17:42:23.640880 2767 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QO
SReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 27 17:42:23.641131 kubelet[2767]: I0527 17:42:23.641129 2767 topology_manager.go:138] "Creating topology manager with none policy" May 27 17:42:23.641396 kubelet[2767]: I0527 17:42:23.641148 2767 container_manager_linux.go:304] "Creating device plugin manager" May 27 17:42:23.641396 kubelet[2767]: I0527 17:42:23.641219 2767 state_mem.go:36] "Initialized new in-memory state store" May 27 17:42:23.641488 kubelet[2767]: I0527 17:42:23.641458 2767 kubelet.go:446] "Attempting to sync node with API server" May 27 17:42:23.641538 kubelet[2767]: I0527 17:42:23.641502 2767 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 27 17:42:23.641538 kubelet[2767]: I0527 17:42:23.641534 2767 kubelet.go:352] "Adding apiserver pod source" May 27 17:42:23.641636 kubelet[2767]: I0527 17:42:23.641549 2767 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 27 17:42:23.648298 kubelet[2767]: I0527 17:42:23.645861 2767 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 27 17:42:23.648645 kubelet[2767]: I0527 17:42:23.648626 2767 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 27 17:42:23.649349 kubelet[2767]: I0527 17:42:23.649327 2767 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 27 17:42:23.649494 kubelet[2767]: I0527 17:42:23.649482 2767 server.go:1287] "Started kubelet" May 27 17:42:23.657166 kubelet[2767]: I0527 17:42:23.657120 2767 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 27 
17:42:23.659590 kubelet[2767]: I0527 17:42:23.659566 2767 server.go:479] "Adding debug handlers to kubelet server" May 27 17:42:23.662228 kubelet[2767]: I0527 17:42:23.662198 2767 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 27 17:42:23.662750 kubelet[2767]: I0527 17:42:23.662688 2767 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 27 17:42:23.663132 kubelet[2767]: I0527 17:42:23.663102 2767 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 27 17:42:23.673057 kubelet[2767]: I0527 17:42:23.673027 2767 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 27 17:42:23.679618 kubelet[2767]: I0527 17:42:23.679590 2767 volume_manager.go:297] "Starting Kubelet Volume Manager" May 27 17:42:23.680065 kubelet[2767]: E0527 17:42:23.680029 2767 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal\" not found" May 27 17:42:23.682300 kubelet[2767]: I0527 17:42:23.680777 2767 desired_state_of_world_populator.go:150] "Desired state populator starts to run" May 27 17:42:23.682615 kubelet[2767]: I0527 17:42:23.682598 2767 reconciler.go:26] "Reconciler: start to sync state" May 27 17:42:23.687174 kubelet[2767]: I0527 17:42:23.687140 2767 factory.go:221] Registration of the systemd container factory successfully May 27 17:42:23.688342 kubelet[2767]: I0527 17:42:23.688313 2767 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 27 17:42:23.702113 kubelet[2767]: I0527 17:42:23.698041 2767 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" May 27 17:42:23.702113 kubelet[2767]: I0527 17:42:23.701134 2767 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 27 17:42:23.702113 kubelet[2767]: I0527 17:42:23.701160 2767 status_manager.go:227] "Starting to sync pod status with apiserver" May 27 17:42:23.702113 kubelet[2767]: I0527 17:42:23.701184 2767 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." May 27 17:42:23.702113 kubelet[2767]: I0527 17:42:23.701196 2767 kubelet.go:2382] "Starting kubelet main sync loop" May 27 17:42:23.702113 kubelet[2767]: E0527 17:42:23.701264 2767 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 27 17:42:23.712710 kubelet[2767]: E0527 17:42:23.712462 2767 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 27 17:42:23.723746 kubelet[2767]: I0527 17:42:23.722194 2767 factory.go:221] Registration of the containerd container factory successfully May 27 17:42:23.811148 kubelet[2767]: E0527 17:42:23.809999 2767 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 27 17:42:23.812996 kubelet[2767]: I0527 17:42:23.812408 2767 cpu_manager.go:221] "Starting CPU manager" policy="none" May 27 17:42:23.812996 kubelet[2767]: I0527 17:42:23.812434 2767 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 27 17:42:23.812996 kubelet[2767]: I0527 17:42:23.812459 2767 state_mem.go:36] "Initialized new in-memory state store" May 27 17:42:23.812996 kubelet[2767]: I0527 17:42:23.812692 2767 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 27 17:42:23.812996 kubelet[2767]: I0527 17:42:23.812707 2767 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 27 17:42:23.812996 
kubelet[2767]: I0527 17:42:23.812734 2767 policy_none.go:49] "None policy: Start" May 27 17:42:23.812996 kubelet[2767]: I0527 17:42:23.812748 2767 memory_manager.go:186] "Starting memorymanager" policy="None" May 27 17:42:23.812996 kubelet[2767]: I0527 17:42:23.812762 2767 state_mem.go:35] "Initializing new in-memory state store" May 27 17:42:23.812996 kubelet[2767]: I0527 17:42:23.812943 2767 state_mem.go:75] "Updated machine memory state" May 27 17:42:23.820602 kubelet[2767]: I0527 17:42:23.820578 2767 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 27 17:42:23.820931 kubelet[2767]: I0527 17:42:23.820914 2767 eviction_manager.go:189] "Eviction manager: starting control loop" May 27 17:42:23.821118 kubelet[2767]: I0527 17:42:23.821074 2767 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 27 17:42:23.821817 kubelet[2767]: I0527 17:42:23.821797 2767 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 27 17:42:23.828261 kubelet[2767]: E0527 17:42:23.825171 2767 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" May 27 17:42:23.944339 kubelet[2767]: I0527 17:42:23.944304 2767 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal" May 27 17:42:23.955744 kubelet[2767]: I0527 17:42:23.955702 2767 kubelet_node_status.go:124] "Node was previously registered" node="ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal" May 27 17:42:23.956251 kubelet[2767]: I0527 17:42:23.955990 2767 kubelet_node_status.go:78] "Successfully registered node" node="ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal" May 27 17:42:24.011155 kubelet[2767]: I0527 17:42:24.011115 2767 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal" May 27 17:42:24.011677 kubelet[2767]: I0527 17:42:24.011652 2767 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal" May 27 17:42:24.012780 kubelet[2767]: I0527 17:42:24.011720 2767 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal" May 27 17:42:24.021312 kubelet[2767]: W0527 17:42:24.021254 2767 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] May 27 17:42:24.021446 kubelet[2767]: W0527 17:42:24.021344 2767 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] May 27 17:42:24.021509 kubelet[2767]: W0527 17:42:24.021457 2767 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no 
more than 63 characters must not contain dots] May 27 17:42:24.022853 kubelet[2767]: E0527 17:42:24.021762 2767 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal\" already exists" pod="kube-system/kube-controller-manager-ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal" May 27 17:42:24.084485 kubelet[2767]: I0527 17:42:24.084449 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e1b42a64dd7bdf2994a9f4657e3c85ef-kubeconfig\") pod \"kube-controller-manager-ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal\" (UID: \"e1b42a64dd7bdf2994a9f4657e3c85ef\") " pod="kube-system/kube-controller-manager-ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal" May 27 17:42:24.084773 kubelet[2767]: I0527 17:42:24.084708 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e1b42a64dd7bdf2994a9f4657e3c85ef-k8s-certs\") pod \"kube-controller-manager-ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal\" (UID: \"e1b42a64dd7bdf2994a9f4657e3c85ef\") " pod="kube-system/kube-controller-manager-ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal" May 27 17:42:24.084976 kubelet[2767]: I0527 17:42:24.084914 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/90dcfa5818e014744ee381bc8e6a61a8-ca-certs\") pod \"kube-apiserver-ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal\" (UID: \"90dcfa5818e014744ee381bc8e6a61a8\") " pod="kube-system/kube-apiserver-ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal" May 27 17:42:24.085144 kubelet[2767]: I0527 17:42:24.085094 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/90dcfa5818e014744ee381bc8e6a61a8-k8s-certs\") pod \"kube-apiserver-ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal\" (UID: \"90dcfa5818e014744ee381bc8e6a61a8\") " pod="kube-system/kube-apiserver-ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal" May 27 17:42:24.085344 kubelet[2767]: I0527 17:42:24.085319 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/90dcfa5818e014744ee381bc8e6a61a8-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal\" (UID: \"90dcfa5818e014744ee381bc8e6a61a8\") " pod="kube-system/kube-apiserver-ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal" May 27 17:42:24.085506 kubelet[2767]: I0527 17:42:24.085475 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e1b42a64dd7bdf2994a9f4657e3c85ef-ca-certs\") pod \"kube-controller-manager-ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal\" (UID: \"e1b42a64dd7bdf2994a9f4657e3c85ef\") " pod="kube-system/kube-controller-manager-ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal" May 27 17:42:24.085720 kubelet[2767]: I0527 17:42:24.085671 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e1b42a64dd7bdf2994a9f4657e3c85ef-flexvolume-dir\") pod \"kube-controller-manager-ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal\" (UID: \"e1b42a64dd7bdf2994a9f4657e3c85ef\") " pod="kube-system/kube-controller-manager-ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal" May 27 17:42:24.085874 kubelet[2767]: I0527 17:42:24.085823 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e1b42a64dd7bdf2994a9f4657e3c85ef-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal\" (UID: \"e1b42a64dd7bdf2994a9f4657e3c85ef\") " pod="kube-system/kube-controller-manager-ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal" May 27 17:42:24.086054 kubelet[2767]: I0527 17:42:24.086032 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/806befc10d530544be7c50c02f4a2f89-kubeconfig\") pod \"kube-scheduler-ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal\" (UID: \"806befc10d530544be7c50c02f4a2f89\") " pod="kube-system/kube-scheduler-ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal" May 27 17:42:24.643303 kubelet[2767]: I0527 17:42:24.642086 2767 apiserver.go:52] "Watching apiserver" May 27 17:42:24.683006 kubelet[2767]: I0527 17:42:24.682933 2767 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" May 27 17:42:24.794537 kubelet[2767]: I0527 17:42:24.794441 2767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal" podStartSLOduration=0.794416791 podStartE2EDuration="794.416791ms" podCreationTimestamp="2025-05-27 17:42:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 17:42:24.782610335 +0000 UTC m=+1.232074774" watchObservedRunningTime="2025-05-27 17:42:24.794416791 +0000 UTC m=+1.243881232" May 27 17:42:24.808853 kubelet[2767]: I0527 17:42:24.808764 2767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal" podStartSLOduration=3.808738473 
podStartE2EDuration="3.808738473s" podCreationTimestamp="2025-05-27 17:42:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 17:42:24.794244944 +0000 UTC m=+1.243709383" watchObservedRunningTime="2025-05-27 17:42:24.808738473 +0000 UTC m=+1.258202911" May 27 17:42:24.825162 kubelet[2767]: I0527 17:42:24.825085 2767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal" podStartSLOduration=0.825063345 podStartE2EDuration="825.063345ms" podCreationTimestamp="2025-05-27 17:42:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 17:42:24.809571875 +0000 UTC m=+1.259036307" watchObservedRunningTime="2025-05-27 17:42:24.825063345 +0000 UTC m=+1.274527781" May 27 17:42:29.035700 kubelet[2767]: I0527 17:42:29.035648 2767 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 27 17:42:29.036657 kubelet[2767]: I0527 17:42:29.036566 2767 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 27 17:42:29.036725 containerd[1563]: time="2025-05-27T17:42:29.036057619Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 27 17:42:29.786059 systemd[1]: Created slice kubepods-besteffort-podcadcd286_547a_4a8e_99ea_b0570b1ec2b9.slice - libcontainer container kubepods-besteffort-podcadcd286_547a_4a8e_99ea_b0570b1ec2b9.slice. 
May 27 17:42:29.825657 kubelet[2767]: I0527 17:42:29.825486 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cadcd286-547a-4a8e-99ea-b0570b1ec2b9-xtables-lock\") pod \"kube-proxy-j7qx9\" (UID: \"cadcd286-547a-4a8e-99ea-b0570b1ec2b9\") " pod="kube-system/kube-proxy-j7qx9" May 27 17:42:29.825657 kubelet[2767]: I0527 17:42:29.825544 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jt7df\" (UniqueName: \"kubernetes.io/projected/cadcd286-547a-4a8e-99ea-b0570b1ec2b9-kube-api-access-jt7df\") pod \"kube-proxy-j7qx9\" (UID: \"cadcd286-547a-4a8e-99ea-b0570b1ec2b9\") " pod="kube-system/kube-proxy-j7qx9" May 27 17:42:29.825657 kubelet[2767]: I0527 17:42:29.825570 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/cadcd286-547a-4a8e-99ea-b0570b1ec2b9-kube-proxy\") pod \"kube-proxy-j7qx9\" (UID: \"cadcd286-547a-4a8e-99ea-b0570b1ec2b9\") " pod="kube-system/kube-proxy-j7qx9" May 27 17:42:29.825657 kubelet[2767]: I0527 17:42:29.825591 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cadcd286-547a-4a8e-99ea-b0570b1ec2b9-lib-modules\") pod \"kube-proxy-j7qx9\" (UID: \"cadcd286-547a-4a8e-99ea-b0570b1ec2b9\") " pod="kube-system/kube-proxy-j7qx9" May 27 17:42:30.098087 containerd[1563]: time="2025-05-27T17:42:30.098019261Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-j7qx9,Uid:cadcd286-547a-4a8e-99ea-b0570b1ec2b9,Namespace:kube-system,Attempt:0,}" May 27 17:42:30.130359 containerd[1563]: time="2025-05-27T17:42:30.130260866Z" level=info msg="connecting to shim 2af0562dcb7ccca1d04195e143a03405adf6b6074c46caa9a769f78b7f30ecdb" 
address="unix:///run/containerd/s/90c205b2100ea467d0b066660479e920c6f3f1b060bf45fe710a81bdb3653c45" namespace=k8s.io protocol=ttrpc version=3 May 27 17:42:30.173591 systemd[1]: Started cri-containerd-2af0562dcb7ccca1d04195e143a03405adf6b6074c46caa9a769f78b7f30ecdb.scope - libcontainer container 2af0562dcb7ccca1d04195e143a03405adf6b6074c46caa9a769f78b7f30ecdb. May 27 17:42:30.245920 containerd[1563]: time="2025-05-27T17:42:30.245775687Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-j7qx9,Uid:cadcd286-547a-4a8e-99ea-b0570b1ec2b9,Namespace:kube-system,Attempt:0,} returns sandbox id \"2af0562dcb7ccca1d04195e143a03405adf6b6074c46caa9a769f78b7f30ecdb\"" May 27 17:42:30.253476 containerd[1563]: time="2025-05-27T17:42:30.253347761Z" level=info msg="CreateContainer within sandbox \"2af0562dcb7ccca1d04195e143a03405adf6b6074c46caa9a769f78b7f30ecdb\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 27 17:42:30.277333 systemd[1]: Created slice kubepods-besteffort-pod9a646b5e_455a_4fa0_b1cf_7fc1fb8c7e20.slice - libcontainer container kubepods-besteffort-pod9a646b5e_455a_4fa0_b1cf_7fc1fb8c7e20.slice. 
May 27 17:42:30.286018 containerd[1563]: time="2025-05-27T17:42:30.283572208Z" level=info msg="Container 92588091725d7c7416ef8ba8455275f5af462911c5dc38ef8c75bc734f0d532c: CDI devices from CRI Config.CDIDevices: []" May 27 17:42:30.302353 containerd[1563]: time="2025-05-27T17:42:30.302294486Z" level=info msg="CreateContainer within sandbox \"2af0562dcb7ccca1d04195e143a03405adf6b6074c46caa9a769f78b7f30ecdb\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"92588091725d7c7416ef8ba8455275f5af462911c5dc38ef8c75bc734f0d532c\"" May 27 17:42:30.303333 containerd[1563]: time="2025-05-27T17:42:30.303188623Z" level=info msg="StartContainer for \"92588091725d7c7416ef8ba8455275f5af462911c5dc38ef8c75bc734f0d532c\"" May 27 17:42:30.306174 containerd[1563]: time="2025-05-27T17:42:30.306132932Z" level=info msg="connecting to shim 92588091725d7c7416ef8ba8455275f5af462911c5dc38ef8c75bc734f0d532c" address="unix:///run/containerd/s/90c205b2100ea467d0b066660479e920c6f3f1b060bf45fe710a81bdb3653c45" protocol=ttrpc version=3 May 27 17:42:30.328529 kubelet[2767]: I0527 17:42:30.328488 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nv5h2\" (UniqueName: \"kubernetes.io/projected/9a646b5e-455a-4fa0-b1cf-7fc1fb8c7e20-kube-api-access-nv5h2\") pod \"tigera-operator-844669ff44-vd8lz\" (UID: \"9a646b5e-455a-4fa0-b1cf-7fc1fb8c7e20\") " pod="tigera-operator/tigera-operator-844669ff44-vd8lz" May 27 17:42:30.330043 kubelet[2767]: I0527 17:42:30.329571 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/9a646b5e-455a-4fa0-b1cf-7fc1fb8c7e20-var-lib-calico\") pod \"tigera-operator-844669ff44-vd8lz\" (UID: \"9a646b5e-455a-4fa0-b1cf-7fc1fb8c7e20\") " pod="tigera-operator/tigera-operator-844669ff44-vd8lz" May 27 17:42:30.335560 systemd[1]: Started 
cri-containerd-92588091725d7c7416ef8ba8455275f5af462911c5dc38ef8c75bc734f0d532c.scope - libcontainer container 92588091725d7c7416ef8ba8455275f5af462911c5dc38ef8c75bc734f0d532c. May 27 17:42:30.402664 containerd[1563]: time="2025-05-27T17:42:30.401956936Z" level=info msg="StartContainer for \"92588091725d7c7416ef8ba8455275f5af462911c5dc38ef8c75bc734f0d532c\" returns successfully" May 27 17:42:30.584317 containerd[1563]: time="2025-05-27T17:42:30.584241542Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-844669ff44-vd8lz,Uid:9a646b5e-455a-4fa0-b1cf-7fc1fb8c7e20,Namespace:tigera-operator,Attempt:0,}" May 27 17:42:30.614759 containerd[1563]: time="2025-05-27T17:42:30.614654031Z" level=info msg="connecting to shim 1cdd78989f445fc2851ebc73109b3dbebddcc4e31c9881669182e2f37f09c601" address="unix:///run/containerd/s/05dfb70d2d7e96f9fcaec3a917c7ba61c98fb504defc97fcdb4b265e3aa9400c" namespace=k8s.io protocol=ttrpc version=3 May 27 17:42:30.645520 systemd[1]: Started cri-containerd-1cdd78989f445fc2851ebc73109b3dbebddcc4e31c9881669182e2f37f09c601.scope - libcontainer container 1cdd78989f445fc2851ebc73109b3dbebddcc4e31c9881669182e2f37f09c601. 
May 27 17:42:30.727028 containerd[1563]: time="2025-05-27T17:42:30.726796537Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-844669ff44-vd8lz,Uid:9a646b5e-455a-4fa0-b1cf-7fc1fb8c7e20,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"1cdd78989f445fc2851ebc73109b3dbebddcc4e31c9881669182e2f37f09c601\"" May 27 17:42:30.732132 containerd[1563]: time="2025-05-27T17:42:30.732080987Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.0\"" May 27 17:42:31.464762 kubelet[2767]: I0527 17:42:31.464393 2767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-j7qx9" podStartSLOduration=2.464365943 podStartE2EDuration="2.464365943s" podCreationTimestamp="2025-05-27 17:42:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 17:42:30.803208784 +0000 UTC m=+7.252673223" watchObservedRunningTime="2025-05-27 17:42:31.464365943 +0000 UTC m=+7.913830385" May 27 17:42:31.880241 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4147993538.mount: Deactivated successfully. 
May 27 17:42:33.063606 containerd[1563]: time="2025-05-27T17:42:33.063552415Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:42:33.065006 containerd[1563]: time="2025-05-27T17:42:33.064924962Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.0: active requests=0, bytes read=25055451" May 27 17:42:33.066664 containerd[1563]: time="2025-05-27T17:42:33.066580843Z" level=info msg="ImageCreate event name:\"sha256:5e43c1322619406528ff596056dfeb70cb8d20c5c00439feb752a7725302e033\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:42:33.069770 containerd[1563]: time="2025-05-27T17:42:33.069692439Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:e0a34b265aebce1a2db906d8dad99190706e8bf3910cae626b9c2eb6bbb21775\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:42:33.070833 containerd[1563]: time="2025-05-27T17:42:33.070650398Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.0\" with image id \"sha256:5e43c1322619406528ff596056dfeb70cb8d20c5c00439feb752a7725302e033\", repo tag \"quay.io/tigera/operator:v1.38.0\", repo digest \"quay.io/tigera/operator@sha256:e0a34b265aebce1a2db906d8dad99190706e8bf3910cae626b9c2eb6bbb21775\", size \"25051446\" in 2.338520224s" May 27 17:42:33.070833 containerd[1563]: time="2025-05-27T17:42:33.070693991Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.0\" returns image reference \"sha256:5e43c1322619406528ff596056dfeb70cb8d20c5c00439feb752a7725302e033\"" May 27 17:42:33.074864 containerd[1563]: time="2025-05-27T17:42:33.074823665Z" level=info msg="CreateContainer within sandbox \"1cdd78989f445fc2851ebc73109b3dbebddcc4e31c9881669182e2f37f09c601\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" May 27 17:42:33.088302 containerd[1563]: time="2025-05-27T17:42:33.086968325Z" level=info msg="Container 
3098804cde778bcd07a424946bd0293703716fe2ded5fb01e77d047b749f37f3: CDI devices from CRI Config.CDIDevices: []" May 27 17:42:33.094368 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount164461218.mount: Deactivated successfully. May 27 17:42:33.102625 containerd[1563]: time="2025-05-27T17:42:33.102567273Z" level=info msg="CreateContainer within sandbox \"1cdd78989f445fc2851ebc73109b3dbebddcc4e31c9881669182e2f37f09c601\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"3098804cde778bcd07a424946bd0293703716fe2ded5fb01e77d047b749f37f3\"" May 27 17:42:33.104234 containerd[1563]: time="2025-05-27T17:42:33.103269954Z" level=info msg="StartContainer for \"3098804cde778bcd07a424946bd0293703716fe2ded5fb01e77d047b749f37f3\"" May 27 17:42:33.105618 containerd[1563]: time="2025-05-27T17:42:33.105552997Z" level=info msg="connecting to shim 3098804cde778bcd07a424946bd0293703716fe2ded5fb01e77d047b749f37f3" address="unix:///run/containerd/s/05dfb70d2d7e96f9fcaec3a917c7ba61c98fb504defc97fcdb4b265e3aa9400c" protocol=ttrpc version=3 May 27 17:42:33.151530 systemd[1]: Started cri-containerd-3098804cde778bcd07a424946bd0293703716fe2ded5fb01e77d047b749f37f3.scope - libcontainer container 3098804cde778bcd07a424946bd0293703716fe2ded5fb01e77d047b749f37f3. May 27 17:42:33.201549 containerd[1563]: time="2025-05-27T17:42:33.201493384Z" level=info msg="StartContainer for \"3098804cde778bcd07a424946bd0293703716fe2ded5fb01e77d047b749f37f3\" returns successfully" May 27 17:42:34.244255 update_engine[1549]: I20250527 17:42:34.244148 1549 update_attempter.cc:509] Updating boot flags... May 27 17:42:40.925677 sudo[1851]: pam_unix(sudo:session): session closed for user root May 27 17:42:40.978692 sshd[1850]: Connection closed by 139.178.68.195 port 35480 May 27 17:42:40.979232 sshd-session[1848]: pam_unix(sshd:session): session closed for user core May 27 17:42:40.987951 systemd[1]: sshd@6-10.128.0.9:22-139.178.68.195:35480.service: Deactivated successfully. 
May 27 17:42:40.995234 systemd[1]: session-7.scope: Deactivated successfully. May 27 17:42:40.997172 systemd[1]: session-7.scope: Consumed 7.774s CPU time, 230.5M memory peak. May 27 17:42:41.006926 systemd-logind[1546]: Session 7 logged out. Waiting for processes to exit. May 27 17:42:41.009352 systemd-logind[1546]: Removed session 7. May 27 17:42:46.184225 kubelet[2767]: I0527 17:42:46.184146 2767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-844669ff44-vd8lz" podStartSLOduration=13.841144907 podStartE2EDuration="16.184122472s" podCreationTimestamp="2025-05-27 17:42:30 +0000 UTC" firstStartedPulling="2025-05-27 17:42:30.729090552 +0000 UTC m=+7.178554981" lastFinishedPulling="2025-05-27 17:42:33.072068114 +0000 UTC m=+9.521532546" observedRunningTime="2025-05-27 17:42:33.806136883 +0000 UTC m=+10.255601321" watchObservedRunningTime="2025-05-27 17:42:46.184122472 +0000 UTC m=+22.633586912" May 27 17:42:46.205811 kubelet[2767]: I0527 17:42:46.205738 2767 status_manager.go:890] "Failed to get status for pod" podUID="c5be396b-9d72-40ae-8e92-93db1f23e850" pod="calico-system/calico-typha-754d8cc679-pshgw" err="pods \"calico-typha-754d8cc679-pshgw\" is forbidden: User \"system:node:ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal\" cannot get resource \"pods\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal' and this object" May 27 17:42:46.207861 systemd[1]: Created slice kubepods-besteffort-podc5be396b_9d72_40ae_8e92_93db1f23e850.slice - libcontainer container kubepods-besteffort-podc5be396b_9d72_40ae_8e92_93db1f23e850.slice. 
May 27 17:42:46.212899 kubelet[2767]: W0527 17:42:46.212841 2767 reflector.go:569] object-"calico-system"/"typha-certs": failed to list *v1.Secret: secrets "typha-certs" is forbidden: User "system:node:ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal' and this object May 27 17:42:46.213033 kubelet[2767]: E0527 17:42:46.212929 2767 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"typha-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"typha-certs\" is forbidden: User \"system:node:ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal' and this object" logger="UnhandledError" May 27 17:42:46.237558 kubelet[2767]: I0527 17:42:46.237468 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c5be396b-9d72-40ae-8e92-93db1f23e850-tigera-ca-bundle\") pod \"calico-typha-754d8cc679-pshgw\" (UID: \"c5be396b-9d72-40ae-8e92-93db1f23e850\") " pod="calico-system/calico-typha-754d8cc679-pshgw" May 27 17:42:46.237558 kubelet[2767]: I0527 17:42:46.237572 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/c5be396b-9d72-40ae-8e92-93db1f23e850-typha-certs\") pod \"calico-typha-754d8cc679-pshgw\" (UID: \"c5be396b-9d72-40ae-8e92-93db1f23e850\") " pod="calico-system/calico-typha-754d8cc679-pshgw" May 27 17:42:46.237832 kubelet[2767]: I0527 17:42:46.237606 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-v6ftb\" (UniqueName: \"kubernetes.io/projected/c5be396b-9d72-40ae-8e92-93db1f23e850-kube-api-access-v6ftb\") pod \"calico-typha-754d8cc679-pshgw\" (UID: \"c5be396b-9d72-40ae-8e92-93db1f23e850\") " pod="calico-system/calico-typha-754d8cc679-pshgw" May 27 17:42:46.448939 systemd[1]: Created slice kubepods-besteffort-pod30f1b549_0020_4d3c_8f36_7bd15a0632c2.slice - libcontainer container kubepods-besteffort-pod30f1b549_0020_4d3c_8f36_7bd15a0632c2.slice. May 27 17:42:46.539961 kubelet[2767]: I0527 17:42:46.539907 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/30f1b549-0020-4d3c-8f36-7bd15a0632c2-var-lib-calico\") pod \"calico-node-57clp\" (UID: \"30f1b549-0020-4d3c-8f36-7bd15a0632c2\") " pod="calico-system/calico-node-57clp" May 27 17:42:46.539961 kubelet[2767]: I0527 17:42:46.539967 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/30f1b549-0020-4d3c-8f36-7bd15a0632c2-cni-bin-dir\") pod \"calico-node-57clp\" (UID: \"30f1b549-0020-4d3c-8f36-7bd15a0632c2\") " pod="calico-system/calico-node-57clp" May 27 17:42:46.540224 kubelet[2767]: I0527 17:42:46.539997 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/30f1b549-0020-4d3c-8f36-7bd15a0632c2-lib-modules\") pod \"calico-node-57clp\" (UID: \"30f1b549-0020-4d3c-8f36-7bd15a0632c2\") " pod="calico-system/calico-node-57clp" May 27 17:42:46.540224 kubelet[2767]: I0527 17:42:46.540020 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/30f1b549-0020-4d3c-8f36-7bd15a0632c2-policysync\") pod \"calico-node-57clp\" (UID: \"30f1b549-0020-4d3c-8f36-7bd15a0632c2\") " pod="calico-system/calico-node-57clp" 
May 27 17:42:46.540224 kubelet[2767]: I0527 17:42:46.540045 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/30f1b549-0020-4d3c-8f36-7bd15a0632c2-var-run-calico\") pod \"calico-node-57clp\" (UID: \"30f1b549-0020-4d3c-8f36-7bd15a0632c2\") " pod="calico-system/calico-node-57clp" May 27 17:42:46.540224 kubelet[2767]: I0527 17:42:46.540070 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/30f1b549-0020-4d3c-8f36-7bd15a0632c2-xtables-lock\") pod \"calico-node-57clp\" (UID: \"30f1b549-0020-4d3c-8f36-7bd15a0632c2\") " pod="calico-system/calico-node-57clp" May 27 17:42:46.540224 kubelet[2767]: I0527 17:42:46.540106 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n644h\" (UniqueName: \"kubernetes.io/projected/30f1b549-0020-4d3c-8f36-7bd15a0632c2-kube-api-access-n644h\") pod \"calico-node-57clp\" (UID: \"30f1b549-0020-4d3c-8f36-7bd15a0632c2\") " pod="calico-system/calico-node-57clp" May 27 17:42:46.540499 kubelet[2767]: I0527 17:42:46.540133 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/30f1b549-0020-4d3c-8f36-7bd15a0632c2-cni-net-dir\") pod \"calico-node-57clp\" (UID: \"30f1b549-0020-4d3c-8f36-7bd15a0632c2\") " pod="calico-system/calico-node-57clp" May 27 17:42:46.540499 kubelet[2767]: I0527 17:42:46.540157 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/30f1b549-0020-4d3c-8f36-7bd15a0632c2-flexvol-driver-host\") pod \"calico-node-57clp\" (UID: \"30f1b549-0020-4d3c-8f36-7bd15a0632c2\") " pod="calico-system/calico-node-57clp" May 27 17:42:46.540499 kubelet[2767]: I0527 
17:42:46.540185 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/30f1b549-0020-4d3c-8f36-7bd15a0632c2-cni-log-dir\") pod \"calico-node-57clp\" (UID: \"30f1b549-0020-4d3c-8f36-7bd15a0632c2\") " pod="calico-system/calico-node-57clp" May 27 17:42:46.540499 kubelet[2767]: I0527 17:42:46.540213 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/30f1b549-0020-4d3c-8f36-7bd15a0632c2-node-certs\") pod \"calico-node-57clp\" (UID: \"30f1b549-0020-4d3c-8f36-7bd15a0632c2\") " pod="calico-system/calico-node-57clp" May 27 17:42:46.540499 kubelet[2767]: I0527 17:42:46.540250 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/30f1b549-0020-4d3c-8f36-7bd15a0632c2-tigera-ca-bundle\") pod \"calico-node-57clp\" (UID: \"30f1b549-0020-4d3c-8f36-7bd15a0632c2\") " pod="calico-system/calico-node-57clp" May 27 17:42:46.648045 kubelet[2767]: E0527 17:42:46.647514 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:42:46.648045 kubelet[2767]: W0527 17:42:46.647546 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:42:46.648045 kubelet[2767]: E0527 17:42:46.647591 2767 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:42:46.649065 kubelet[2767]: E0527 17:42:46.648891 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:42:46.649065 kubelet[2767]: W0527 17:42:46.648914 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:42:46.649065 kubelet[2767]: E0527 17:42:46.648938 2767 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:42:46.655360 kubelet[2767]: E0527 17:42:46.655337 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:42:46.655665 kubelet[2767]: W0527 17:42:46.655492 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:42:46.655665 kubelet[2767]: E0527 17:42:46.655552 2767 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:42:46.660816 kubelet[2767]: E0527 17:42:46.660620 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:42:46.660816 kubelet[2767]: W0527 17:42:46.660646 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:42:46.661069 kubelet[2767]: E0527 17:42:46.661051 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:42:46.661208 kubelet[2767]: W0527 17:42:46.661192 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:42:46.661375 kubelet[2767]: E0527 17:42:46.661328 2767 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:42:46.661482 kubelet[2767]: E0527 17:42:46.661162 2767 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:42:46.695536 kubelet[2767]: E0527 17:42:46.695406 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:42:46.695536 kubelet[2767]: W0527 17:42:46.695438 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:42:46.695536 kubelet[2767]: E0527 17:42:46.695470 2767 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:42:46.759369 containerd[1563]: time="2025-05-27T17:42:46.758885689Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-57clp,Uid:30f1b549-0020-4d3c-8f36-7bd15a0632c2,Namespace:calico-system,Attempt:0,}" May 27 17:42:46.800093 containerd[1563]: time="2025-05-27T17:42:46.800036411Z" level=info msg="connecting to shim cfbaf2cd85d6ba9592ac3d7fd7adf75f1c60a8d66000c455f025d70d2968b795" address="unix:///run/containerd/s/3d9cc141c0ea4dfbb9dac23b6d706a4f3c81c2f658ebf4f7efd1640a145c48e2" namespace=k8s.io protocol=ttrpc version=3 May 27 17:42:46.818340 kubelet[2767]: E0527 17:42:46.816690 2767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4vmcb" podUID="bae642d2-ac66-4c94-b401-a475f27bd04d" May 27 17:42:46.825859 kubelet[2767]: E0527 17:42:46.825441 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:42:46.825859 kubelet[2767]: W0527 17:42:46.825470 2767 driver-call.go:149] FlexVolume: driver call failed: executable: 
/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:42:46.825859 kubelet[2767]: E0527 17:42:46.825519 2767 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:42:46.832329 kubelet[2767]: E0527 17:42:46.830946 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:42:46.832329 kubelet[2767]: W0527 17:42:46.830971 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:42:46.832329 kubelet[2767]: E0527 17:42:46.830998 2767 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:42:46.837969 kubelet[2767]: E0527 17:42:46.837719 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:42:46.837969 kubelet[2767]: W0527 17:42:46.837745 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:42:46.837969 kubelet[2767]: E0527 17:42:46.837772 2767 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:42:46.839531 kubelet[2767]: E0527 17:42:46.839330 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:42:46.839531 kubelet[2767]: W0527 17:42:46.839351 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:42:46.839531 kubelet[2767]: E0527 17:42:46.839373 2767 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:42:46.841728 kubelet[2767]: E0527 17:42:46.841699 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:42:46.841728 kubelet[2767]: W0527 17:42:46.841726 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:42:46.841913 kubelet[2767]: E0527 17:42:46.841760 2767 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:42:46.845948 kubelet[2767]: E0527 17:42:46.844560 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:42:46.845948 kubelet[2767]: W0527 17:42:46.844583 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:42:46.845948 kubelet[2767]: E0527 17:42:46.844605 2767 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:42:46.850955 kubelet[2767]: E0527 17:42:46.850920 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:42:46.850955 kubelet[2767]: W0527 17:42:46.850951 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:42:46.851149 kubelet[2767]: E0527 17:42:46.850977 2767 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:42:46.851557 kubelet[2767]: E0527 17:42:46.851523 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:42:46.851557 kubelet[2767]: W0527 17:42:46.851549 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:42:46.851733 kubelet[2767]: E0527 17:42:46.851570 2767 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:42:46.855497 kubelet[2767]: E0527 17:42:46.855455 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:42:46.855497 kubelet[2767]: W0527 17:42:46.855484 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:42:46.855678 kubelet[2767]: E0527 17:42:46.855531 2767 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:42:46.855678 kubelet[2767]: I0527 17:42:46.855571 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/bae642d2-ac66-4c94-b401-a475f27bd04d-registration-dir\") pod \"csi-node-driver-4vmcb\" (UID: \"bae642d2-ac66-4c94-b401-a475f27bd04d\") " pod="calico-system/csi-node-driver-4vmcb" May 27 17:42:46.857409 kubelet[2767]: E0527 17:42:46.857374 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:42:46.857569 kubelet[2767]: W0527 17:42:46.857543 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:42:46.857726 kubelet[2767]: E0527 17:42:46.857701 2767 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:42:46.858763 kubelet[2767]: I0527 17:42:46.858721 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bae642d2-ac66-4c94-b401-a475f27bd04d-kubelet-dir\") pod \"csi-node-driver-4vmcb\" (UID: \"bae642d2-ac66-4c94-b401-a475f27bd04d\") " pod="calico-system/csi-node-driver-4vmcb" May 27 17:42:46.860012 kubelet[2767]: E0527 17:42:46.859982 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:42:46.861079 kubelet[2767]: W0527 17:42:46.860011 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:42:46.861079 kubelet[2767]: E0527 17:42:46.860390 2767 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:42:46.862340 kubelet[2767]: E0527 17:42:46.861966 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:42:46.862460 kubelet[2767]: W0527 17:42:46.862355 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:42:46.862798 kubelet[2767]: E0527 17:42:46.862619 2767 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:42:46.864293 kubelet[2767]: E0527 17:42:46.864246 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:42:46.864407 kubelet[2767]: W0527 17:42:46.864311 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:42:46.864742 kubelet[2767]: E0527 17:42:46.864537 2767 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:42:46.869346 kubelet[2767]: E0527 17:42:46.867659 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:42:46.869346 kubelet[2767]: W0527 17:42:46.867805 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:42:46.872238 kubelet[2767]: E0527 17:42:46.871214 2767 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:42:46.874607 kubelet[2767]: E0527 17:42:46.873356 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:42:46.874607 kubelet[2767]: W0527 17:42:46.873378 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:42:46.874607 kubelet[2767]: E0527 17:42:46.874482 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:42:46.874607 kubelet[2767]: W0527 17:42:46.874514 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:42:46.874607 kubelet[2767]: E0527 17:42:46.874539 2767 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:42:46.875767 kubelet[2767]: E0527 17:42:46.875589 2767 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:42:46.876035 kubelet[2767]: E0527 17:42:46.875987 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:42:46.876348 kubelet[2767]: W0527 17:42:46.876170 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:42:46.876778 kubelet[2767]: E0527 17:42:46.876713 2767 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:42:46.878750 kubelet[2767]: E0527 17:42:46.878580 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:42:46.878750 kubelet[2767]: W0527 17:42:46.878602 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:42:46.879133 kubelet[2767]: E0527 17:42:46.879043 2767 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:42:46.882020 kubelet[2767]: E0527 17:42:46.881989 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:42:46.882187 kubelet[2767]: W0527 17:42:46.882016 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:42:46.882187 kubelet[2767]: E0527 17:42:46.882156 2767 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:42:46.884087 kubelet[2767]: E0527 17:42:46.883308 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:42:46.884087 kubelet[2767]: W0527 17:42:46.883330 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:42:46.884087 kubelet[2767]: E0527 17:42:46.883351 2767 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:42:46.884087 kubelet[2767]: E0527 17:42:46.883852 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:42:46.884087 kubelet[2767]: W0527 17:42:46.883867 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:42:46.884087 kubelet[2767]: E0527 17:42:46.883884 2767 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:42:46.884471 kubelet[2767]: E0527 17:42:46.884209 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:42:46.884471 kubelet[2767]: W0527 17:42:46.884223 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:42:46.884471 kubelet[2767]: E0527 17:42:46.884243 2767 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:42:46.884636 kubelet[2767]: E0527 17:42:46.884595 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:42:46.884636 kubelet[2767]: W0527 17:42:46.884608 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:42:46.884636 kubelet[2767]: E0527 17:42:46.884624 2767 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:42:46.888298 kubelet[2767]: E0527 17:42:46.885002 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:42:46.888298 kubelet[2767]: W0527 17:42:46.885019 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:42:46.888298 kubelet[2767]: E0527 17:42:46.885054 2767 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:42:46.888298 kubelet[2767]: E0527 17:42:46.885339 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:42:46.888298 kubelet[2767]: W0527 17:42:46.885353 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:42:46.888298 kubelet[2767]: E0527 17:42:46.885368 2767 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:42:46.888298 kubelet[2767]: E0527 17:42:46.886024 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:42:46.888298 kubelet[2767]: W0527 17:42:46.886040 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:42:46.888298 kubelet[2767]: E0527 17:42:46.886058 2767 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:42:46.893666 systemd[1]: Started cri-containerd-cfbaf2cd85d6ba9592ac3d7fd7adf75f1c60a8d66000c455f025d70d2968b795.scope - libcontainer container cfbaf2cd85d6ba9592ac3d7fd7adf75f1c60a8d66000c455f025d70d2968b795. 
May 27 17:42:46.981050 kubelet[2767]: E0527 17:42:46.980885 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:42:46.981050 kubelet[2767]: W0527 17:42:46.980914 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:42:46.981050 kubelet[2767]: E0527 17:42:46.980944 2767 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:42:46.982240 kubelet[2767]: E0527 17:42:46.982096 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:42:46.982240 kubelet[2767]: W0527 17:42:46.982115 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:42:46.983097 kubelet[2767]: E0527 17:42:46.982508 2767 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:42:46.983576 kubelet[2767]: E0527 17:42:46.983557 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:42:46.983713 kubelet[2767]: W0527 17:42:46.983691 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:42:46.983958 kubelet[2767]: E0527 17:42:46.983851 2767 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:42:46.984891 kubelet[2767]: E0527 17:42:46.984657 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:42:46.984891 kubelet[2767]: W0527 17:42:46.984679 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:42:46.984891 kubelet[2767]: E0527 17:42:46.984705 2767 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:42:46.984891 kubelet[2767]: I0527 17:42:46.984740 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/bae642d2-ac66-4c94-b401-a475f27bd04d-varrun\") pod \"csi-node-driver-4vmcb\" (UID: \"bae642d2-ac66-4c94-b401-a475f27bd04d\") " pod="calico-system/csi-node-driver-4vmcb" May 27 17:42:46.986637 kubelet[2767]: E0527 17:42:46.986575 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:42:46.987377 kubelet[2767]: W0527 17:42:46.986756 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:42:46.987594 kubelet[2767]: E0527 17:42:46.987574 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:42:46.989431 kubelet[2767]: W0527 17:42:46.988370 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 
17:42:46.989431 kubelet[2767]: E0527 17:42:46.987696 2767 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:42:46.989431 kubelet[2767]: I0527 17:42:46.988621 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/bae642d2-ac66-4c94-b401-a475f27bd04d-socket-dir\") pod \"csi-node-driver-4vmcb\" (UID: \"bae642d2-ac66-4c94-b401-a475f27bd04d\") " pod="calico-system/csi-node-driver-4vmcb" May 27 17:42:46.991604 kubelet[2767]: E0527 17:42:46.991423 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:42:46.991604 kubelet[2767]: W0527 17:42:46.991444 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:42:46.992056 kubelet[2767]: E0527 17:42:46.991855 2767 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:42:46.992056 kubelet[2767]: E0527 17:42:46.991898 2767 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:42:46.992056 kubelet[2767]: E0527 17:42:46.992019 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:42:46.992056 kubelet[2767]: W0527 17:42:46.992032 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:42:46.993171 kubelet[2767]: E0527 17:42:46.993028 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:42:46.993555 kubelet[2767]: W0527 17:42:46.993488 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:42:46.994032 kubelet[2767]: E0527 17:42:46.993258 2767 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:42:46.994032 kubelet[2767]: E0527 17:42:46.993968 2767 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:42:46.994032 kubelet[2767]: I0527 17:42:46.994005 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8h5p\" (UniqueName: \"kubernetes.io/projected/bae642d2-ac66-4c94-b401-a475f27bd04d-kube-api-access-w8h5p\") pod \"csi-node-driver-4vmcb\" (UID: \"bae642d2-ac66-4c94-b401-a475f27bd04d\") " pod="calico-system/csi-node-driver-4vmcb" May 27 17:42:46.996735 kubelet[2767]: E0527 17:42:46.996691 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:42:46.997140 kubelet[2767]: W0527 17:42:46.996914 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:42:46.997721 kubelet[2767]: E0527 17:42:46.997303 2767 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:42:46.998503 kubelet[2767]: E0527 17:42:46.998451 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:42:46.998898 kubelet[2767]: W0527 17:42:46.998601 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:42:46.999415 kubelet[2767]: E0527 17:42:46.999076 2767 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:42:47.000306 kubelet[2767]: E0527 17:42:46.999932 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:42:47.000554 kubelet[2767]: W0527 17:42:47.000518 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:42:47.000921 kubelet[2767]: E0527 17:42:47.000848 2767 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:42:47.001510 kubelet[2767]: E0527 17:42:47.001489 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:42:47.001746 kubelet[2767]: W0527 17:42:47.001631 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:42:47.002204 kubelet[2767]: E0527 17:42:47.001859 2767 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:42:47.002818 kubelet[2767]: E0527 17:42:47.002789 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:42:47.003004 kubelet[2767]: W0527 17:42:47.002922 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:42:47.003258 kubelet[2767]: E0527 17:42:47.003181 2767 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:42:47.003778 kubelet[2767]: E0527 17:42:47.003736 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:42:47.003778 kubelet[2767]: W0527 17:42:47.003755 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:42:47.004053 kubelet[2767]: E0527 17:42:47.004014 2767 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:42:47.004595 kubelet[2767]: E0527 17:42:47.004537 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:42:47.004595 kubelet[2767]: W0527 17:42:47.004556 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:42:47.004932 kubelet[2767]: E0527 17:42:47.004881 2767 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:42:47.005358 kubelet[2767]: E0527 17:42:47.005304 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:42:47.005572 kubelet[2767]: W0527 17:42:47.005324 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:42:47.005880 kubelet[2767]: E0527 17:42:47.005624 2767 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:42:47.006537 kubelet[2767]: E0527 17:42:47.006389 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:42:47.006537 kubelet[2767]: W0527 17:42:47.006424 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:42:47.006537 kubelet[2767]: E0527 17:42:47.006449 2767 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:42:47.007427 kubelet[2767]: E0527 17:42:47.007347 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:42:47.007427 kubelet[2767]: W0527 17:42:47.007378 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:42:47.007680 kubelet[2767]: E0527 17:42:47.007398 2767 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:42:47.024821 containerd[1563]: time="2025-05-27T17:42:47.024670942Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-57clp,Uid:30f1b549-0020-4d3c-8f36-7bd15a0632c2,Namespace:calico-system,Attempt:0,} returns sandbox id \"cfbaf2cd85d6ba9592ac3d7fd7adf75f1c60a8d66000c455f025d70d2968b795\"" May 27 17:42:47.030962 containerd[1563]: time="2025-05-27T17:42:47.030888957Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\"" May 27 17:42:47.103672 kubelet[2767]: E0527 17:42:47.103605 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:42:47.104124 kubelet[2767]: W0527 17:42:47.103637 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:42:47.104124 kubelet[2767]: E0527 17:42:47.103786 2767 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:42:47.104943 kubelet[2767]: E0527 17:42:47.104869 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:42:47.104943 kubelet[2767]: W0527 17:42:47.104910 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:42:47.104943 kubelet[2767]: E0527 17:42:47.104946 2767 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:42:47.105681 kubelet[2767]: E0527 17:42:47.105289 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:42:47.105681 kubelet[2767]: W0527 17:42:47.105306 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:42:47.105681 kubelet[2767]: E0527 17:42:47.105401 2767 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:42:47.106504 kubelet[2767]: E0527 17:42:47.105802 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:42:47.106504 kubelet[2767]: W0527 17:42:47.105818 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:42:47.106504 kubelet[2767]: E0527 17:42:47.106097 2767 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:42:47.106504 kubelet[2767]: E0527 17:42:47.106237 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:42:47.106504 kubelet[2767]: W0527 17:42:47.106250 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:42:47.106504 kubelet[2767]: E0527 17:42:47.106372 2767 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:42:47.107610 kubelet[2767]: E0527 17:42:47.107579 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:42:47.107610 kubelet[2767]: W0527 17:42:47.107606 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:42:47.107959 kubelet[2767]: E0527 17:42:47.107636 2767 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:42:47.108208 kubelet[2767]: E0527 17:42:47.108053 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:42:47.108208 kubelet[2767]: W0527 17:42:47.108075 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:42:47.108208 kubelet[2767]: E0527 17:42:47.108201 2767 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:42:47.109222 kubelet[2767]: E0527 17:42:47.109089 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:42:47.109222 kubelet[2767]: W0527 17:42:47.109217 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:42:47.109532 kubelet[2767]: E0527 17:42:47.109243 2767 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:42:47.110142 kubelet[2767]: E0527 17:42:47.110116 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:42:47.110517 kubelet[2767]: W0527 17:42:47.110142 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:42:47.110517 kubelet[2767]: E0527 17:42:47.110178 2767 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:42:47.110907 kubelet[2767]: E0527 17:42:47.110876 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:42:47.110907 kubelet[2767]: W0527 17:42:47.110897 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:42:47.111663 kubelet[2767]: E0527 17:42:47.111314 2767 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:42:47.111768 kubelet[2767]: E0527 17:42:47.111748 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:42:47.111944 kubelet[2767]: W0527 17:42:47.111768 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:42:47.111944 kubelet[2767]: E0527 17:42:47.111795 2767 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:42:47.113040 kubelet[2767]: E0527 17:42:47.113000 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:42:47.113449 kubelet[2767]: W0527 17:42:47.113045 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:42:47.113449 kubelet[2767]: E0527 17:42:47.113085 2767 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:42:47.114637 kubelet[2767]: E0527 17:42:47.114114 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:42:47.114637 kubelet[2767]: W0527 17:42:47.114134 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:42:47.114637 kubelet[2767]: E0527 17:42:47.114162 2767 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:42:47.114838 kubelet[2767]: E0527 17:42:47.114784 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:42:47.114838 kubelet[2767]: W0527 17:42:47.114800 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:42:47.114838 kubelet[2767]: E0527 17:42:47.114818 2767 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:42:47.115708 kubelet[2767]: E0527 17:42:47.115682 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:42:47.115708 kubelet[2767]: W0527 17:42:47.115708 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:42:47.115866 kubelet[2767]: E0527 17:42:47.115737 2767 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:42:47.127809 kubelet[2767]: E0527 17:42:47.127771 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:42:47.127809 kubelet[2767]: W0527 17:42:47.127797 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:42:47.127992 kubelet[2767]: E0527 17:42:47.127821 2767 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:42:47.339548 kubelet[2767]: E0527 17:42:47.339503 2767 secret.go:189] Couldn't get secret calico-system/typha-certs: failed to sync secret cache: timed out waiting for the condition May 27 17:42:47.340248 kubelet[2767]: E0527 17:42:47.339636 2767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c5be396b-9d72-40ae-8e92-93db1f23e850-typha-certs podName:c5be396b-9d72-40ae-8e92-93db1f23e850 nodeName:}" failed. No retries permitted until 2025-05-27 17:42:47.839598132 +0000 UTC m=+24.289062568 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "typha-certs" (UniqueName: "kubernetes.io/secret/c5be396b-9d72-40ae-8e92-93db1f23e850-typha-certs") pod "calico-typha-754d8cc679-pshgw" (UID: "c5be396b-9d72-40ae-8e92-93db1f23e850") : failed to sync secret cache: timed out waiting for the condition May 27 17:42:47.411953 kubelet[2767]: E0527 17:42:47.411912 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:42:47.411953 kubelet[2767]: W0527 17:42:47.411944 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:42:47.412160 kubelet[2767]: E0527 17:42:47.411977 2767 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:42:47.513096 kubelet[2767]: E0527 17:42:47.513047 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:42:47.513096 kubelet[2767]: W0527 17:42:47.513075 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:42:47.513409 kubelet[2767]: E0527 17:42:47.513104 2767 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:42:47.614704 kubelet[2767]: E0527 17:42:47.614567 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:42:47.614704 kubelet[2767]: W0527 17:42:47.614598 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:42:47.614704 kubelet[2767]: E0527 17:42:47.614625 2767 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:42:47.716405 kubelet[2767]: E0527 17:42:47.716351 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:42:47.716784 kubelet[2767]: W0527 17:42:47.716493 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:42:47.716784 kubelet[2767]: E0527 17:42:47.716524 2767 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:42:47.819745 kubelet[2767]: E0527 17:42:47.819684 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:42:47.820118 kubelet[2767]: W0527 17:42:47.819982 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:42:47.820118 kubelet[2767]: E0527 17:42:47.820039 2767 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:42:47.924091 kubelet[2767]: E0527 17:42:47.922649 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:42:47.924091 kubelet[2767]: W0527 17:42:47.923070 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:42:47.924091 kubelet[2767]: E0527 17:42:47.923104 2767 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:42:47.925709 kubelet[2767]: E0527 17:42:47.925641 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:42:47.925709 kubelet[2767]: W0527 17:42:47.925676 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:42:47.925709 kubelet[2767]: E0527 17:42:47.925706 2767 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:42:47.927620 kubelet[2767]: E0527 17:42:47.927573 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:42:47.927620 kubelet[2767]: W0527 17:42:47.927595 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:42:47.927620 kubelet[2767]: E0527 17:42:47.927615 2767 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:42:47.928184 kubelet[2767]: E0527 17:42:47.928040 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:42:47.928184 kubelet[2767]: W0527 17:42:47.928055 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:42:47.928184 kubelet[2767]: E0527 17:42:47.928073 2767 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:42:47.928969 kubelet[2767]: E0527 17:42:47.928936 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:42:47.928969 kubelet[2767]: W0527 17:42:47.928956 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:42:47.929112 kubelet[2767]: E0527 17:42:47.928975 2767 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:42:47.944026 kubelet[2767]: E0527 17:42:47.943951 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:42:47.944026 kubelet[2767]: W0527 17:42:47.943975 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:42:47.944026 kubelet[2767]: E0527 17:42:47.943998 2767 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:42:47.989104 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2312612475.mount: Deactivated successfully. May 27 17:42:48.020405 containerd[1563]: time="2025-05-27T17:42:48.020330272Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-754d8cc679-pshgw,Uid:c5be396b-9d72-40ae-8e92-93db1f23e850,Namespace:calico-system,Attempt:0,}" May 27 17:42:48.075215 containerd[1563]: time="2025-05-27T17:42:48.074848413Z" level=info msg="connecting to shim a0ef1b57ac996a549a3ddb6f7ef5a2769ab754a1a069ff999d419a99f3e1bad3" address="unix:///run/containerd/s/d14ad1bad37476705a629451b56799e8a5db6e395684b813cc87750fe7d7a0b6" namespace=k8s.io protocol=ttrpc version=3 May 27 17:42:48.128530 systemd[1]: Started cri-containerd-a0ef1b57ac996a549a3ddb6f7ef5a2769ab754a1a069ff999d419a99f3e1bad3.scope - libcontainer container a0ef1b57ac996a549a3ddb6f7ef5a2769ab754a1a069ff999d419a99f3e1bad3. 
May 27 17:42:48.199336 containerd[1563]: time="2025-05-27T17:42:48.197644015Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:42:48.200044 containerd[1563]: time="2025-05-27T17:42:48.199970956Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0: active requests=0, bytes read=5934460" May 27 17:42:48.203114 containerd[1563]: time="2025-05-27T17:42:48.202712097Z" level=info msg="ImageCreate event name:\"sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:42:48.215204 containerd[1563]: time="2025-05-27T17:42:48.215156384Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:ce76dd87f11d3fd0054c35ad2e0e9f833748d007f77a9bfe859d0ddcb66fcb2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:42:48.216406 containerd[1563]: time="2025-05-27T17:42:48.216351513Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" with image id \"sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:ce76dd87f11d3fd0054c35ad2e0e9f833748d007f77a9bfe859d0ddcb66fcb2c\", size \"5934282\" in 1.185292587s" May 27 17:42:48.216607 containerd[1563]: time="2025-05-27T17:42:48.216579900Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" returns image reference \"sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676\"" May 27 17:42:48.221658 containerd[1563]: time="2025-05-27T17:42:48.221477896Z" level=info msg="CreateContainer within sandbox \"cfbaf2cd85d6ba9592ac3d7fd7adf75f1c60a8d66000c455f025d70d2968b795\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 27 
17:42:48.236145 containerd[1563]: time="2025-05-27T17:42:48.236080008Z" level=info msg="Container 1f73d831ed962b582ee1a299328f3524f265e39afc719844d3c9fe761cd84cc4: CDI devices from CRI Config.CDIDevices: []" May 27 17:42:48.240160 containerd[1563]: time="2025-05-27T17:42:48.240093213Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-754d8cc679-pshgw,Uid:c5be396b-9d72-40ae-8e92-93db1f23e850,Namespace:calico-system,Attempt:0,} returns sandbox id \"a0ef1b57ac996a549a3ddb6f7ef5a2769ab754a1a069ff999d419a99f3e1bad3\"" May 27 17:42:48.242753 containerd[1563]: time="2025-05-27T17:42:48.242439448Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.0\"" May 27 17:42:48.252707 containerd[1563]: time="2025-05-27T17:42:48.252661111Z" level=info msg="CreateContainer within sandbox \"cfbaf2cd85d6ba9592ac3d7fd7adf75f1c60a8d66000c455f025d70d2968b795\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"1f73d831ed962b582ee1a299328f3524f265e39afc719844d3c9fe761cd84cc4\"" May 27 17:42:48.253700 containerd[1563]: time="2025-05-27T17:42:48.253622631Z" level=info msg="StartContainer for \"1f73d831ed962b582ee1a299328f3524f265e39afc719844d3c9fe761cd84cc4\"" May 27 17:42:48.257298 containerd[1563]: time="2025-05-27T17:42:48.256470957Z" level=info msg="connecting to shim 1f73d831ed962b582ee1a299328f3524f265e39afc719844d3c9fe761cd84cc4" address="unix:///run/containerd/s/3d9cc141c0ea4dfbb9dac23b6d706a4f3c81c2f658ebf4f7efd1640a145c48e2" protocol=ttrpc version=3 May 27 17:42:48.284815 systemd[1]: Started cri-containerd-1f73d831ed962b582ee1a299328f3524f265e39afc719844d3c9fe761cd84cc4.scope - libcontainer container 1f73d831ed962b582ee1a299328f3524f265e39afc719844d3c9fe761cd84cc4. 
May 27 17:42:48.349842 containerd[1563]: time="2025-05-27T17:42:48.349747890Z" level=info msg="StartContainer for \"1f73d831ed962b582ee1a299328f3524f265e39afc719844d3c9fe761cd84cc4\" returns successfully" May 27 17:42:48.370438 systemd[1]: cri-containerd-1f73d831ed962b582ee1a299328f3524f265e39afc719844d3c9fe761cd84cc4.scope: Deactivated successfully. May 27 17:42:48.374780 containerd[1563]: time="2025-05-27T17:42:48.374623940Z" level=info msg="received exit event container_id:\"1f73d831ed962b582ee1a299328f3524f265e39afc719844d3c9fe761cd84cc4\" id:\"1f73d831ed962b582ee1a299328f3524f265e39afc719844d3c9fe761cd84cc4\" pid:3390 exited_at:{seconds:1748367768 nanos:373524282}" May 27 17:42:48.374780 containerd[1563]: time="2025-05-27T17:42:48.374686005Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1f73d831ed962b582ee1a299328f3524f265e39afc719844d3c9fe761cd84cc4\" id:\"1f73d831ed962b582ee1a299328f3524f265e39afc719844d3c9fe761cd84cc4\" pid:3390 exited_at:{seconds:1748367768 nanos:373524282}" May 27 17:42:48.410929 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1f73d831ed962b582ee1a299328f3524f265e39afc719844d3c9fe761cd84cc4-rootfs.mount: Deactivated successfully. 
May 27 17:42:48.702393 kubelet[2767]: E0527 17:42:48.702256 2767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4vmcb" podUID="bae642d2-ac66-4c94-b401-a475f27bd04d" May 27 17:42:50.704090 kubelet[2767]: E0527 17:42:50.702144 2767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4vmcb" podUID="bae642d2-ac66-4c94-b401-a475f27bd04d" May 27 17:42:51.082356 containerd[1563]: time="2025-05-27T17:42:51.081794601Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:42:51.083722 containerd[1563]: time="2025-05-27T17:42:51.083659788Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.0: active requests=0, bytes read=33665828" May 27 17:42:51.085379 containerd[1563]: time="2025-05-27T17:42:51.085319478Z" level=info msg="ImageCreate event name:\"sha256:71be0570e8645ac646675719e0da6ac33a05810991b31aecc303e7add70933be\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:42:51.088094 containerd[1563]: time="2025-05-27T17:42:51.088030488Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d282f6c773c4631b9dc8379eb093c54ca34c7728d55d6509cb45da5e1f5baf8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:42:51.089018 containerd[1563]: time="2025-05-27T17:42:51.088847135Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.0\" with image id \"sha256:71be0570e8645ac646675719e0da6ac33a05810991b31aecc303e7add70933be\", repo tag 
\"ghcr.io/flatcar/calico/typha:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d282f6c773c4631b9dc8379eb093c54ca34c7728d55d6509cb45da5e1f5baf8f\", size \"35158523\" in 2.846364657s" May 27 17:42:51.089018 containerd[1563]: time="2025-05-27T17:42:51.088892945Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.0\" returns image reference \"sha256:71be0570e8645ac646675719e0da6ac33a05810991b31aecc303e7add70933be\"" May 27 17:42:51.090918 containerd[1563]: time="2025-05-27T17:42:51.090674727Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.0\"" May 27 17:42:51.114262 containerd[1563]: time="2025-05-27T17:42:51.114205276Z" level=info msg="CreateContainer within sandbox \"a0ef1b57ac996a549a3ddb6f7ef5a2769ab754a1a069ff999d419a99f3e1bad3\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 27 17:42:51.124782 containerd[1563]: time="2025-05-27T17:42:51.123373018Z" level=info msg="Container 326e31f59ae2e48908835e9a849e4f9ea22791075c0d8bad33f6b1d985dcd3ab: CDI devices from CRI Config.CDIDevices: []" May 27 17:42:51.143717 containerd[1563]: time="2025-05-27T17:42:51.143656867Z" level=info msg="CreateContainer within sandbox \"a0ef1b57ac996a549a3ddb6f7ef5a2769ab754a1a069ff999d419a99f3e1bad3\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"326e31f59ae2e48908835e9a849e4f9ea22791075c0d8bad33f6b1d985dcd3ab\"" May 27 17:42:51.144373 containerd[1563]: time="2025-05-27T17:42:51.144339302Z" level=info msg="StartContainer for \"326e31f59ae2e48908835e9a849e4f9ea22791075c0d8bad33f6b1d985dcd3ab\"" May 27 17:42:51.146115 containerd[1563]: time="2025-05-27T17:42:51.146028250Z" level=info msg="connecting to shim 326e31f59ae2e48908835e9a849e4f9ea22791075c0d8bad33f6b1d985dcd3ab" address="unix:///run/containerd/s/d14ad1bad37476705a629451b56799e8a5db6e395684b813cc87750fe7d7a0b6" protocol=ttrpc version=3 May 27 17:42:51.178549 systemd[1]: Started 
cri-containerd-326e31f59ae2e48908835e9a849e4f9ea22791075c0d8bad33f6b1d985dcd3ab.scope - libcontainer container 326e31f59ae2e48908835e9a849e4f9ea22791075c0d8bad33f6b1d985dcd3ab. May 27 17:42:51.252949 containerd[1563]: time="2025-05-27T17:42:51.252834602Z" level=info msg="StartContainer for \"326e31f59ae2e48908835e9a849e4f9ea22791075c0d8bad33f6b1d985dcd3ab\" returns successfully" May 27 17:42:51.892406 kubelet[2767]: I0527 17:42:51.892235 2767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-754d8cc679-pshgw" podStartSLOduration=3.043956695 podStartE2EDuration="5.892157018s" podCreationTimestamp="2025-05-27 17:42:46 +0000 UTC" firstStartedPulling="2025-05-27 17:42:48.241906418 +0000 UTC m=+24.691370844" lastFinishedPulling="2025-05-27 17:42:51.090106738 +0000 UTC m=+27.539571167" observedRunningTime="2025-05-27 17:42:51.891260077 +0000 UTC m=+28.340724517" watchObservedRunningTime="2025-05-27 17:42:51.892157018 +0000 UTC m=+28.341621458" May 27 17:42:52.702309 kubelet[2767]: E0527 17:42:52.702115 2767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4vmcb" podUID="bae642d2-ac66-4c94-b401-a475f27bd04d" May 27 17:42:52.876874 kubelet[2767]: I0527 17:42:52.876834 2767 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 27 17:42:54.285380 containerd[1563]: time="2025-05-27T17:42:54.285324446Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:42:54.286869 containerd[1563]: time="2025-05-27T17:42:54.286799632Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.0: active requests=0, bytes read=70300568" May 27 17:42:54.288435 containerd[1563]: 
time="2025-05-27T17:42:54.288385568Z" level=info msg="ImageCreate event name:\"sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:42:54.291655 containerd[1563]: time="2025-05-27T17:42:54.291573213Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:3dd06656abdc03fbd51782d5f6fe4d70e6825a1c0c5bce2a165bbd2ff9e0f7df\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:42:54.293305 containerd[1563]: time="2025-05-27T17:42:54.292718387Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.0\" with image id \"sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:3dd06656abdc03fbd51782d5f6fe4d70e6825a1c0c5bce2a165bbd2ff9e0f7df\", size \"71793271\" in 3.202004372s" May 27 17:42:54.293305 containerd[1563]: time="2025-05-27T17:42:54.292761430Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.0\" returns image reference \"sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185\"" May 27 17:42:54.296644 containerd[1563]: time="2025-05-27T17:42:54.296610667Z" level=info msg="CreateContainer within sandbox \"cfbaf2cd85d6ba9592ac3d7fd7adf75f1c60a8d66000c455f025d70d2968b795\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 27 17:42:54.312208 containerd[1563]: time="2025-05-27T17:42:54.310749099Z" level=info msg="Container 502d289e966949daedf2125ff31d70290c3df4dd375659f8346c928b38567c69: CDI devices from CRI Config.CDIDevices: []" May 27 17:42:54.324689 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount978042891.mount: Deactivated successfully. 
May 27 17:42:54.329324 containerd[1563]: time="2025-05-27T17:42:54.329245676Z" level=info msg="CreateContainer within sandbox \"cfbaf2cd85d6ba9592ac3d7fd7adf75f1c60a8d66000c455f025d70d2968b795\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"502d289e966949daedf2125ff31d70290c3df4dd375659f8346c928b38567c69\"" May 27 17:42:54.330163 containerd[1563]: time="2025-05-27T17:42:54.330049207Z" level=info msg="StartContainer for \"502d289e966949daedf2125ff31d70290c3df4dd375659f8346c928b38567c69\"" May 27 17:42:54.332312 containerd[1563]: time="2025-05-27T17:42:54.332217967Z" level=info msg="connecting to shim 502d289e966949daedf2125ff31d70290c3df4dd375659f8346c928b38567c69" address="unix:///run/containerd/s/3d9cc141c0ea4dfbb9dac23b6d706a4f3c81c2f658ebf4f7efd1640a145c48e2" protocol=ttrpc version=3 May 27 17:42:54.365519 systemd[1]: Started cri-containerd-502d289e966949daedf2125ff31d70290c3df4dd375659f8346c928b38567c69.scope - libcontainer container 502d289e966949daedf2125ff31d70290c3df4dd375659f8346c928b38567c69. May 27 17:42:54.432501 containerd[1563]: time="2025-05-27T17:42:54.432446026Z" level=info msg="StartContainer for \"502d289e966949daedf2125ff31d70290c3df4dd375659f8346c928b38567c69\" returns successfully" May 27 17:42:54.702070 kubelet[2767]: E0527 17:42:54.702002 2767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4vmcb" podUID="bae642d2-ac66-4c94-b401-a475f27bd04d" May 27 17:42:55.412806 systemd[1]: cri-containerd-502d289e966949daedf2125ff31d70290c3df4dd375659f8346c928b38567c69.scope: Deactivated successfully. May 27 17:42:55.413927 systemd[1]: cri-containerd-502d289e966949daedf2125ff31d70290c3df4dd375659f8346c928b38567c69.scope: Consumed 646ms CPU time, 191M memory peak, 170.9M written to disk. 
May 27 17:42:55.417301 containerd[1563]: time="2025-05-27T17:42:55.417028585Z" level=info msg="received exit event container_id:\"502d289e966949daedf2125ff31d70290c3df4dd375659f8346c928b38567c69\" id:\"502d289e966949daedf2125ff31d70290c3df4dd375659f8346c928b38567c69\" pid:3493 exited_at:{seconds:1748367775 nanos:416771302}" May 27 17:42:55.417875 containerd[1563]: time="2025-05-27T17:42:55.417809457Z" level=info msg="TaskExit event in podsandbox handler container_id:\"502d289e966949daedf2125ff31d70290c3df4dd375659f8346c928b38567c69\" id:\"502d289e966949daedf2125ff31d70290c3df4dd375659f8346c928b38567c69\" pid:3493 exited_at:{seconds:1748367775 nanos:416771302}" May 27 17:42:55.452232 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-502d289e966949daedf2125ff31d70290c3df4dd375659f8346c928b38567c69-rootfs.mount: Deactivated successfully. May 27 17:42:55.476984 kubelet[2767]: I0527 17:42:55.476876 2767 kubelet_node_status.go:501] "Fast updating node status as it just became ready" May 27 17:42:55.534810 systemd[1]: Created slice kubepods-besteffort-poda0b18132_1e2b_442d_9188_edbebd10f7b5.slice - libcontainer container kubepods-besteffort-poda0b18132_1e2b_442d_9188_edbebd10f7b5.slice. May 27 17:42:55.569161 systemd[1]: Created slice kubepods-burstable-pod6221b71f_4031_4dc0_9788_8de3f6d71ea2.slice - libcontainer container kubepods-burstable-pod6221b71f_4031_4dc0_9788_8de3f6d71ea2.slice. 
May 27 17:42:55.578070 kubelet[2767]: W0527 17:42:55.578033 2767 reflector.go:569] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal' and this object May 27 17:42:55.578267 kubelet[2767]: E0527 17:42:55.578087 2767 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal' and this object" logger="UnhandledError" May 27 17:42:55.585295 kubelet[2767]: I0527 17:42:55.585231 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwfwr\" (UniqueName: \"kubernetes.io/projected/6221b71f-4031-4dc0-9788-8de3f6d71ea2-kube-api-access-rwfwr\") pod \"coredns-668d6bf9bc-5t5kd\" (UID: \"6221b71f-4031-4dc0-9788-8de3f6d71ea2\") " pod="kube-system/coredns-668d6bf9bc-5t5kd" May 27 17:42:55.586697 kubelet[2767]: I0527 17:42:55.585351 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t52xf\" (UniqueName: \"kubernetes.io/projected/a0b18132-1e2b-442d-9188-edbebd10f7b5-kube-api-access-t52xf\") pod \"calico-kube-controllers-58b97d7dbc-z4gdm\" (UID: \"a0b18132-1e2b-442d-9188-edbebd10f7b5\") " pod="calico-system/calico-kube-controllers-58b97d7dbc-z4gdm" May 27 17:42:55.586697 kubelet[2767]: I0527 17:42:55.585391 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a0b18132-1e2b-442d-9188-edbebd10f7b5-tigera-ca-bundle\") pod \"calico-kube-controllers-58b97d7dbc-z4gdm\" (UID: \"a0b18132-1e2b-442d-9188-edbebd10f7b5\") " pod="calico-system/calico-kube-controllers-58b97d7dbc-z4gdm" May 27 17:42:55.586697 kubelet[2767]: I0527 17:42:55.585433 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6221b71f-4031-4dc0-9788-8de3f6d71ea2-config-volume\") pod \"coredns-668d6bf9bc-5t5kd\" (UID: \"6221b71f-4031-4dc0-9788-8de3f6d71ea2\") " pod="kube-system/coredns-668d6bf9bc-5t5kd" May 27 17:42:55.586697 kubelet[2767]: W0527 17:42:55.586232 2767 reflector.go:569] object-"calico-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal" cannot list resource "configmaps" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal' and this object May 27 17:42:55.588743 kubelet[2767]: W0527 17:42:55.587393 2767 reflector.go:569] object-"calico-apiserver"/"calico-apiserver-certs": failed to list *v1.Secret: secrets "calico-apiserver-certs" is forbidden: User "system:node:ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal" cannot list resource "secrets" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal' and this object May 27 17:42:55.588743 kubelet[2767]: E0527 17:42:55.587437 2767 reflector.go:166] "Unhandled Error" err="object-\"calico-apiserver\"/\"calico-apiserver-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"calico-apiserver-certs\" is forbidden: User 
\"system:node:ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node 'ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal' and this object" logger="UnhandledError" May 27 17:42:55.590931 kubelet[2767]: E0527 17:42:55.590210 2767 reflector.go:166] "Unhandled Error" err="object-\"calico-apiserver\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node 'ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal' and this object" logger="UnhandledError" May 27 17:42:55.595745 systemd[1]: Created slice kubepods-burstable-podf557bed3_e1e9_40fb_854c_2987c8546de4.slice - libcontainer container kubepods-burstable-podf557bed3_e1e9_40fb_854c_2987c8546de4.slice. May 27 17:42:55.614842 systemd[1]: Created slice kubepods-besteffort-pod381e8591_eecc_4ffe_bdb4_2226af7b4c96.slice - libcontainer container kubepods-besteffort-pod381e8591_eecc_4ffe_bdb4_2226af7b4c96.slice. May 27 17:42:55.635157 systemd[1]: Created slice kubepods-besteffort-pod57105338_de16_4b1a_8b06_4fef5b6ff137.slice - libcontainer container kubepods-besteffort-pod57105338_de16_4b1a_8b06_4fef5b6ff137.slice. May 27 17:42:55.653853 systemd[1]: Created slice kubepods-besteffort-pod07e3324d_7a53_452e_9bd8_97f95fe9ca12.slice - libcontainer container kubepods-besteffort-pod07e3324d_7a53_452e_9bd8_97f95fe9ca12.slice. May 27 17:42:55.665271 systemd[1]: Created slice kubepods-besteffort-pod17a31152_9656_4e7b_b59d_6595424c4d7e.slice - libcontainer container kubepods-besteffort-pod17a31152_9656_4e7b_b59d_6595424c4d7e.slice. 
May 27 17:42:55.686436 kubelet[2767]: I0527 17:42:55.686395 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-828bq\" (UniqueName: \"kubernetes.io/projected/17a31152-9656-4e7b-b59d-6595424c4d7e-kube-api-access-828bq\") pod \"goldmane-78d55f7ddc-hnfkh\" (UID: \"17a31152-9656-4e7b-b59d-6595424c4d7e\") " pod="calico-system/goldmane-78d55f7ddc-hnfkh" May 27 17:42:55.698173 kubelet[2767]: I0527 17:42:55.686656 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f557bed3-e1e9-40fb-854c-2987c8546de4-config-volume\") pod \"coredns-668d6bf9bc-kj59m\" (UID: \"f557bed3-e1e9-40fb-854c-2987c8546de4\") " pod="kube-system/coredns-668d6bf9bc-kj59m" May 27 17:42:55.698173 kubelet[2767]: I0527 17:42:55.686693 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/07e3324d-7a53-452e-9bd8-97f95fe9ca12-whisker-ca-bundle\") pod \"whisker-74b56f8c7d-df9sd\" (UID: \"07e3324d-7a53-452e-9bd8-97f95fe9ca12\") " pod="calico-system/whisker-74b56f8c7d-df9sd" May 27 17:42:55.698173 kubelet[2767]: I0527 17:42:55.686742 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17a31152-9656-4e7b-b59d-6595424c4d7e-config\") pod \"goldmane-78d55f7ddc-hnfkh\" (UID: \"17a31152-9656-4e7b-b59d-6595424c4d7e\") " pod="calico-system/goldmane-78d55f7ddc-hnfkh" May 27 17:42:55.698173 kubelet[2767]: I0527 17:42:55.686772 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/17a31152-9656-4e7b-b59d-6595424c4d7e-goldmane-ca-bundle\") pod \"goldmane-78d55f7ddc-hnfkh\" (UID: \"17a31152-9656-4e7b-b59d-6595424c4d7e\") " 
pod="calico-system/goldmane-78d55f7ddc-hnfkh" May 27 17:42:55.698173 kubelet[2767]: I0527 17:42:55.686853 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9s68\" (UniqueName: \"kubernetes.io/projected/07e3324d-7a53-452e-9bd8-97f95fe9ca12-kube-api-access-b9s68\") pod \"whisker-74b56f8c7d-df9sd\" (UID: \"07e3324d-7a53-452e-9bd8-97f95fe9ca12\") " pod="calico-system/whisker-74b56f8c7d-df9sd" May 27 17:42:55.698627 kubelet[2767]: I0527 17:42:55.686911 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/381e8591-eecc-4ffe-bdb4-2226af7b4c96-calico-apiserver-certs\") pod \"calico-apiserver-7897c49469-5lbz5\" (UID: \"381e8591-eecc-4ffe-bdb4-2226af7b4c96\") " pod="calico-apiserver/calico-apiserver-7897c49469-5lbz5" May 27 17:42:55.698627 kubelet[2767]: I0527 17:42:55.686940 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/17a31152-9656-4e7b-b59d-6595424c4d7e-goldmane-key-pair\") pod \"goldmane-78d55f7ddc-hnfkh\" (UID: \"17a31152-9656-4e7b-b59d-6595424c4d7e\") " pod="calico-system/goldmane-78d55f7ddc-hnfkh" May 27 17:42:55.698627 kubelet[2767]: I0527 17:42:55.687041 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbz62\" (UniqueName: \"kubernetes.io/projected/f557bed3-e1e9-40fb-854c-2987c8546de4-kube-api-access-bbz62\") pod \"coredns-668d6bf9bc-kj59m\" (UID: \"f557bed3-e1e9-40fb-854c-2987c8546de4\") " pod="kube-system/coredns-668d6bf9bc-kj59m" May 27 17:42:55.698627 kubelet[2767]: I0527 17:42:55.687072 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lk2fh\" (UniqueName: 
\"kubernetes.io/projected/57105338-de16-4b1a-8b06-4fef5b6ff137-kube-api-access-lk2fh\") pod \"calico-apiserver-7897c49469-b54nx\" (UID: \"57105338-de16-4b1a-8b06-4fef5b6ff137\") " pod="calico-apiserver/calico-apiserver-7897c49469-b54nx" May 27 17:42:55.698627 kubelet[2767]: I0527 17:42:55.687102 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vr89\" (UniqueName: \"kubernetes.io/projected/381e8591-eecc-4ffe-bdb4-2226af7b4c96-kube-api-access-6vr89\") pod \"calico-apiserver-7897c49469-5lbz5\" (UID: \"381e8591-eecc-4ffe-bdb4-2226af7b4c96\") " pod="calico-apiserver/calico-apiserver-7897c49469-5lbz5" May 27 17:42:55.698905 kubelet[2767]: I0527 17:42:55.687154 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/07e3324d-7a53-452e-9bd8-97f95fe9ca12-whisker-backend-key-pair\") pod \"whisker-74b56f8c7d-df9sd\" (UID: \"07e3324d-7a53-452e-9bd8-97f95fe9ca12\") " pod="calico-system/whisker-74b56f8c7d-df9sd" May 27 17:42:55.698905 kubelet[2767]: I0527 17:42:55.687181 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/57105338-de16-4b1a-8b06-4fef5b6ff137-calico-apiserver-certs\") pod \"calico-apiserver-7897c49469-b54nx\" (UID: \"57105338-de16-4b1a-8b06-4fef5b6ff137\") " pod="calico-apiserver/calico-apiserver-7897c49469-b54nx" May 27 17:42:55.844405 containerd[1563]: time="2025-05-27T17:42:55.843970629Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-58b97d7dbc-z4gdm,Uid:a0b18132-1e2b-442d-9188-edbebd10f7b5,Namespace:calico-system,Attempt:0,}" May 27 17:42:55.961083 containerd[1563]: time="2025-05-27T17:42:55.960935342Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-74b56f8c7d-df9sd,Uid:07e3324d-7a53-452e-9bd8-97f95fe9ca12,Namespace:calico-system,Attempt:0,}" May 27 17:42:55.971619 containerd[1563]: time="2025-05-27T17:42:55.971567683Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-hnfkh,Uid:17a31152-9656-4e7b-b59d-6595424c4d7e,Namespace:calico-system,Attempt:0,}" May 27 17:42:56.482297 containerd[1563]: time="2025-05-27T17:42:56.482209716Z" level=error msg="Failed to destroy network for sandbox \"d354ed66ab24be9ce6eca5a930f3948d47abbf9ccf46da3fd92b6d6b2f80d6d7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:42:56.485560 containerd[1563]: time="2025-05-27T17:42:56.485425580Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-58b97d7dbc-z4gdm,Uid:a0b18132-1e2b-442d-9188-edbebd10f7b5,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d354ed66ab24be9ce6eca5a930f3948d47abbf9ccf46da3fd92b6d6b2f80d6d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:42:56.488006 kubelet[2767]: E0527 17:42:56.487678 2767 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d354ed66ab24be9ce6eca5a930f3948d47abbf9ccf46da3fd92b6d6b2f80d6d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:42:56.488006 kubelet[2767]: E0527 17:42:56.487783 2767 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"d354ed66ab24be9ce6eca5a930f3948d47abbf9ccf46da3fd92b6d6b2f80d6d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-58b97d7dbc-z4gdm" May 27 17:42:56.488006 kubelet[2767]: E0527 17:42:56.487820 2767 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d354ed66ab24be9ce6eca5a930f3948d47abbf9ccf46da3fd92b6d6b2f80d6d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-58b97d7dbc-z4gdm" May 27 17:42:56.488725 kubelet[2767]: E0527 17:42:56.487879 2767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-58b97d7dbc-z4gdm_calico-system(a0b18132-1e2b-442d-9188-edbebd10f7b5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-58b97d7dbc-z4gdm_calico-system(a0b18132-1e2b-442d-9188-edbebd10f7b5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d354ed66ab24be9ce6eca5a930f3948d47abbf9ccf46da3fd92b6d6b2f80d6d7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-58b97d7dbc-z4gdm" podUID="a0b18132-1e2b-442d-9188-edbebd10f7b5" May 27 17:42:56.492026 systemd[1]: run-netns-cni\x2d80a90dcd\x2d0e49\x2d357b\x2db519\x2d62bd24b00d9d.mount: Deactivated successfully. 
May 27 17:42:56.514232 containerd[1563]: time="2025-05-27T17:42:56.514032301Z" level=error msg="Failed to destroy network for sandbox \"ed4833f458f96ba2af036face77d1ee582115f6fbffb02b71f38e81e4828a2a9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:42:56.516767 containerd[1563]: time="2025-05-27T17:42:56.516681759Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-hnfkh,Uid:17a31152-9656-4e7b-b59d-6595424c4d7e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ed4833f458f96ba2af036face77d1ee582115f6fbffb02b71f38e81e4828a2a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:42:56.518481 kubelet[2767]: E0527 17:42:56.517323 2767 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ed4833f458f96ba2af036face77d1ee582115f6fbffb02b71f38e81e4828a2a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:42:56.518481 kubelet[2767]: E0527 17:42:56.517402 2767 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ed4833f458f96ba2af036face77d1ee582115f6fbffb02b71f38e81e4828a2a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-78d55f7ddc-hnfkh" May 27 17:42:56.518481 kubelet[2767]: E0527 17:42:56.517437 2767 kuberuntime_manager.go:1237] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ed4833f458f96ba2af036face77d1ee582115f6fbffb02b71f38e81e4828a2a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-78d55f7ddc-hnfkh" May 27 17:42:56.518720 kubelet[2767]: E0527 17:42:56.517505 2767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-78d55f7ddc-hnfkh_calico-system(17a31152-9656-4e7b-b59d-6595424c4d7e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-78d55f7ddc-hnfkh_calico-system(17a31152-9656-4e7b-b59d-6595424c4d7e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ed4833f458f96ba2af036face77d1ee582115f6fbffb02b71f38e81e4828a2a9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-78d55f7ddc-hnfkh" podUID="17a31152-9656-4e7b-b59d-6595424c4d7e" May 27 17:42:56.521445 systemd[1]: run-netns-cni\x2d765a88d6\x2dcf6e\x2d3fc3\x2d96c5\x2d0b4a5360d887.mount: Deactivated successfully. May 27 17:42:56.527351 containerd[1563]: time="2025-05-27T17:42:56.526770157Z" level=error msg="Failed to destroy network for sandbox \"a2985ee3c80d4f765978e9af0f2e0dda31ca970f442b3e965b0c6f493ed44351\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:42:56.534361 systemd[1]: run-netns-cni\x2d407716f8\x2d4ca7\x2dfbd1\x2decf1\x2de1f3e3c9eaa6.mount: Deactivated successfully. 
May 27 17:42:56.537156 containerd[1563]: time="2025-05-27T17:42:56.535060137Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-74b56f8c7d-df9sd,Uid:07e3324d-7a53-452e-9bd8-97f95fe9ca12,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a2985ee3c80d4f765978e9af0f2e0dda31ca970f442b3e965b0c6f493ed44351\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:42:56.542209 kubelet[2767]: E0527 17:42:56.542165 2767 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a2985ee3c80d4f765978e9af0f2e0dda31ca970f442b3e965b0c6f493ed44351\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:42:56.542441 kubelet[2767]: E0527 17:42:56.542407 2767 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a2985ee3c80d4f765978e9af0f2e0dda31ca970f442b3e965b0c6f493ed44351\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-74b56f8c7d-df9sd" May 27 17:42:56.542594 kubelet[2767]: E0527 17:42:56.542565 2767 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a2985ee3c80d4f765978e9af0f2e0dda31ca970f442b3e965b0c6f493ed44351\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/whisker-74b56f8c7d-df9sd" May 27 17:42:56.542750 kubelet[2767]: E0527 17:42:56.542717 2767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-74b56f8c7d-df9sd_calico-system(07e3324d-7a53-452e-9bd8-97f95fe9ca12)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-74b56f8c7d-df9sd_calico-system(07e3324d-7a53-452e-9bd8-97f95fe9ca12)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a2985ee3c80d4f765978e9af0f2e0dda31ca970f442b3e965b0c6f493ed44351\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-74b56f8c7d-df9sd" podUID="07e3324d-7a53-452e-9bd8-97f95fe9ca12" May 27 17:42:56.689303 kubelet[2767]: E0527 17:42:56.689244 2767 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition May 27 17:42:56.689506 kubelet[2767]: E0527 17:42:56.689390 2767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6221b71f-4031-4dc0-9788-8de3f6d71ea2-config-volume podName:6221b71f-4031-4dc0-9788-8de3f6d71ea2 nodeName:}" failed. No retries permitted until 2025-05-27 17:42:57.189363486 +0000 UTC m=+33.638827922 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/6221b71f-4031-4dc0-9788-8de3f6d71ea2-config-volume") pod "coredns-668d6bf9bc-5t5kd" (UID: "6221b71f-4031-4dc0-9788-8de3f6d71ea2") : failed to sync configmap cache: timed out waiting for the condition May 27 17:42:56.710552 systemd[1]: Created slice kubepods-besteffort-podbae642d2_ac66_4c94_b401_a475f27bd04d.slice - libcontainer container kubepods-besteffort-podbae642d2_ac66_4c94_b401_a475f27bd04d.slice. 
May 27 17:42:56.714102 containerd[1563]: time="2025-05-27T17:42:56.714012721Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4vmcb,Uid:bae642d2-ac66-4c94-b401-a475f27bd04d,Namespace:calico-system,Attempt:0,}" May 27 17:42:56.790140 kubelet[2767]: E0527 17:42:56.789972 2767 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition May 27 17:42:56.790595 kubelet[2767]: E0527 17:42:56.790448 2767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f557bed3-e1e9-40fb-854c-2987c8546de4-config-volume podName:f557bed3-e1e9-40fb-854c-2987c8546de4 nodeName:}" failed. No retries permitted until 2025-05-27 17:42:57.290393955 +0000 UTC m=+33.739858384 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/f557bed3-e1e9-40fb-854c-2987c8546de4-config-volume") pod "coredns-668d6bf9bc-kj59m" (UID: "f557bed3-e1e9-40fb-854c-2987c8546de4") : failed to sync configmap cache: timed out waiting for the condition May 27 17:42:56.794716 containerd[1563]: time="2025-05-27T17:42:56.794663258Z" level=error msg="Failed to destroy network for sandbox \"11aa8522f1acdfba6151c8ece24cc3bb6d3a66390f0231950801173e937bd6c2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:42:56.796979 containerd[1563]: time="2025-05-27T17:42:56.796785503Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4vmcb,Uid:bae642d2-ac66-4c94-b401-a475f27bd04d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"11aa8522f1acdfba6151c8ece24cc3bb6d3a66390f0231950801173e937bd6c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:42:56.797272 kubelet[2767]: E0527 17:42:56.797227 2767 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"11aa8522f1acdfba6151c8ece24cc3bb6d3a66390f0231950801173e937bd6c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:42:56.797532 kubelet[2767]: E0527 17:42:56.797484 2767 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"11aa8522f1acdfba6151c8ece24cc3bb6d3a66390f0231950801173e937bd6c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4vmcb" May 27 17:42:56.797640 kubelet[2767]: E0527 17:42:56.797528 2767 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"11aa8522f1acdfba6151c8ece24cc3bb6d3a66390f0231950801173e937bd6c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4vmcb" May 27 17:42:56.797640 kubelet[2767]: E0527 17:42:56.797594 2767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-4vmcb_calico-system(bae642d2-ac66-4c94-b401-a475f27bd04d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-4vmcb_calico-system(bae642d2-ac66-4c94-b401-a475f27bd04d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"11aa8522f1acdfba6151c8ece24cc3bb6d3a66390f0231950801173e937bd6c2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4vmcb" podUID="bae642d2-ac66-4c94-b401-a475f27bd04d" May 27 17:42:56.829300 containerd[1563]: time="2025-05-27T17:42:56.829241368Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7897c49469-5lbz5,Uid:381e8591-eecc-4ffe-bdb4-2226af7b4c96,Namespace:calico-apiserver,Attempt:0,}" May 27 17:42:56.846388 containerd[1563]: time="2025-05-27T17:42:56.846343122Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7897c49469-b54nx,Uid:57105338-de16-4b1a-8b06-4fef5b6ff137,Namespace:calico-apiserver,Attempt:0,}" May 27 17:42:56.912330 containerd[1563]: time="2025-05-27T17:42:56.911965859Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.0\"" May 27 17:42:56.956129 containerd[1563]: time="2025-05-27T17:42:56.956057862Z" level=error msg="Failed to destroy network for sandbox \"d6a978f9ff28cfbae641706018b7971cd27801407b12d66b73b7d05d684f7bb1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:42:56.961767 containerd[1563]: time="2025-05-27T17:42:56.961497126Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7897c49469-b54nx,Uid:57105338-de16-4b1a-8b06-4fef5b6ff137,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d6a978f9ff28cfbae641706018b7971cd27801407b12d66b73b7d05d684f7bb1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 
17:42:56.963474 kubelet[2767]: E0527 17:42:56.963423 2767 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d6a978f9ff28cfbae641706018b7971cd27801407b12d66b73b7d05d684f7bb1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:42:56.964464 kubelet[2767]: E0527 17:42:56.963702 2767 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d6a978f9ff28cfbae641706018b7971cd27801407b12d66b73b7d05d684f7bb1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7897c49469-b54nx" May 27 17:42:56.964464 kubelet[2767]: E0527 17:42:56.963750 2767 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d6a978f9ff28cfbae641706018b7971cd27801407b12d66b73b7d05d684f7bb1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7897c49469-b54nx" May 27 17:42:56.964464 kubelet[2767]: E0527 17:42:56.963812 2767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7897c49469-b54nx_calico-apiserver(57105338-de16-4b1a-8b06-4fef5b6ff137)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7897c49469-b54nx_calico-apiserver(57105338-de16-4b1a-8b06-4fef5b6ff137)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d6a978f9ff28cfbae641706018b7971cd27801407b12d66b73b7d05d684f7bb1\\\": 
plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7897c49469-b54nx" podUID="57105338-de16-4b1a-8b06-4fef5b6ff137" May 27 17:42:56.969673 containerd[1563]: time="2025-05-27T17:42:56.969611491Z" level=error msg="Failed to destroy network for sandbox \"63ddd1c115f46cdb6a4443ca98f7abe1414d6531b39cf6a5bb2edcdb19828fdc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:42:56.971475 containerd[1563]: time="2025-05-27T17:42:56.971412468Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7897c49469-5lbz5,Uid:381e8591-eecc-4ffe-bdb4-2226af7b4c96,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"63ddd1c115f46cdb6a4443ca98f7abe1414d6531b39cf6a5bb2edcdb19828fdc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:42:56.971773 kubelet[2767]: E0527 17:42:56.971735 2767 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"63ddd1c115f46cdb6a4443ca98f7abe1414d6531b39cf6a5bb2edcdb19828fdc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:42:56.971909 kubelet[2767]: E0527 17:42:56.971800 2767 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"63ddd1c115f46cdb6a4443ca98f7abe1414d6531b39cf6a5bb2edcdb19828fdc\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7897c49469-5lbz5" May 27 17:42:56.971909 kubelet[2767]: E0527 17:42:56.971842 2767 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"63ddd1c115f46cdb6a4443ca98f7abe1414d6531b39cf6a5bb2edcdb19828fdc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7897c49469-5lbz5" May 27 17:42:56.972145 kubelet[2767]: E0527 17:42:56.971908 2767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7897c49469-5lbz5_calico-apiserver(381e8591-eecc-4ffe-bdb4-2226af7b4c96)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7897c49469-5lbz5_calico-apiserver(381e8591-eecc-4ffe-bdb4-2226af7b4c96)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"63ddd1c115f46cdb6a4443ca98f7abe1414d6531b39cf6a5bb2edcdb19828fdc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7897c49469-5lbz5" podUID="381e8591-eecc-4ffe-bdb4-2226af7b4c96" May 27 17:42:57.376938 containerd[1563]: time="2025-05-27T17:42:57.376874902Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-5t5kd,Uid:6221b71f-4031-4dc0-9788-8de3f6d71ea2,Namespace:kube-system,Attempt:0,}" May 27 17:42:57.406488 containerd[1563]: time="2025-05-27T17:42:57.406161323Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-kj59m,Uid:f557bed3-e1e9-40fb-854c-2987c8546de4,Namespace:kube-system,Attempt:0,}" May 27 17:42:57.476942 containerd[1563]: time="2025-05-27T17:42:57.476880879Z" level=error msg="Failed to destroy network for sandbox \"5eb3918a10adab9c9f70b9dc0387a0fcac9bbe40536b0bfa27a1aeff7a2c1bfc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:42:57.480112 containerd[1563]: time="2025-05-27T17:42:57.479974956Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-5t5kd,Uid:6221b71f-4031-4dc0-9788-8de3f6d71ea2,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5eb3918a10adab9c9f70b9dc0387a0fcac9bbe40536b0bfa27a1aeff7a2c1bfc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:42:57.480779 kubelet[2767]: E0527 17:42:57.480718 2767 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5eb3918a10adab9c9f70b9dc0387a0fcac9bbe40536b0bfa27a1aeff7a2c1bfc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:42:57.481190 kubelet[2767]: E0527 17:42:57.481048 2767 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5eb3918a10adab9c9f70b9dc0387a0fcac9bbe40536b0bfa27a1aeff7a2c1bfc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-668d6bf9bc-5t5kd" May 27 17:42:57.481190 kubelet[2767]: E0527 17:42:57.481127 2767 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5eb3918a10adab9c9f70b9dc0387a0fcac9bbe40536b0bfa27a1aeff7a2c1bfc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-5t5kd" May 27 17:42:57.481830 kubelet[2767]: E0527 17:42:57.481457 2767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-5t5kd_kube-system(6221b71f-4031-4dc0-9788-8de3f6d71ea2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-5t5kd_kube-system(6221b71f-4031-4dc0-9788-8de3f6d71ea2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5eb3918a10adab9c9f70b9dc0387a0fcac9bbe40536b0bfa27a1aeff7a2c1bfc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-5t5kd" podUID="6221b71f-4031-4dc0-9788-8de3f6d71ea2" May 27 17:42:57.484795 systemd[1]: run-netns-cni\x2d5ecd5cb9\x2de152\x2d057b\x2d9012\x2d5e78e68692e4.mount: Deactivated successfully. 
May 27 17:42:57.526824 containerd[1563]: time="2025-05-27T17:42:57.526764375Z" level=error msg="Failed to destroy network for sandbox \"f2794a1737e9cabfe77dadff152d2aed7ccd060a7ef372390c707ac85e477459\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:42:57.532594 containerd[1563]: time="2025-05-27T17:42:57.532476665Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-kj59m,Uid:f557bed3-e1e9-40fb-854c-2987c8546de4,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f2794a1737e9cabfe77dadff152d2aed7ccd060a7ef372390c707ac85e477459\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:42:57.533392 systemd[1]: run-netns-cni\x2ddb1c71b3\x2de0bd\x2d3cdc\x2d0ba3\x2d4817e33b4dca.mount: Deactivated successfully. 
May 27 17:42:57.534357 kubelet[2767]: E0527 17:42:57.533932 2767 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f2794a1737e9cabfe77dadff152d2aed7ccd060a7ef372390c707ac85e477459\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:42:57.534357 kubelet[2767]: E0527 17:42:57.534012 2767 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f2794a1737e9cabfe77dadff152d2aed7ccd060a7ef372390c707ac85e477459\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-kj59m" May 27 17:42:57.534357 kubelet[2767]: E0527 17:42:57.534048 2767 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f2794a1737e9cabfe77dadff152d2aed7ccd060a7ef372390c707ac85e477459\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-kj59m" May 27 17:42:57.535709 kubelet[2767]: E0527 17:42:57.534113 2767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-kj59m_kube-system(f557bed3-e1e9-40fb-854c-2987c8546de4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-kj59m_kube-system(f557bed3-e1e9-40fb-854c-2987c8546de4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f2794a1737e9cabfe77dadff152d2aed7ccd060a7ef372390c707ac85e477459\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-kj59m" podUID="f557bed3-e1e9-40fb-854c-2987c8546de4" May 27 17:43:03.311478 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3012790893.mount: Deactivated successfully. May 27 17:43:03.351228 containerd[1563]: time="2025-05-27T17:43:03.351161735Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:43:03.352486 containerd[1563]: time="2025-05-27T17:43:03.352430967Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.0: active requests=0, bytes read=156396372" May 27 17:43:03.353785 containerd[1563]: time="2025-05-27T17:43:03.353716485Z" level=info msg="ImageCreate event name:\"sha256:d12dae9bc0999225efe30fd5618bcf2195709d54ed2840234f5006aab5f7d721\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:43:03.356299 containerd[1563]: time="2025-05-27T17:43:03.356214827Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:7cb61ea47ca0a8e6d0526a42da4f1e399b37ccd13339d0776d272465cb7ee012\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:43:03.357204 containerd[1563]: time="2025-05-27T17:43:03.357036709Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.0\" with image id \"sha256:d12dae9bc0999225efe30fd5618bcf2195709d54ed2840234f5006aab5f7d721\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:7cb61ea47ca0a8e6d0526a42da4f1e399b37ccd13339d0776d272465cb7ee012\", size \"156396234\" in 6.445017494s" May 27 17:43:03.357204 containerd[1563]: time="2025-05-27T17:43:03.357081964Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.0\" returns image reference \"sha256:d12dae9bc0999225efe30fd5618bcf2195709d54ed2840234f5006aab5f7d721\"" May 27 
17:43:03.379708 containerd[1563]: time="2025-05-27T17:43:03.377912334Z" level=info msg="CreateContainer within sandbox \"cfbaf2cd85d6ba9592ac3d7fd7adf75f1c60a8d66000c455f025d70d2968b795\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 27 17:43:03.391521 containerd[1563]: time="2025-05-27T17:43:03.391465722Z" level=info msg="Container bb8861621cfbdd6d0bb1e34f98427dbf66a1bf73e1f6d6fd375e140dc07d81c7: CDI devices from CRI Config.CDIDevices: []" May 27 17:43:03.407292 containerd[1563]: time="2025-05-27T17:43:03.407224265Z" level=info msg="CreateContainer within sandbox \"cfbaf2cd85d6ba9592ac3d7fd7adf75f1c60a8d66000c455f025d70d2968b795\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"bb8861621cfbdd6d0bb1e34f98427dbf66a1bf73e1f6d6fd375e140dc07d81c7\"" May 27 17:43:03.408228 containerd[1563]: time="2025-05-27T17:43:03.408195873Z" level=info msg="StartContainer for \"bb8861621cfbdd6d0bb1e34f98427dbf66a1bf73e1f6d6fd375e140dc07d81c7\"" May 27 17:43:03.410747 containerd[1563]: time="2025-05-27T17:43:03.410696040Z" level=info msg="connecting to shim bb8861621cfbdd6d0bb1e34f98427dbf66a1bf73e1f6d6fd375e140dc07d81c7" address="unix:///run/containerd/s/3d9cc141c0ea4dfbb9dac23b6d706a4f3c81c2f658ebf4f7efd1640a145c48e2" protocol=ttrpc version=3 May 27 17:43:03.438505 systemd[1]: Started cri-containerd-bb8861621cfbdd6d0bb1e34f98427dbf66a1bf73e1f6d6fd375e140dc07d81c7.scope - libcontainer container bb8861621cfbdd6d0bb1e34f98427dbf66a1bf73e1f6d6fd375e140dc07d81c7. May 27 17:43:03.502596 containerd[1563]: time="2025-05-27T17:43:03.502434190Z" level=info msg="StartContainer for \"bb8861621cfbdd6d0bb1e34f98427dbf66a1bf73e1f6d6fd375e140dc07d81c7\" returns successfully" May 27 17:43:03.627787 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 27 17:43:03.627966 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
May 27 17:43:03.853529 kubelet[2767]: I0527 17:43:03.853475 2767 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/07e3324d-7a53-452e-9bd8-97f95fe9ca12-whisker-ca-bundle\") pod \"07e3324d-7a53-452e-9bd8-97f95fe9ca12\" (UID: \"07e3324d-7a53-452e-9bd8-97f95fe9ca12\") " May 27 17:43:03.854097 kubelet[2767]: I0527 17:43:03.853568 2767 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b9s68\" (UniqueName: \"kubernetes.io/projected/07e3324d-7a53-452e-9bd8-97f95fe9ca12-kube-api-access-b9s68\") pod \"07e3324d-7a53-452e-9bd8-97f95fe9ca12\" (UID: \"07e3324d-7a53-452e-9bd8-97f95fe9ca12\") " May 27 17:43:03.854097 kubelet[2767]: I0527 17:43:03.853607 2767 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/07e3324d-7a53-452e-9bd8-97f95fe9ca12-whisker-backend-key-pair\") pod \"07e3324d-7a53-452e-9bd8-97f95fe9ca12\" (UID: \"07e3324d-7a53-452e-9bd8-97f95fe9ca12\") " May 27 17:43:03.854744 kubelet[2767]: I0527 17:43:03.854686 2767 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07e3324d-7a53-452e-9bd8-97f95fe9ca12-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "07e3324d-7a53-452e-9bd8-97f95fe9ca12" (UID: "07e3324d-7a53-452e-9bd8-97f95fe9ca12"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 27 17:43:03.859568 kubelet[2767]: I0527 17:43:03.859526 2767 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07e3324d-7a53-452e-9bd8-97f95fe9ca12-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "07e3324d-7a53-452e-9bd8-97f95fe9ca12" (UID: "07e3324d-7a53-452e-9bd8-97f95fe9ca12"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" May 27 17:43:03.860455 kubelet[2767]: I0527 17:43:03.860396 2767 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07e3324d-7a53-452e-9bd8-97f95fe9ca12-kube-api-access-b9s68" (OuterVolumeSpecName: "kube-api-access-b9s68") pod "07e3324d-7a53-452e-9bd8-97f95fe9ca12" (UID: "07e3324d-7a53-452e-9bd8-97f95fe9ca12"). InnerVolumeSpecName "kube-api-access-b9s68". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 27 17:43:03.948229 systemd[1]: Removed slice kubepods-besteffort-pod07e3324d_7a53_452e_9bd8_97f95fe9ca12.slice - libcontainer container kubepods-besteffort-pod07e3324d_7a53_452e_9bd8_97f95fe9ca12.slice. May 27 17:43:03.954697 kubelet[2767]: I0527 17:43:03.954656 2767 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/07e3324d-7a53-452e-9bd8-97f95fe9ca12-whisker-ca-bundle\") on node \"ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal\" DevicePath \"\"" May 27 17:43:03.954697 kubelet[2767]: I0527 17:43:03.954694 2767 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-b9s68\" (UniqueName: \"kubernetes.io/projected/07e3324d-7a53-452e-9bd8-97f95fe9ca12-kube-api-access-b9s68\") on node \"ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal\" DevicePath \"\"" May 27 17:43:03.954915 kubelet[2767]: I0527 17:43:03.954719 2767 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/07e3324d-7a53-452e-9bd8-97f95fe9ca12-whisker-backend-key-pair\") on node \"ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal\" DevicePath \"\"" May 27 17:43:03.994468 kubelet[2767]: I0527 17:43:03.994356 2767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-57clp" podStartSLOduration=1.665652504 podStartE2EDuration="17.994333244s" podCreationTimestamp="2025-05-27 17:42:46 
+0000 UTC" firstStartedPulling="2025-05-27 17:42:47.029486067 +0000 UTC m=+23.478950497" lastFinishedPulling="2025-05-27 17:43:03.358166798 +0000 UTC m=+39.807631237" observedRunningTime="2025-05-27 17:43:03.967107558 +0000 UTC m=+40.416571999" watchObservedRunningTime="2025-05-27 17:43:03.994333244 +0000 UTC m=+40.443797683" May 27 17:43:04.076031 systemd[1]: Created slice kubepods-besteffort-podc331eeb0_0298_4bf1_b9ac_6574f2928103.slice - libcontainer container kubepods-besteffort-podc331eeb0_0298_4bf1_b9ac_6574f2928103.slice. May 27 17:43:04.157096 kubelet[2767]: I0527 17:43:04.157015 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vzcd\" (UniqueName: \"kubernetes.io/projected/c331eeb0-0298-4bf1-b9ac-6574f2928103-kube-api-access-2vzcd\") pod \"whisker-775674897b-2d69w\" (UID: \"c331eeb0-0298-4bf1-b9ac-6574f2928103\") " pod="calico-system/whisker-775674897b-2d69w" May 27 17:43:04.157096 kubelet[2767]: I0527 17:43:04.157096 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c331eeb0-0298-4bf1-b9ac-6574f2928103-whisker-backend-key-pair\") pod \"whisker-775674897b-2d69w\" (UID: \"c331eeb0-0298-4bf1-b9ac-6574f2928103\") " pod="calico-system/whisker-775674897b-2d69w" May 27 17:43:04.157482 kubelet[2767]: I0527 17:43:04.157134 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c331eeb0-0298-4bf1-b9ac-6574f2928103-whisker-ca-bundle\") pod \"whisker-775674897b-2d69w\" (UID: \"c331eeb0-0298-4bf1-b9ac-6574f2928103\") " pod="calico-system/whisker-775674897b-2d69w" May 27 17:43:04.319236 systemd[1]: var-lib-kubelet-pods-07e3324d\x2d7a53\x2d452e\x2d9bd8\x2d97f95fe9ca12-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2db9s68.mount: Deactivated successfully. 
May 27 17:43:04.319412 systemd[1]: var-lib-kubelet-pods-07e3324d\x2d7a53\x2d452e\x2d9bd8\x2d97f95fe9ca12-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. May 27 17:43:04.386343 containerd[1563]: time="2025-05-27T17:43:04.385922456Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-775674897b-2d69w,Uid:c331eeb0-0298-4bf1-b9ac-6574f2928103,Namespace:calico-system,Attempt:0,}" May 27 17:43:04.525861 systemd-networkd[1436]: caliae188c70562: Link UP May 27 17:43:04.526688 systemd-networkd[1436]: caliae188c70562: Gained carrier May 27 17:43:04.552041 containerd[1563]: 2025-05-27 17:43:04.421 [INFO][3819] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 27 17:43:04.552041 containerd[1563]: 2025-05-27 17:43:04.434 [INFO][3819] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344--0--0--0d28c620010305ab17c8.c.flatcar--212911.internal-k8s-whisker--775674897b--2d69w-eth0 whisker-775674897b- calico-system c331eeb0-0298-4bf1-b9ac-6574f2928103 898 0 2025-05-27 17:43:04 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:775674897b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal whisker-775674897b-2d69w eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] caliae188c70562 [] [] }} ContainerID="850cb1bba3757ad18ee307cc17e96005bc97269afb947f319fb641bdf3a22628" Namespace="calico-system" Pod="whisker-775674897b-2d69w" WorkloadEndpoint="ci--4344--0--0--0d28c620010305ab17c8.c.flatcar--212911.internal-k8s-whisker--775674897b--2d69w-" May 27 17:43:04.552041 containerd[1563]: 2025-05-27 17:43:04.434 [INFO][3819] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="850cb1bba3757ad18ee307cc17e96005bc97269afb947f319fb641bdf3a22628" 
Namespace="calico-system" Pod="whisker-775674897b-2d69w" WorkloadEndpoint="ci--4344--0--0--0d28c620010305ab17c8.c.flatcar--212911.internal-k8s-whisker--775674897b--2d69w-eth0" May 27 17:43:04.552041 containerd[1563]: 2025-05-27 17:43:04.471 [INFO][3830] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="850cb1bba3757ad18ee307cc17e96005bc97269afb947f319fb641bdf3a22628" HandleID="k8s-pod-network.850cb1bba3757ad18ee307cc17e96005bc97269afb947f319fb641bdf3a22628" Workload="ci--4344--0--0--0d28c620010305ab17c8.c.flatcar--212911.internal-k8s-whisker--775674897b--2d69w-eth0" May 27 17:43:04.552517 containerd[1563]: 2025-05-27 17:43:04.472 [INFO][3830] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="850cb1bba3757ad18ee307cc17e96005bc97269afb947f319fb641bdf3a22628" HandleID="k8s-pod-network.850cb1bba3757ad18ee307cc17e96005bc97269afb947f319fb641bdf3a22628" Workload="ci--4344--0--0--0d28c620010305ab17c8.c.flatcar--212911.internal-k8s-whisker--775674897b--2d69w-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d9240), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal", "pod":"whisker-775674897b-2d69w", "timestamp":"2025-05-27 17:43:04.471942459 +0000 UTC"}, Hostname:"ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 17:43:04.552517 containerd[1563]: 2025-05-27 17:43:04.472 [INFO][3830] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 17:43:04.552517 containerd[1563]: 2025-05-27 17:43:04.472 [INFO][3830] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 27 17:43:04.552517 containerd[1563]: 2025-05-27 17:43:04.472 [INFO][3830] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal' May 27 17:43:04.552517 containerd[1563]: 2025-05-27 17:43:04.481 [INFO][3830] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.850cb1bba3757ad18ee307cc17e96005bc97269afb947f319fb641bdf3a22628" host="ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal" May 27 17:43:04.552517 containerd[1563]: 2025-05-27 17:43:04.486 [INFO][3830] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal" May 27 17:43:04.552517 containerd[1563]: 2025-05-27 17:43:04.493 [INFO][3830] ipam/ipam.go 511: Trying affinity for 192.168.28.64/26 host="ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal" May 27 17:43:04.552517 containerd[1563]: 2025-05-27 17:43:04.495 [INFO][3830] ipam/ipam.go 158: Attempting to load block cidr=192.168.28.64/26 host="ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal" May 27 17:43:04.552925 containerd[1563]: 2025-05-27 17:43:04.498 [INFO][3830] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.28.64/26 host="ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal" May 27 17:43:04.552925 containerd[1563]: 2025-05-27 17:43:04.498 [INFO][3830] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.28.64/26 handle="k8s-pod-network.850cb1bba3757ad18ee307cc17e96005bc97269afb947f319fb641bdf3a22628" host="ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal" May 27 17:43:04.552925 containerd[1563]: 2025-05-27 17:43:04.499 [INFO][3830] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.850cb1bba3757ad18ee307cc17e96005bc97269afb947f319fb641bdf3a22628 May 27 17:43:04.552925 containerd[1563]: 2025-05-27 17:43:04.505 [INFO][3830] ipam/ipam.go 1243: Writing block in order to claim IPs 
block=192.168.28.64/26 handle="k8s-pod-network.850cb1bba3757ad18ee307cc17e96005bc97269afb947f319fb641bdf3a22628" host="ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal" May 27 17:43:04.552925 containerd[1563]: 2025-05-27 17:43:04.511 [INFO][3830] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.28.65/26] block=192.168.28.64/26 handle="k8s-pod-network.850cb1bba3757ad18ee307cc17e96005bc97269afb947f319fb641bdf3a22628" host="ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal" May 27 17:43:04.552925 containerd[1563]: 2025-05-27 17:43:04.511 [INFO][3830] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.28.65/26] handle="k8s-pod-network.850cb1bba3757ad18ee307cc17e96005bc97269afb947f319fb641bdf3a22628" host="ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal" May 27 17:43:04.552925 containerd[1563]: 2025-05-27 17:43:04.511 [INFO][3830] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 27 17:43:04.552925 containerd[1563]: 2025-05-27 17:43:04.511 [INFO][3830] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.28.65/26] IPv6=[] ContainerID="850cb1bba3757ad18ee307cc17e96005bc97269afb947f319fb641bdf3a22628" HandleID="k8s-pod-network.850cb1bba3757ad18ee307cc17e96005bc97269afb947f319fb641bdf3a22628" Workload="ci--4344--0--0--0d28c620010305ab17c8.c.flatcar--212911.internal-k8s-whisker--775674897b--2d69w-eth0" May 27 17:43:04.553414 containerd[1563]: 2025-05-27 17:43:04.515 [INFO][3819] cni-plugin/k8s.go 418: Populated endpoint ContainerID="850cb1bba3757ad18ee307cc17e96005bc97269afb947f319fb641bdf3a22628" Namespace="calico-system" Pod="whisker-775674897b-2d69w" WorkloadEndpoint="ci--4344--0--0--0d28c620010305ab17c8.c.flatcar--212911.internal-k8s-whisker--775674897b--2d69w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ci--4344--0--0--0d28c620010305ab17c8.c.flatcar--212911.internal-k8s-whisker--775674897b--2d69w-eth0", GenerateName:"whisker-775674897b-", Namespace:"calico-system", SelfLink:"", UID:"c331eeb0-0298-4bf1-b9ac-6574f2928103", ResourceVersion:"898", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 17, 43, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"775674897b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal", ContainerID:"", Pod:"whisker-775674897b-2d69w", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.28.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"caliae188c70562", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 17:43:04.553610 containerd[1563]: 2025-05-27 17:43:04.515 [INFO][3819] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.28.65/32] ContainerID="850cb1bba3757ad18ee307cc17e96005bc97269afb947f319fb641bdf3a22628" Namespace="calico-system" Pod="whisker-775674897b-2d69w" WorkloadEndpoint="ci--4344--0--0--0d28c620010305ab17c8.c.flatcar--212911.internal-k8s-whisker--775674897b--2d69w-eth0" May 27 17:43:04.553610 containerd[1563]: 2025-05-27 17:43:04.515 [INFO][3819] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliae188c70562 
ContainerID="850cb1bba3757ad18ee307cc17e96005bc97269afb947f319fb641bdf3a22628" Namespace="calico-system" Pod="whisker-775674897b-2d69w" WorkloadEndpoint="ci--4344--0--0--0d28c620010305ab17c8.c.flatcar--212911.internal-k8s-whisker--775674897b--2d69w-eth0" May 27 17:43:04.553610 containerd[1563]: 2025-05-27 17:43:04.526 [INFO][3819] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="850cb1bba3757ad18ee307cc17e96005bc97269afb947f319fb641bdf3a22628" Namespace="calico-system" Pod="whisker-775674897b-2d69w" WorkloadEndpoint="ci--4344--0--0--0d28c620010305ab17c8.c.flatcar--212911.internal-k8s-whisker--775674897b--2d69w-eth0" May 27 17:43:04.553771 containerd[1563]: 2025-05-27 17:43:04.528 [INFO][3819] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="850cb1bba3757ad18ee307cc17e96005bc97269afb947f319fb641bdf3a22628" Namespace="calico-system" Pod="whisker-775674897b-2d69w" WorkloadEndpoint="ci--4344--0--0--0d28c620010305ab17c8.c.flatcar--212911.internal-k8s-whisker--775674897b--2d69w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344--0--0--0d28c620010305ab17c8.c.flatcar--212911.internal-k8s-whisker--775674897b--2d69w-eth0", GenerateName:"whisker-775674897b-", Namespace:"calico-system", SelfLink:"", UID:"c331eeb0-0298-4bf1-b9ac-6574f2928103", ResourceVersion:"898", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 17, 43, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"775674897b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), 
ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal", ContainerID:"850cb1bba3757ad18ee307cc17e96005bc97269afb947f319fb641bdf3a22628", Pod:"whisker-775674897b-2d69w", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.28.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"caliae188c70562", MAC:"fe:ee:7b:56:ea:5e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 17:43:04.553890 containerd[1563]: 2025-05-27 17:43:04.549 [INFO][3819] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="850cb1bba3757ad18ee307cc17e96005bc97269afb947f319fb641bdf3a22628" Namespace="calico-system" Pod="whisker-775674897b-2d69w" WorkloadEndpoint="ci--4344--0--0--0d28c620010305ab17c8.c.flatcar--212911.internal-k8s-whisker--775674897b--2d69w-eth0" May 27 17:43:04.587573 containerd[1563]: time="2025-05-27T17:43:04.587514613Z" level=info msg="connecting to shim 850cb1bba3757ad18ee307cc17e96005bc97269afb947f319fb641bdf3a22628" address="unix:///run/containerd/s/9f0bcb899fff02e84ab4c42cf5c08bcb0eda6335f399e90cfd1a10bce9f799e9" namespace=k8s.io protocol=ttrpc version=3 May 27 17:43:04.630503 systemd[1]: Started cri-containerd-850cb1bba3757ad18ee307cc17e96005bc97269afb947f319fb641bdf3a22628.scope - libcontainer container 850cb1bba3757ad18ee307cc17e96005bc97269afb947f319fb641bdf3a22628. 
May 27 17:43:04.702507 containerd[1563]: time="2025-05-27T17:43:04.702444282Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-775674897b-2d69w,Uid:c331eeb0-0298-4bf1-b9ac-6574f2928103,Namespace:calico-system,Attempt:0,} returns sandbox id \"850cb1bba3757ad18ee307cc17e96005bc97269afb947f319fb641bdf3a22628\"" May 27 17:43:04.706654 containerd[1563]: time="2025-05-27T17:43:04.706606497Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 27 17:43:04.886666 containerd[1563]: time="2025-05-27T17:43:04.886501483Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 17:43:04.888514 containerd[1563]: time="2025-05-27T17:43:04.888153837Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 27 17:43:04.888514 containerd[1563]: time="2025-05-27T17:43:04.888425074Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 27 17:43:04.889894 kubelet[2767]: E0527 17:43:04.888759 2767 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 27 17:43:04.889894 kubelet[2767]: E0527 17:43:04.888900 2767 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 27 17:43:04.891215 kubelet[2767]: E0527 17:43:04.889420 2767 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:a7966d063d7546a698b9174a0808bfab,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2vzcd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault
,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-775674897b-2d69w_calico-system(c331eeb0-0298-4bf1-b9ac-6574f2928103): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 17:43:04.892749 containerd[1563]: time="2025-05-27T17:43:04.892712615Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 27 17:43:05.014233 containerd[1563]: time="2025-05-27T17:43:05.014151245Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 17:43:05.015953 containerd[1563]: time="2025-05-27T17:43:05.015893955Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 27 17:43:05.016307 containerd[1563]: time="2025-05-27T17:43:05.015912448Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 
27 17:43:05.016377 kubelet[2767]: E0527 17:43:05.016231 2767 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 27 17:43:05.016377 kubelet[2767]: E0527 17:43:05.016325 2767 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 27 17:43:05.016607 kubelet[2767]: E0527 17:43:05.016493 2767 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2vzcd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-775674897b-2d69w_calico-system(c331eeb0-0298-4bf1-b9ac-6574f2928103): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": 
failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 17:43:05.018092 kubelet[2767]: E0527 17:43:05.018008 2767 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-775674897b-2d69w" podUID="c331eeb0-0298-4bf1-b9ac-6574f2928103" May 27 17:43:05.705706 kubelet[2767]: I0527 17:43:05.705638 2767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07e3324d-7a53-452e-9bd8-97f95fe9ca12" path="/var/lib/kubelet/pods/07e3324d-7a53-452e-9bd8-97f95fe9ca12/volumes" May 27 17:43:05.944300 kubelet[2767]: E0527 17:43:05.944113 2767 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-775674897b-2d69w" podUID="c331eeb0-0298-4bf1-b9ac-6574f2928103" May 27 17:43:06.423517 kubelet[2767]: I0527 17:43:06.423467 2767 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 27 17:43:06.437609 systemd-networkd[1436]: caliae188c70562: Gained IPv6LL May 27 17:43:06.524615 containerd[1563]: time="2025-05-27T17:43:06.524556087Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bb8861621cfbdd6d0bb1e34f98427dbf66a1bf73e1f6d6fd375e140dc07d81c7\" id:\"d9b372b1efff9516ca15913c214d585a290bab722efc6a3f516f61f956219eaa\" pid:3996 exit_status:1 exited_at:{seconds:1748367786 nanos:523849987}" May 27 17:43:06.756947 containerd[1563]: time="2025-05-27T17:43:06.756795267Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bb8861621cfbdd6d0bb1e34f98427dbf66a1bf73e1f6d6fd375e140dc07d81c7\" id:\"0445b0d39c45112b7e9611bb4ba92d9f2b0364c7bf9d0509ed4440e7b6cca37a\" pid:4028 exit_status:1 exited_at:{seconds:1748367786 nanos:755983789}" May 27 17:43:07.704302 containerd[1563]: time="2025-05-27T17:43:07.704094438Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7897c49469-b54nx,Uid:57105338-de16-4b1a-8b06-4fef5b6ff137,Namespace:calico-apiserver,Attempt:0,}" May 27 17:43:07.705545 containerd[1563]: time="2025-05-27T17:43:07.704982091Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7897c49469-5lbz5,Uid:381e8591-eecc-4ffe-bdb4-2226af7b4c96,Namespace:calico-apiserver,Attempt:0,}" May 27 17:43:07.983602 systemd-networkd[1436]: calid715f0a3252: Link UP May 27 17:43:07.987293 systemd-networkd[1436]: calid715f0a3252: Gained carrier May 27 17:43:08.037892 containerd[1563]: 2025-05-27 17:43:07.798 [INFO][4058] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 27 17:43:08.037892 containerd[1563]: 2025-05-27 17:43:07.827 [INFO][4058] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344--0--0--0d28c620010305ab17c8.c.flatcar--212911.internal-k8s-calico--apiserver--7897c49469--b54nx-eth0 calico-apiserver-7897c49469- calico-apiserver 57105338-de16-4b1a-8b06-4fef5b6ff137 832 0 2025-05-27 17:42:41 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7897c49469 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal calico-apiserver-7897c49469-b54nx eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calid715f0a3252 [] [] }} ContainerID="9926f125d439e2073e54e71dc0fc1da206b31e3310e9949ae0e01f327623737f" Namespace="calico-apiserver" Pod="calico-apiserver-7897c49469-b54nx" WorkloadEndpoint="ci--4344--0--0--0d28c620010305ab17c8.c.flatcar--212911.internal-k8s-calico--apiserver--7897c49469--b54nx-" May 27 17:43:08.037892 containerd[1563]: 2025-05-27 17:43:07.827 [INFO][4058] cni-plugin/k8s.go 74: Extracted identifiers for 
CmdAddK8s ContainerID="9926f125d439e2073e54e71dc0fc1da206b31e3310e9949ae0e01f327623737f" Namespace="calico-apiserver" Pod="calico-apiserver-7897c49469-b54nx" WorkloadEndpoint="ci--4344--0--0--0d28c620010305ab17c8.c.flatcar--212911.internal-k8s-calico--apiserver--7897c49469--b54nx-eth0" May 27 17:43:08.037892 containerd[1563]: 2025-05-27 17:43:07.912 [INFO][4095] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9926f125d439e2073e54e71dc0fc1da206b31e3310e9949ae0e01f327623737f" HandleID="k8s-pod-network.9926f125d439e2073e54e71dc0fc1da206b31e3310e9949ae0e01f327623737f" Workload="ci--4344--0--0--0d28c620010305ab17c8.c.flatcar--212911.internal-k8s-calico--apiserver--7897c49469--b54nx-eth0" May 27 17:43:08.038870 containerd[1563]: 2025-05-27 17:43:07.913 [INFO][4095] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9926f125d439e2073e54e71dc0fc1da206b31e3310e9949ae0e01f327623737f" HandleID="k8s-pod-network.9926f125d439e2073e54e71dc0fc1da206b31e3310e9949ae0e01f327623737f" Workload="ci--4344--0--0--0d28c620010305ab17c8.c.flatcar--212911.internal-k8s-calico--apiserver--7897c49469--b54nx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031b4d0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal", "pod":"calico-apiserver-7897c49469-b54nx", "timestamp":"2025-05-27 17:43:07.911793373 +0000 UTC"}, Hostname:"ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 17:43:08.038870 containerd[1563]: 2025-05-27 17:43:07.913 [INFO][4095] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 17:43:08.038870 containerd[1563]: 2025-05-27 17:43:07.913 [INFO][4095] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 27 17:43:08.038870 containerd[1563]: 2025-05-27 17:43:07.913 [INFO][4095] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal' May 27 17:43:08.038870 containerd[1563]: 2025-05-27 17:43:07.925 [INFO][4095] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9926f125d439e2073e54e71dc0fc1da206b31e3310e9949ae0e01f327623737f" host="ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal" May 27 17:43:08.038870 containerd[1563]: 2025-05-27 17:43:07.935 [INFO][4095] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal" May 27 17:43:08.038870 containerd[1563]: 2025-05-27 17:43:07.945 [INFO][4095] ipam/ipam.go 511: Trying affinity for 192.168.28.64/26 host="ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal" May 27 17:43:08.038870 containerd[1563]: 2025-05-27 17:43:07.949 [INFO][4095] ipam/ipam.go 158: Attempting to load block cidr=192.168.28.64/26 host="ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal" May 27 17:43:08.039442 containerd[1563]: 2025-05-27 17:43:07.952 [INFO][4095] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.28.64/26 host="ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal" May 27 17:43:08.039442 containerd[1563]: 2025-05-27 17:43:07.953 [INFO][4095] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.28.64/26 handle="k8s-pod-network.9926f125d439e2073e54e71dc0fc1da206b31e3310e9949ae0e01f327623737f" host="ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal" May 27 17:43:08.039442 containerd[1563]: 2025-05-27 17:43:07.955 [INFO][4095] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9926f125d439e2073e54e71dc0fc1da206b31e3310e9949ae0e01f327623737f May 27 17:43:08.039442 containerd[1563]: 2025-05-27 17:43:07.961 [INFO][4095] ipam/ipam.go 1243: Writing block in order to claim IPs 
block=192.168.28.64/26 handle="k8s-pod-network.9926f125d439e2073e54e71dc0fc1da206b31e3310e9949ae0e01f327623737f" host="ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal" May 27 17:43:08.039442 containerd[1563]: 2025-05-27 17:43:07.968 [INFO][4095] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.28.66/26] block=192.168.28.64/26 handle="k8s-pod-network.9926f125d439e2073e54e71dc0fc1da206b31e3310e9949ae0e01f327623737f" host="ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal" May 27 17:43:08.039442 containerd[1563]: 2025-05-27 17:43:07.968 [INFO][4095] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.28.66/26] handle="k8s-pod-network.9926f125d439e2073e54e71dc0fc1da206b31e3310e9949ae0e01f327623737f" host="ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal" May 27 17:43:08.039442 containerd[1563]: 2025-05-27 17:43:07.969 [INFO][4095] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 27 17:43:08.039442 containerd[1563]: 2025-05-27 17:43:07.969 [INFO][4095] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.28.66/26] IPv6=[] ContainerID="9926f125d439e2073e54e71dc0fc1da206b31e3310e9949ae0e01f327623737f" HandleID="k8s-pod-network.9926f125d439e2073e54e71dc0fc1da206b31e3310e9949ae0e01f327623737f" Workload="ci--4344--0--0--0d28c620010305ab17c8.c.flatcar--212911.internal-k8s-calico--apiserver--7897c49469--b54nx-eth0" May 27 17:43:08.040372 containerd[1563]: 2025-05-27 17:43:07.972 [INFO][4058] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9926f125d439e2073e54e71dc0fc1da206b31e3310e9949ae0e01f327623737f" Namespace="calico-apiserver" Pod="calico-apiserver-7897c49469-b54nx" WorkloadEndpoint="ci--4344--0--0--0d28c620010305ab17c8.c.flatcar--212911.internal-k8s-calico--apiserver--7897c49469--b54nx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ci--4344--0--0--0d28c620010305ab17c8.c.flatcar--212911.internal-k8s-calico--apiserver--7897c49469--b54nx-eth0", GenerateName:"calico-apiserver-7897c49469-", Namespace:"calico-apiserver", SelfLink:"", UID:"57105338-de16-4b1a-8b06-4fef5b6ff137", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 17, 42, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7897c49469", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal", ContainerID:"", Pod:"calico-apiserver-7897c49469-b54nx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.28.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid715f0a3252", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 17:43:08.040655 containerd[1563]: 2025-05-27 17:43:07.972 [INFO][4058] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.28.66/32] ContainerID="9926f125d439e2073e54e71dc0fc1da206b31e3310e9949ae0e01f327623737f" Namespace="calico-apiserver" Pod="calico-apiserver-7897c49469-b54nx" WorkloadEndpoint="ci--4344--0--0--0d28c620010305ab17c8.c.flatcar--212911.internal-k8s-calico--apiserver--7897c49469--b54nx-eth0" May 27 17:43:08.040655 containerd[1563]: 2025-05-27 17:43:07.973 [INFO][4058] 
cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid715f0a3252 ContainerID="9926f125d439e2073e54e71dc0fc1da206b31e3310e9949ae0e01f327623737f" Namespace="calico-apiserver" Pod="calico-apiserver-7897c49469-b54nx" WorkloadEndpoint="ci--4344--0--0--0d28c620010305ab17c8.c.flatcar--212911.internal-k8s-calico--apiserver--7897c49469--b54nx-eth0" May 27 17:43:08.040655 containerd[1563]: 2025-05-27 17:43:07.999 [INFO][4058] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9926f125d439e2073e54e71dc0fc1da206b31e3310e9949ae0e01f327623737f" Namespace="calico-apiserver" Pod="calico-apiserver-7897c49469-b54nx" WorkloadEndpoint="ci--4344--0--0--0d28c620010305ab17c8.c.flatcar--212911.internal-k8s-calico--apiserver--7897c49469--b54nx-eth0" May 27 17:43:08.040834 containerd[1563]: 2025-05-27 17:43:08.010 [INFO][4058] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9926f125d439e2073e54e71dc0fc1da206b31e3310e9949ae0e01f327623737f" Namespace="calico-apiserver" Pod="calico-apiserver-7897c49469-b54nx" WorkloadEndpoint="ci--4344--0--0--0d28c620010305ab17c8.c.flatcar--212911.internal-k8s-calico--apiserver--7897c49469--b54nx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344--0--0--0d28c620010305ab17c8.c.flatcar--212911.internal-k8s-calico--apiserver--7897c49469--b54nx-eth0", GenerateName:"calico-apiserver-7897c49469-", Namespace:"calico-apiserver", SelfLink:"", UID:"57105338-de16-4b1a-8b06-4fef5b6ff137", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 17, 42, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7897c49469", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal", ContainerID:"9926f125d439e2073e54e71dc0fc1da206b31e3310e9949ae0e01f327623737f", Pod:"calico-apiserver-7897c49469-b54nx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.28.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid715f0a3252", MAC:"62:45:41:e2:09:8a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 17:43:08.040834 containerd[1563]: 2025-05-27 17:43:08.034 [INFO][4058] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9926f125d439e2073e54e71dc0fc1da206b31e3310e9949ae0e01f327623737f" Namespace="calico-apiserver" Pod="calico-apiserver-7897c49469-b54nx" WorkloadEndpoint="ci--4344--0--0--0d28c620010305ab17c8.c.flatcar--212911.internal-k8s-calico--apiserver--7897c49469--b54nx-eth0" May 27 17:43:08.093986 containerd[1563]: time="2025-05-27T17:43:08.092911400Z" level=info msg="connecting to shim 9926f125d439e2073e54e71dc0fc1da206b31e3310e9949ae0e01f327623737f" address="unix:///run/containerd/s/43a124965c6472e2968287445bfb8e0a58547a6f982616f33c89b7eccc073216" namespace=k8s.io protocol=ttrpc version=3 May 27 17:43:08.116473 systemd-networkd[1436]: cali4e0e1b7a481: Link UP May 27 17:43:08.120547 systemd-networkd[1436]: cali4e0e1b7a481: Gained carrier May 27 17:43:08.167003 containerd[1563]: 2025-05-27 17:43:07.792 [INFO][4056] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 27 17:43:08.167003 containerd[1563]: 2025-05-27 
17:43:07.817 [INFO][4056] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344--0--0--0d28c620010305ab17c8.c.flatcar--212911.internal-k8s-calico--apiserver--7897c49469--5lbz5-eth0 calico-apiserver-7897c49469- calico-apiserver 381e8591-eecc-4ffe-bdb4-2226af7b4c96 833 0 2025-05-27 17:42:41 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7897c49469 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal calico-apiserver-7897c49469-5lbz5 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali4e0e1b7a481 [] [] }} ContainerID="57fd234fe359fe0ade5bf6257a5d86430bfb51caf2b1e4c8fbee69e5c531f3fa" Namespace="calico-apiserver" Pod="calico-apiserver-7897c49469-5lbz5" WorkloadEndpoint="ci--4344--0--0--0d28c620010305ab17c8.c.flatcar--212911.internal-k8s-calico--apiserver--7897c49469--5lbz5-" May 27 17:43:08.167003 containerd[1563]: 2025-05-27 17:43:07.817 [INFO][4056] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="57fd234fe359fe0ade5bf6257a5d86430bfb51caf2b1e4c8fbee69e5c531f3fa" Namespace="calico-apiserver" Pod="calico-apiserver-7897c49469-5lbz5" WorkloadEndpoint="ci--4344--0--0--0d28c620010305ab17c8.c.flatcar--212911.internal-k8s-calico--apiserver--7897c49469--5lbz5-eth0" May 27 17:43:08.167003 containerd[1563]: 2025-05-27 17:43:07.912 [INFO][4090] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="57fd234fe359fe0ade5bf6257a5d86430bfb51caf2b1e4c8fbee69e5c531f3fa" HandleID="k8s-pod-network.57fd234fe359fe0ade5bf6257a5d86430bfb51caf2b1e4c8fbee69e5c531f3fa" Workload="ci--4344--0--0--0d28c620010305ab17c8.c.flatcar--212911.internal-k8s-calico--apiserver--7897c49469--5lbz5-eth0" May 27 17:43:08.167003 containerd[1563]: 
2025-05-27 17:43:07.913 [INFO][4090] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="57fd234fe359fe0ade5bf6257a5d86430bfb51caf2b1e4c8fbee69e5c531f3fa" HandleID="k8s-pod-network.57fd234fe359fe0ade5bf6257a5d86430bfb51caf2b1e4c8fbee69e5c531f3fa" Workload="ci--4344--0--0--0d28c620010305ab17c8.c.flatcar--212911.internal-k8s-calico--apiserver--7897c49469--5lbz5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d3e40), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal", "pod":"calico-apiserver-7897c49469-5lbz5", "timestamp":"2025-05-27 17:43:07.911208207 +0000 UTC"}, Hostname:"ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 17:43:08.167003 containerd[1563]: 2025-05-27 17:43:07.914 [INFO][4090] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 17:43:08.167003 containerd[1563]: 2025-05-27 17:43:07.968 [INFO][4090] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 27 17:43:08.167003 containerd[1563]: 2025-05-27 17:43:07.969 [INFO][4090] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal' May 27 17:43:08.167003 containerd[1563]: 2025-05-27 17:43:08.026 [INFO][4090] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.57fd234fe359fe0ade5bf6257a5d86430bfb51caf2b1e4c8fbee69e5c531f3fa" host="ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal" May 27 17:43:08.167003 containerd[1563]: 2025-05-27 17:43:08.040 [INFO][4090] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal" May 27 17:43:08.167003 containerd[1563]: 2025-05-27 17:43:08.053 [INFO][4090] ipam/ipam.go 511: Trying affinity for 192.168.28.64/26 host="ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal" May 27 17:43:08.167003 containerd[1563]: 2025-05-27 17:43:08.058 [INFO][4090] ipam/ipam.go 158: Attempting to load block cidr=192.168.28.64/26 host="ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal" May 27 17:43:08.167003 containerd[1563]: 2025-05-27 17:43:08.062 [INFO][4090] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.28.64/26 host="ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal" May 27 17:43:08.167003 containerd[1563]: 2025-05-27 17:43:08.062 [INFO][4090] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.28.64/26 handle="k8s-pod-network.57fd234fe359fe0ade5bf6257a5d86430bfb51caf2b1e4c8fbee69e5c531f3fa" host="ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal" May 27 17:43:08.167003 containerd[1563]: 2025-05-27 17:43:08.070 [INFO][4090] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.57fd234fe359fe0ade5bf6257a5d86430bfb51caf2b1e4c8fbee69e5c531f3fa May 27 17:43:08.167003 containerd[1563]: 2025-05-27 17:43:08.080 [INFO][4090] ipam/ipam.go 1243: Writing block in order to claim IPs 
block=192.168.28.64/26 handle="k8s-pod-network.57fd234fe359fe0ade5bf6257a5d86430bfb51caf2b1e4c8fbee69e5c531f3fa" host="ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal" May 27 17:43:08.167003 containerd[1563]: 2025-05-27 17:43:08.094 [INFO][4090] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.28.67/26] block=192.168.28.64/26 handle="k8s-pod-network.57fd234fe359fe0ade5bf6257a5d86430bfb51caf2b1e4c8fbee69e5c531f3fa" host="ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal" May 27 17:43:08.167003 containerd[1563]: 2025-05-27 17:43:08.095 [INFO][4090] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.28.67/26] handle="k8s-pod-network.57fd234fe359fe0ade5bf6257a5d86430bfb51caf2b1e4c8fbee69e5c531f3fa" host="ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal" May 27 17:43:08.167003 containerd[1563]: 2025-05-27 17:43:08.095 [INFO][4090] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 27 17:43:08.167003 containerd[1563]: 2025-05-27 17:43:08.095 [INFO][4090] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.28.67/26] IPv6=[] ContainerID="57fd234fe359fe0ade5bf6257a5d86430bfb51caf2b1e4c8fbee69e5c531f3fa" HandleID="k8s-pod-network.57fd234fe359fe0ade5bf6257a5d86430bfb51caf2b1e4c8fbee69e5c531f3fa" Workload="ci--4344--0--0--0d28c620010305ab17c8.c.flatcar--212911.internal-k8s-calico--apiserver--7897c49469--5lbz5-eth0" May 27 17:43:08.169524 containerd[1563]: 2025-05-27 17:43:08.104 [INFO][4056] cni-plugin/k8s.go 418: Populated endpoint ContainerID="57fd234fe359fe0ade5bf6257a5d86430bfb51caf2b1e4c8fbee69e5c531f3fa" Namespace="calico-apiserver" Pod="calico-apiserver-7897c49469-5lbz5" WorkloadEndpoint="ci--4344--0--0--0d28c620010305ab17c8.c.flatcar--212911.internal-k8s-calico--apiserver--7897c49469--5lbz5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ci--4344--0--0--0d28c620010305ab17c8.c.flatcar--212911.internal-k8s-calico--apiserver--7897c49469--5lbz5-eth0", GenerateName:"calico-apiserver-7897c49469-", Namespace:"calico-apiserver", SelfLink:"", UID:"381e8591-eecc-4ffe-bdb4-2226af7b4c96", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 17, 42, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7897c49469", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal", ContainerID:"", Pod:"calico-apiserver-7897c49469-5lbz5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.28.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4e0e1b7a481", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 17:43:08.169524 containerd[1563]: 2025-05-27 17:43:08.105 [INFO][4056] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.28.67/32] ContainerID="57fd234fe359fe0ade5bf6257a5d86430bfb51caf2b1e4c8fbee69e5c531f3fa" Namespace="calico-apiserver" Pod="calico-apiserver-7897c49469-5lbz5" WorkloadEndpoint="ci--4344--0--0--0d28c620010305ab17c8.c.flatcar--212911.internal-k8s-calico--apiserver--7897c49469--5lbz5-eth0" May 27 17:43:08.169524 containerd[1563]: 2025-05-27 17:43:08.105 [INFO][4056] 
cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4e0e1b7a481 ContainerID="57fd234fe359fe0ade5bf6257a5d86430bfb51caf2b1e4c8fbee69e5c531f3fa" Namespace="calico-apiserver" Pod="calico-apiserver-7897c49469-5lbz5" WorkloadEndpoint="ci--4344--0--0--0d28c620010305ab17c8.c.flatcar--212911.internal-k8s-calico--apiserver--7897c49469--5lbz5-eth0" May 27 17:43:08.169524 containerd[1563]: 2025-05-27 17:43:08.124 [INFO][4056] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="57fd234fe359fe0ade5bf6257a5d86430bfb51caf2b1e4c8fbee69e5c531f3fa" Namespace="calico-apiserver" Pod="calico-apiserver-7897c49469-5lbz5" WorkloadEndpoint="ci--4344--0--0--0d28c620010305ab17c8.c.flatcar--212911.internal-k8s-calico--apiserver--7897c49469--5lbz5-eth0" May 27 17:43:08.169524 containerd[1563]: 2025-05-27 17:43:08.125 [INFO][4056] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="57fd234fe359fe0ade5bf6257a5d86430bfb51caf2b1e4c8fbee69e5c531f3fa" Namespace="calico-apiserver" Pod="calico-apiserver-7897c49469-5lbz5" WorkloadEndpoint="ci--4344--0--0--0d28c620010305ab17c8.c.flatcar--212911.internal-k8s-calico--apiserver--7897c49469--5lbz5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344--0--0--0d28c620010305ab17c8.c.flatcar--212911.internal-k8s-calico--apiserver--7897c49469--5lbz5-eth0", GenerateName:"calico-apiserver-7897c49469-", Namespace:"calico-apiserver", SelfLink:"", UID:"381e8591-eecc-4ffe-bdb4-2226af7b4c96", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 17, 42, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7897c49469", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal", ContainerID:"57fd234fe359fe0ade5bf6257a5d86430bfb51caf2b1e4c8fbee69e5c531f3fa", Pod:"calico-apiserver-7897c49469-5lbz5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.28.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4e0e1b7a481", MAC:"7e:ac:d3:99:2c:c3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 17:43:08.169524 containerd[1563]: 2025-05-27 17:43:08.150 [INFO][4056] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="57fd234fe359fe0ade5bf6257a5d86430bfb51caf2b1e4c8fbee69e5c531f3fa" Namespace="calico-apiserver" Pod="calico-apiserver-7897c49469-5lbz5" WorkloadEndpoint="ci--4344--0--0--0d28c620010305ab17c8.c.flatcar--212911.internal-k8s-calico--apiserver--7897c49469--5lbz5-eth0" May 27 17:43:08.200763 systemd[1]: Started cri-containerd-9926f125d439e2073e54e71dc0fc1da206b31e3310e9949ae0e01f327623737f.scope - libcontainer container 9926f125d439e2073e54e71dc0fc1da206b31e3310e9949ae0e01f327623737f. 
May 27 17:43:08.242126 containerd[1563]: time="2025-05-27T17:43:08.239439957Z" level=info msg="connecting to shim 57fd234fe359fe0ade5bf6257a5d86430bfb51caf2b1e4c8fbee69e5c531f3fa" address="unix:///run/containerd/s/d263665d4010427e920dfbc5765c97a3688b4094a7faf67c31fae4d5de5e9952" namespace=k8s.io protocol=ttrpc version=3 May 27 17:43:08.284771 systemd[1]: Started cri-containerd-57fd234fe359fe0ade5bf6257a5d86430bfb51caf2b1e4c8fbee69e5c531f3fa.scope - libcontainer container 57fd234fe359fe0ade5bf6257a5d86430bfb51caf2b1e4c8fbee69e5c531f3fa. May 27 17:43:08.337534 containerd[1563]: time="2025-05-27T17:43:08.337460900Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7897c49469-b54nx,Uid:57105338-de16-4b1a-8b06-4fef5b6ff137,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"9926f125d439e2073e54e71dc0fc1da206b31e3310e9949ae0e01f327623737f\"" May 27 17:43:08.340468 containerd[1563]: time="2025-05-27T17:43:08.340428680Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\"" May 27 17:43:08.382140 containerd[1563]: time="2025-05-27T17:43:08.382035158Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7897c49469-5lbz5,Uid:381e8591-eecc-4ffe-bdb4-2226af7b4c96,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"57fd234fe359fe0ade5bf6257a5d86430bfb51caf2b1e4c8fbee69e5c531f3fa\"" May 27 17:43:08.703425 containerd[1563]: time="2025-05-27T17:43:08.703342406Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-hnfkh,Uid:17a31152-9656-4e7b-b59d-6595424c4d7e,Namespace:calico-system,Attempt:0,}" May 27 17:43:08.885162 systemd-networkd[1436]: cali62bdeab4ae8: Link UP May 27 17:43:08.887846 systemd-networkd[1436]: cali62bdeab4ae8: Gained carrier May 27 17:43:08.912509 containerd[1563]: 2025-05-27 17:43:08.752 [INFO][4225] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 27 17:43:08.912509 containerd[1563]: 2025-05-27 17:43:08.769 [INFO][4225] 
cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344--0--0--0d28c620010305ab17c8.c.flatcar--212911.internal-k8s-goldmane--78d55f7ddc--hnfkh-eth0 goldmane-78d55f7ddc- calico-system 17a31152-9656-4e7b-b59d-6595424c4d7e 834 0 2025-05-27 17:42:45 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:78d55f7ddc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal goldmane-78d55f7ddc-hnfkh eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali62bdeab4ae8 [] [] }} ContainerID="26a641555a119230b5ffe03a318b32d701d65c12684d13963ac99a929f66a2f1" Namespace="calico-system" Pod="goldmane-78d55f7ddc-hnfkh" WorkloadEndpoint="ci--4344--0--0--0d28c620010305ab17c8.c.flatcar--212911.internal-k8s-goldmane--78d55f7ddc--hnfkh-" May 27 17:43:08.912509 containerd[1563]: 2025-05-27 17:43:08.769 [INFO][4225] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="26a641555a119230b5ffe03a318b32d701d65c12684d13963ac99a929f66a2f1" Namespace="calico-system" Pod="goldmane-78d55f7ddc-hnfkh" WorkloadEndpoint="ci--4344--0--0--0d28c620010305ab17c8.c.flatcar--212911.internal-k8s-goldmane--78d55f7ddc--hnfkh-eth0" May 27 17:43:08.912509 containerd[1563]: 2025-05-27 17:43:08.826 [INFO][4235] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="26a641555a119230b5ffe03a318b32d701d65c12684d13963ac99a929f66a2f1" HandleID="k8s-pod-network.26a641555a119230b5ffe03a318b32d701d65c12684d13963ac99a929f66a2f1" Workload="ci--4344--0--0--0d28c620010305ab17c8.c.flatcar--212911.internal-k8s-goldmane--78d55f7ddc--hnfkh-eth0" May 27 17:43:08.912509 containerd[1563]: 2025-05-27 17:43:08.827 [INFO][4235] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="26a641555a119230b5ffe03a318b32d701d65c12684d13963ac99a929f66a2f1" 
HandleID="k8s-pod-network.26a641555a119230b5ffe03a318b32d701d65c12684d13963ac99a929f66a2f1" Workload="ci--4344--0--0--0d28c620010305ab17c8.c.flatcar--212911.internal-k8s-goldmane--78d55f7ddc--hnfkh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000232fd0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal", "pod":"goldmane-78d55f7ddc-hnfkh", "timestamp":"2025-05-27 17:43:08.826896585 +0000 UTC"}, Hostname:"ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 17:43:08.912509 containerd[1563]: 2025-05-27 17:43:08.827 [INFO][4235] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 17:43:08.912509 containerd[1563]: 2025-05-27 17:43:08.827 [INFO][4235] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 27 17:43:08.912509 containerd[1563]: 2025-05-27 17:43:08.827 [INFO][4235] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal' May 27 17:43:08.912509 containerd[1563]: 2025-05-27 17:43:08.838 [INFO][4235] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.26a641555a119230b5ffe03a318b32d701d65c12684d13963ac99a929f66a2f1" host="ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal" May 27 17:43:08.912509 containerd[1563]: 2025-05-27 17:43:08.844 [INFO][4235] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal" May 27 17:43:08.912509 containerd[1563]: 2025-05-27 17:43:08.850 [INFO][4235] ipam/ipam.go 511: Trying affinity for 192.168.28.64/26 host="ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal" May 27 17:43:08.912509 containerd[1563]: 2025-05-27 17:43:08.852 [INFO][4235] ipam/ipam.go 158: Attempting to load block cidr=192.168.28.64/26 host="ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal" May 27 17:43:08.912509 containerd[1563]: 2025-05-27 17:43:08.858 [INFO][4235] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.28.64/26 host="ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal" May 27 17:43:08.912509 containerd[1563]: 2025-05-27 17:43:08.858 [INFO][4235] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.28.64/26 handle="k8s-pod-network.26a641555a119230b5ffe03a318b32d701d65c12684d13963ac99a929f66a2f1" host="ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal" May 27 17:43:08.912509 containerd[1563]: 2025-05-27 17:43:08.863 [INFO][4235] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.26a641555a119230b5ffe03a318b32d701d65c12684d13963ac99a929f66a2f1 May 27 17:43:08.912509 containerd[1563]: 2025-05-27 17:43:08.868 [INFO][4235] ipam/ipam.go 1243: Writing block in order to claim IPs 
block=192.168.28.64/26 handle="k8s-pod-network.26a641555a119230b5ffe03a318b32d701d65c12684d13963ac99a929f66a2f1" host="ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal" May 27 17:43:08.912509 containerd[1563]: 2025-05-27 17:43:08.877 [INFO][4235] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.28.68/26] block=192.168.28.64/26 handle="k8s-pod-network.26a641555a119230b5ffe03a318b32d701d65c12684d13963ac99a929f66a2f1" host="ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal" May 27 17:43:08.912509 containerd[1563]: 2025-05-27 17:43:08.877 [INFO][4235] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.28.68/26] handle="k8s-pod-network.26a641555a119230b5ffe03a318b32d701d65c12684d13963ac99a929f66a2f1" host="ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal" May 27 17:43:08.912509 containerd[1563]: 2025-05-27 17:43:08.877 [INFO][4235] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 27 17:43:08.912509 containerd[1563]: 2025-05-27 17:43:08.878 [INFO][4235] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.28.68/26] IPv6=[] ContainerID="26a641555a119230b5ffe03a318b32d701d65c12684d13963ac99a929f66a2f1" HandleID="k8s-pod-network.26a641555a119230b5ffe03a318b32d701d65c12684d13963ac99a929f66a2f1" Workload="ci--4344--0--0--0d28c620010305ab17c8.c.flatcar--212911.internal-k8s-goldmane--78d55f7ddc--hnfkh-eth0" May 27 17:43:08.914591 containerd[1563]: 2025-05-27 17:43:08.880 [INFO][4225] cni-plugin/k8s.go 418: Populated endpoint ContainerID="26a641555a119230b5ffe03a318b32d701d65c12684d13963ac99a929f66a2f1" Namespace="calico-system" Pod="goldmane-78d55f7ddc-hnfkh" WorkloadEndpoint="ci--4344--0--0--0d28c620010305ab17c8.c.flatcar--212911.internal-k8s-goldmane--78d55f7ddc--hnfkh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ci--4344--0--0--0d28c620010305ab17c8.c.flatcar--212911.internal-k8s-goldmane--78d55f7ddc--hnfkh-eth0", GenerateName:"goldmane-78d55f7ddc-", Namespace:"calico-system", SelfLink:"", UID:"17a31152-9656-4e7b-b59d-6595424c4d7e", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 17, 42, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"78d55f7ddc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal", ContainerID:"", Pod:"goldmane-78d55f7ddc-hnfkh", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.28.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali62bdeab4ae8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 17:43:08.914591 containerd[1563]: 2025-05-27 17:43:08.881 [INFO][4225] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.28.68/32] ContainerID="26a641555a119230b5ffe03a318b32d701d65c12684d13963ac99a929f66a2f1" Namespace="calico-system" Pod="goldmane-78d55f7ddc-hnfkh" WorkloadEndpoint="ci--4344--0--0--0d28c620010305ab17c8.c.flatcar--212911.internal-k8s-goldmane--78d55f7ddc--hnfkh-eth0" May 27 17:43:08.914591 containerd[1563]: 2025-05-27 17:43:08.881 [INFO][4225] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali62bdeab4ae8 
ContainerID="26a641555a119230b5ffe03a318b32d701d65c12684d13963ac99a929f66a2f1" Namespace="calico-system" Pod="goldmane-78d55f7ddc-hnfkh" WorkloadEndpoint="ci--4344--0--0--0d28c620010305ab17c8.c.flatcar--212911.internal-k8s-goldmane--78d55f7ddc--hnfkh-eth0" May 27 17:43:08.914591 containerd[1563]: 2025-05-27 17:43:08.886 [INFO][4225] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="26a641555a119230b5ffe03a318b32d701d65c12684d13963ac99a929f66a2f1" Namespace="calico-system" Pod="goldmane-78d55f7ddc-hnfkh" WorkloadEndpoint="ci--4344--0--0--0d28c620010305ab17c8.c.flatcar--212911.internal-k8s-goldmane--78d55f7ddc--hnfkh-eth0" May 27 17:43:08.914591 containerd[1563]: 2025-05-27 17:43:08.887 [INFO][4225] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="26a641555a119230b5ffe03a318b32d701d65c12684d13963ac99a929f66a2f1" Namespace="calico-system" Pod="goldmane-78d55f7ddc-hnfkh" WorkloadEndpoint="ci--4344--0--0--0d28c620010305ab17c8.c.flatcar--212911.internal-k8s-goldmane--78d55f7ddc--hnfkh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344--0--0--0d28c620010305ab17c8.c.flatcar--212911.internal-k8s-goldmane--78d55f7ddc--hnfkh-eth0", GenerateName:"goldmane-78d55f7ddc-", Namespace:"calico-system", SelfLink:"", UID:"17a31152-9656-4e7b-b59d-6595424c4d7e", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 17, 42, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"78d55f7ddc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), 
ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal", ContainerID:"26a641555a119230b5ffe03a318b32d701d65c12684d13963ac99a929f66a2f1", Pod:"goldmane-78d55f7ddc-hnfkh", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.28.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali62bdeab4ae8", MAC:"9e:7b:75:27:ff:6e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 17:43:08.914591 containerd[1563]: 2025-05-27 17:43:08.906 [INFO][4225] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="26a641555a119230b5ffe03a318b32d701d65c12684d13963ac99a929f66a2f1" Namespace="calico-system" Pod="goldmane-78d55f7ddc-hnfkh" WorkloadEndpoint="ci--4344--0--0--0d28c620010305ab17c8.c.flatcar--212911.internal-k8s-goldmane--78d55f7ddc--hnfkh-eth0" May 27 17:43:08.949000 containerd[1563]: time="2025-05-27T17:43:08.948926282Z" level=info msg="connecting to shim 26a641555a119230b5ffe03a318b32d701d65c12684d13963ac99a929f66a2f1" address="unix:///run/containerd/s/199ce340895122218877b50ebb664d1e1055ddad00a94a5f9486367b6783198a" namespace=k8s.io protocol=ttrpc version=3 May 27 17:43:08.992480 systemd[1]: Started cri-containerd-26a641555a119230b5ffe03a318b32d701d65c12684d13963ac99a929f66a2f1.scope - libcontainer container 26a641555a119230b5ffe03a318b32d701d65c12684d13963ac99a929f66a2f1. 
May 27 17:43:09.071616 containerd[1563]: time="2025-05-27T17:43:09.071543723Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-hnfkh,Uid:17a31152-9656-4e7b-b59d-6595424c4d7e,Namespace:calico-system,Attempt:0,} returns sandbox id \"26a641555a119230b5ffe03a318b32d701d65c12684d13963ac99a929f66a2f1\"" May 27 17:43:09.316472 systemd-networkd[1436]: cali4e0e1b7a481: Gained IPv6LL May 27 17:43:09.700936 systemd-networkd[1436]: calid715f0a3252: Gained IPv6LL May 27 17:43:09.706190 containerd[1563]: time="2025-05-27T17:43:09.705964357Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-58b97d7dbc-z4gdm,Uid:a0b18132-1e2b-442d-9188-edbebd10f7b5,Namespace:calico-system,Attempt:0,}" May 27 17:43:09.911122 systemd-networkd[1436]: calif12cf5827a3: Link UP May 27 17:43:09.914504 systemd-networkd[1436]: calif12cf5827a3: Gained carrier May 27 17:43:09.942756 containerd[1563]: 2025-05-27 17:43:09.766 [INFO][4314] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 27 17:43:09.942756 containerd[1563]: 2025-05-27 17:43:09.785 [INFO][4314] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344--0--0--0d28c620010305ab17c8.c.flatcar--212911.internal-k8s-calico--kube--controllers--58b97d7dbc--z4gdm-eth0 calico-kube-controllers-58b97d7dbc- calico-system a0b18132-1e2b-442d-9188-edbebd10f7b5 822 0 2025-05-27 17:42:46 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:58b97d7dbc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal calico-kube-controllers-58b97d7dbc-z4gdm eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calif12cf5827a3 [] [] }} 
ContainerID="abf575551316cfd29049147c6cd172a57973b88f780e5d0a0d3deb331063c7e2" Namespace="calico-system" Pod="calico-kube-controllers-58b97d7dbc-z4gdm" WorkloadEndpoint="ci--4344--0--0--0d28c620010305ab17c8.c.flatcar--212911.internal-k8s-calico--kube--controllers--58b97d7dbc--z4gdm-" May 27 17:43:09.942756 containerd[1563]: 2025-05-27 17:43:09.785 [INFO][4314] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="abf575551316cfd29049147c6cd172a57973b88f780e5d0a0d3deb331063c7e2" Namespace="calico-system" Pod="calico-kube-controllers-58b97d7dbc-z4gdm" WorkloadEndpoint="ci--4344--0--0--0d28c620010305ab17c8.c.flatcar--212911.internal-k8s-calico--kube--controllers--58b97d7dbc--z4gdm-eth0" May 27 17:43:09.942756 containerd[1563]: 2025-05-27 17:43:09.835 [INFO][4329] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="abf575551316cfd29049147c6cd172a57973b88f780e5d0a0d3deb331063c7e2" HandleID="k8s-pod-network.abf575551316cfd29049147c6cd172a57973b88f780e5d0a0d3deb331063c7e2" Workload="ci--4344--0--0--0d28c620010305ab17c8.c.flatcar--212911.internal-k8s-calico--kube--controllers--58b97d7dbc--z4gdm-eth0" May 27 17:43:09.942756 containerd[1563]: 2025-05-27 17:43:09.835 [INFO][4329] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="abf575551316cfd29049147c6cd172a57973b88f780e5d0a0d3deb331063c7e2" HandleID="k8s-pod-network.abf575551316cfd29049147c6cd172a57973b88f780e5d0a0d3deb331063c7e2" Workload="ci--4344--0--0--0d28c620010305ab17c8.c.flatcar--212911.internal-k8s-calico--kube--controllers--58b97d7dbc--z4gdm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d3a60), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal", "pod":"calico-kube-controllers-58b97d7dbc-z4gdm", "timestamp":"2025-05-27 17:43:09.835132938 +0000 UTC"}, Hostname:"ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, 
IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 17:43:09.942756 containerd[1563]: 2025-05-27 17:43:09.835 [INFO][4329] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 17:43:09.942756 containerd[1563]: 2025-05-27 17:43:09.835 [INFO][4329] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 27 17:43:09.942756 containerd[1563]: 2025-05-27 17:43:09.835 [INFO][4329] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal' May 27 17:43:09.942756 containerd[1563]: 2025-05-27 17:43:09.846 [INFO][4329] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.abf575551316cfd29049147c6cd172a57973b88f780e5d0a0d3deb331063c7e2" host="ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal" May 27 17:43:09.942756 containerd[1563]: 2025-05-27 17:43:09.853 [INFO][4329] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal" May 27 17:43:09.942756 containerd[1563]: 2025-05-27 17:43:09.859 [INFO][4329] ipam/ipam.go 511: Trying affinity for 192.168.28.64/26 host="ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal" May 27 17:43:09.942756 containerd[1563]: 2025-05-27 17:43:09.862 [INFO][4329] ipam/ipam.go 158: Attempting to load block cidr=192.168.28.64/26 host="ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal" May 27 17:43:09.942756 containerd[1563]: 2025-05-27 17:43:09.865 [INFO][4329] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.28.64/26 host="ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal" May 27 17:43:09.942756 containerd[1563]: 2025-05-27 17:43:09.866 [INFO][4329] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.28.64/26 
handle="k8s-pod-network.abf575551316cfd29049147c6cd172a57973b88f780e5d0a0d3deb331063c7e2" host="ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal" May 27 17:43:09.942756 containerd[1563]: 2025-05-27 17:43:09.868 [INFO][4329] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.abf575551316cfd29049147c6cd172a57973b88f780e5d0a0d3deb331063c7e2 May 27 17:43:09.942756 containerd[1563]: 2025-05-27 17:43:09.877 [INFO][4329] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.28.64/26 handle="k8s-pod-network.abf575551316cfd29049147c6cd172a57973b88f780e5d0a0d3deb331063c7e2" host="ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal" May 27 17:43:09.942756 containerd[1563]: 2025-05-27 17:43:09.893 [INFO][4329] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.28.69/26] block=192.168.28.64/26 handle="k8s-pod-network.abf575551316cfd29049147c6cd172a57973b88f780e5d0a0d3deb331063c7e2" host="ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal" May 27 17:43:09.942756 containerd[1563]: 2025-05-27 17:43:09.893 [INFO][4329] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.28.69/26] handle="k8s-pod-network.abf575551316cfd29049147c6cd172a57973b88f780e5d0a0d3deb331063c7e2" host="ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal" May 27 17:43:09.942756 containerd[1563]: 2025-05-27 17:43:09.894 [INFO][4329] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 27 17:43:09.942756 containerd[1563]: 2025-05-27 17:43:09.894 [INFO][4329] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.28.69/26] IPv6=[] ContainerID="abf575551316cfd29049147c6cd172a57973b88f780e5d0a0d3deb331063c7e2" HandleID="k8s-pod-network.abf575551316cfd29049147c6cd172a57973b88f780e5d0a0d3deb331063c7e2" Workload="ci--4344--0--0--0d28c620010305ab17c8.c.flatcar--212911.internal-k8s-calico--kube--controllers--58b97d7dbc--z4gdm-eth0" May 27 17:43:09.945968 containerd[1563]: 2025-05-27 17:43:09.897 [INFO][4314] cni-plugin/k8s.go 418: Populated endpoint ContainerID="abf575551316cfd29049147c6cd172a57973b88f780e5d0a0d3deb331063c7e2" Namespace="calico-system" Pod="calico-kube-controllers-58b97d7dbc-z4gdm" WorkloadEndpoint="ci--4344--0--0--0d28c620010305ab17c8.c.flatcar--212911.internal-k8s-calico--kube--controllers--58b97d7dbc--z4gdm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344--0--0--0d28c620010305ab17c8.c.flatcar--212911.internal-k8s-calico--kube--controllers--58b97d7dbc--z4gdm-eth0", GenerateName:"calico-kube-controllers-58b97d7dbc-", Namespace:"calico-system", SelfLink:"", UID:"a0b18132-1e2b-442d-9188-edbebd10f7b5", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 17, 42, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"58b97d7dbc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", 
Node:"ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal", ContainerID:"", Pod:"calico-kube-controllers-58b97d7dbc-z4gdm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.28.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif12cf5827a3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 17:43:09.945968 containerd[1563]: 2025-05-27 17:43:09.897 [INFO][4314] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.28.69/32] ContainerID="abf575551316cfd29049147c6cd172a57973b88f780e5d0a0d3deb331063c7e2" Namespace="calico-system" Pod="calico-kube-controllers-58b97d7dbc-z4gdm" WorkloadEndpoint="ci--4344--0--0--0d28c620010305ab17c8.c.flatcar--212911.internal-k8s-calico--kube--controllers--58b97d7dbc--z4gdm-eth0" May 27 17:43:09.945968 containerd[1563]: 2025-05-27 17:43:09.897 [INFO][4314] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif12cf5827a3 ContainerID="abf575551316cfd29049147c6cd172a57973b88f780e5d0a0d3deb331063c7e2" Namespace="calico-system" Pod="calico-kube-controllers-58b97d7dbc-z4gdm" WorkloadEndpoint="ci--4344--0--0--0d28c620010305ab17c8.c.flatcar--212911.internal-k8s-calico--kube--controllers--58b97d7dbc--z4gdm-eth0" May 27 17:43:09.945968 containerd[1563]: 2025-05-27 17:43:09.917 [INFO][4314] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="abf575551316cfd29049147c6cd172a57973b88f780e5d0a0d3deb331063c7e2" Namespace="calico-system" Pod="calico-kube-controllers-58b97d7dbc-z4gdm" WorkloadEndpoint="ci--4344--0--0--0d28c620010305ab17c8.c.flatcar--212911.internal-k8s-calico--kube--controllers--58b97d7dbc--z4gdm-eth0" May 27 17:43:09.945968 containerd[1563]: 2025-05-27 17:43:09.918 [INFO][4314] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="abf575551316cfd29049147c6cd172a57973b88f780e5d0a0d3deb331063c7e2" Namespace="calico-system" Pod="calico-kube-controllers-58b97d7dbc-z4gdm" WorkloadEndpoint="ci--4344--0--0--0d28c620010305ab17c8.c.flatcar--212911.internal-k8s-calico--kube--controllers--58b97d7dbc--z4gdm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344--0--0--0d28c620010305ab17c8.c.flatcar--212911.internal-k8s-calico--kube--controllers--58b97d7dbc--z4gdm-eth0", GenerateName:"calico-kube-controllers-58b97d7dbc-", Namespace:"calico-system", SelfLink:"", UID:"a0b18132-1e2b-442d-9188-edbebd10f7b5", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 17, 42, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"58b97d7dbc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal", ContainerID:"abf575551316cfd29049147c6cd172a57973b88f780e5d0a0d3deb331063c7e2", Pod:"calico-kube-controllers-58b97d7dbc-z4gdm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.28.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif12cf5827a3", MAC:"ce:68:ef:9a:46:24", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 
27 17:43:09.945968 containerd[1563]: 2025-05-27 17:43:09.940 [INFO][4314] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="abf575551316cfd29049147c6cd172a57973b88f780e5d0a0d3deb331063c7e2" Namespace="calico-system" Pod="calico-kube-controllers-58b97d7dbc-z4gdm" WorkloadEndpoint="ci--4344--0--0--0d28c620010305ab17c8.c.flatcar--212911.internal-k8s-calico--kube--controllers--58b97d7dbc--z4gdm-eth0" May 27 17:43:09.986913 containerd[1563]: time="2025-05-27T17:43:09.986782692Z" level=info msg="connecting to shim abf575551316cfd29049147c6cd172a57973b88f780e5d0a0d3deb331063c7e2" address="unix:///run/containerd/s/6e90c95936d3524ad070ade01ce27aa4424561787cb4081a9ad5b211eca50f8b" namespace=k8s.io protocol=ttrpc version=3 May 27 17:43:10.043533 systemd[1]: Started cri-containerd-abf575551316cfd29049147c6cd172a57973b88f780e5d0a0d3deb331063c7e2.scope - libcontainer container abf575551316cfd29049147c6cd172a57973b88f780e5d0a0d3deb331063c7e2. May 27 17:43:10.132806 containerd[1563]: time="2025-05-27T17:43:10.132755336Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-58b97d7dbc-z4gdm,Uid:a0b18132-1e2b-442d-9188-edbebd10f7b5,Namespace:calico-system,Attempt:0,} returns sandbox id \"abf575551316cfd29049147c6cd172a57973b88f780e5d0a0d3deb331063c7e2\"" May 27 17:43:10.469505 systemd-networkd[1436]: cali62bdeab4ae8: Gained IPv6LL May 27 17:43:10.704639 containerd[1563]: time="2025-05-27T17:43:10.704572185Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4vmcb,Uid:bae642d2-ac66-4c94-b401-a475f27bd04d,Namespace:calico-system,Attempt:0,}" May 27 17:43:11.067565 systemd-networkd[1436]: cali128f8b31a9f: Link UP May 27 17:43:11.071992 systemd-networkd[1436]: cali128f8b31a9f: Gained carrier May 27 17:43:11.106641 containerd[1563]: 2025-05-27 17:43:10.783 [INFO][4414] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 27 17:43:11.106641 containerd[1563]: 2025-05-27 17:43:10.816 [INFO][4414] 
cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344--0--0--0d28c620010305ab17c8.c.flatcar--212911.internal-k8s-csi--node--driver--4vmcb-eth0 csi-node-driver- calico-system bae642d2-ac66-4c94-b401-a475f27bd04d 724 0 2025-05-27 17:42:46 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:78f6f74485 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal csi-node-driver-4vmcb eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali128f8b31a9f [] [] }} ContainerID="6ab463930bf20f5ce272ea7e7a2376b0bfc6c47c6a6305b6a0ef65d9e8a6be4d" Namespace="calico-system" Pod="csi-node-driver-4vmcb" WorkloadEndpoint="ci--4344--0--0--0d28c620010305ab17c8.c.flatcar--212911.internal-k8s-csi--node--driver--4vmcb-" May 27 17:43:11.106641 containerd[1563]: 2025-05-27 17:43:10.817 [INFO][4414] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6ab463930bf20f5ce272ea7e7a2376b0bfc6c47c6a6305b6a0ef65d9e8a6be4d" Namespace="calico-system" Pod="csi-node-driver-4vmcb" WorkloadEndpoint="ci--4344--0--0--0d28c620010305ab17c8.c.flatcar--212911.internal-k8s-csi--node--driver--4vmcb-eth0" May 27 17:43:11.106641 containerd[1563]: 2025-05-27 17:43:10.967 [INFO][4428] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6ab463930bf20f5ce272ea7e7a2376b0bfc6c47c6a6305b6a0ef65d9e8a6be4d" HandleID="k8s-pod-network.6ab463930bf20f5ce272ea7e7a2376b0bfc6c47c6a6305b6a0ef65d9e8a6be4d" Workload="ci--4344--0--0--0d28c620010305ab17c8.c.flatcar--212911.internal-k8s-csi--node--driver--4vmcb-eth0" May 27 17:43:11.106641 containerd[1563]: 2025-05-27 17:43:10.967 [INFO][4428] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="6ab463930bf20f5ce272ea7e7a2376b0bfc6c47c6a6305b6a0ef65d9e8a6be4d" HandleID="k8s-pod-network.6ab463930bf20f5ce272ea7e7a2376b0bfc6c47c6a6305b6a0ef65d9e8a6be4d" Workload="ci--4344--0--0--0d28c620010305ab17c8.c.flatcar--212911.internal-k8s-csi--node--driver--4vmcb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000182730), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal", "pod":"csi-node-driver-4vmcb", "timestamp":"2025-05-27 17:43:10.966798519 +0000 UTC"}, Hostname:"ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 17:43:11.106641 containerd[1563]: 2025-05-27 17:43:10.967 [INFO][4428] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 17:43:11.106641 containerd[1563]: 2025-05-27 17:43:10.967 [INFO][4428] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 27 17:43:11.106641 containerd[1563]: 2025-05-27 17:43:10.967 [INFO][4428] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal' May 27 17:43:11.106641 containerd[1563]: 2025-05-27 17:43:10.987 [INFO][4428] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6ab463930bf20f5ce272ea7e7a2376b0bfc6c47c6a6305b6a0ef65d9e8a6be4d" host="ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal" May 27 17:43:11.106641 containerd[1563]: 2025-05-27 17:43:11.000 [INFO][4428] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal" May 27 17:43:11.106641 containerd[1563]: 2025-05-27 17:43:11.013 [INFO][4428] ipam/ipam.go 511: Trying affinity for 192.168.28.64/26 host="ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal" May 27 17:43:11.106641 containerd[1563]: 2025-05-27 17:43:11.018 [INFO][4428] ipam/ipam.go 158: Attempting to load block cidr=192.168.28.64/26 host="ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal" May 27 17:43:11.106641 containerd[1563]: 2025-05-27 17:43:11.026 [INFO][4428] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.28.64/26 host="ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal" May 27 17:43:11.106641 containerd[1563]: 2025-05-27 17:43:11.026 [INFO][4428] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.28.64/26 handle="k8s-pod-network.6ab463930bf20f5ce272ea7e7a2376b0bfc6c47c6a6305b6a0ef65d9e8a6be4d" host="ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal" May 27 17:43:11.106641 containerd[1563]: 2025-05-27 17:43:11.029 [INFO][4428] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6ab463930bf20f5ce272ea7e7a2376b0bfc6c47c6a6305b6a0ef65d9e8a6be4d May 27 17:43:11.106641 containerd[1563]: 2025-05-27 17:43:11.039 [INFO][4428] ipam/ipam.go 1243: Writing block in order to claim IPs 
block=192.168.28.64/26 handle="k8s-pod-network.6ab463930bf20f5ce272ea7e7a2376b0bfc6c47c6a6305b6a0ef65d9e8a6be4d" host="ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal" May 27 17:43:11.106641 containerd[1563]: 2025-05-27 17:43:11.054 [INFO][4428] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.28.70/26] block=192.168.28.64/26 handle="k8s-pod-network.6ab463930bf20f5ce272ea7e7a2376b0bfc6c47c6a6305b6a0ef65d9e8a6be4d" host="ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal" May 27 17:43:11.106641 containerd[1563]: 2025-05-27 17:43:11.054 [INFO][4428] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.28.70/26] handle="k8s-pod-network.6ab463930bf20f5ce272ea7e7a2376b0bfc6c47c6a6305b6a0ef65d9e8a6be4d" host="ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal" May 27 17:43:11.106641 containerd[1563]: 2025-05-27 17:43:11.054 [INFO][4428] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 27 17:43:11.106641 containerd[1563]: 2025-05-27 17:43:11.054 [INFO][4428] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.28.70/26] IPv6=[] ContainerID="6ab463930bf20f5ce272ea7e7a2376b0bfc6c47c6a6305b6a0ef65d9e8a6be4d" HandleID="k8s-pod-network.6ab463930bf20f5ce272ea7e7a2376b0bfc6c47c6a6305b6a0ef65d9e8a6be4d" Workload="ci--4344--0--0--0d28c620010305ab17c8.c.flatcar--212911.internal-k8s-csi--node--driver--4vmcb-eth0" May 27 17:43:11.108598 containerd[1563]: 2025-05-27 17:43:11.057 [INFO][4414] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6ab463930bf20f5ce272ea7e7a2376b0bfc6c47c6a6305b6a0ef65d9e8a6be4d" Namespace="calico-system" Pod="csi-node-driver-4vmcb" WorkloadEndpoint="ci--4344--0--0--0d28c620010305ab17c8.c.flatcar--212911.internal-k8s-csi--node--driver--4vmcb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344--0--0--0d28c620010305ab17c8.c.flatcar--212911.internal-k8s-csi--node--driver--4vmcb-eth0", 
GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"bae642d2-ac66-4c94-b401-a475f27bd04d", ResourceVersion:"724", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 17, 42, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78f6f74485", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal", ContainerID:"", Pod:"csi-node-driver-4vmcb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.28.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali128f8b31a9f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 17:43:11.108598 containerd[1563]: 2025-05-27 17:43:11.058 [INFO][4414] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.28.70/32] ContainerID="6ab463930bf20f5ce272ea7e7a2376b0bfc6c47c6a6305b6a0ef65d9e8a6be4d" Namespace="calico-system" Pod="csi-node-driver-4vmcb" WorkloadEndpoint="ci--4344--0--0--0d28c620010305ab17c8.c.flatcar--212911.internal-k8s-csi--node--driver--4vmcb-eth0" May 27 17:43:11.108598 containerd[1563]: 2025-05-27 17:43:11.058 [INFO][4414] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali128f8b31a9f ContainerID="6ab463930bf20f5ce272ea7e7a2376b0bfc6c47c6a6305b6a0ef65d9e8a6be4d" Namespace="calico-system" 
Pod="csi-node-driver-4vmcb" WorkloadEndpoint="ci--4344--0--0--0d28c620010305ab17c8.c.flatcar--212911.internal-k8s-csi--node--driver--4vmcb-eth0" May 27 17:43:11.108598 containerd[1563]: 2025-05-27 17:43:11.073 [INFO][4414] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6ab463930bf20f5ce272ea7e7a2376b0bfc6c47c6a6305b6a0ef65d9e8a6be4d" Namespace="calico-system" Pod="csi-node-driver-4vmcb" WorkloadEndpoint="ci--4344--0--0--0d28c620010305ab17c8.c.flatcar--212911.internal-k8s-csi--node--driver--4vmcb-eth0" May 27 17:43:11.108598 containerd[1563]: 2025-05-27 17:43:11.074 [INFO][4414] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6ab463930bf20f5ce272ea7e7a2376b0bfc6c47c6a6305b6a0ef65d9e8a6be4d" Namespace="calico-system" Pod="csi-node-driver-4vmcb" WorkloadEndpoint="ci--4344--0--0--0d28c620010305ab17c8.c.flatcar--212911.internal-k8s-csi--node--driver--4vmcb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344--0--0--0d28c620010305ab17c8.c.flatcar--212911.internal-k8s-csi--node--driver--4vmcb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"bae642d2-ac66-4c94-b401-a475f27bd04d", ResourceVersion:"724", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 17, 42, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78f6f74485", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, 
Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal", ContainerID:"6ab463930bf20f5ce272ea7e7a2376b0bfc6c47c6a6305b6a0ef65d9e8a6be4d", Pod:"csi-node-driver-4vmcb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.28.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali128f8b31a9f", MAC:"ce:95:66:ab:a6:e3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 17:43:11.108598 containerd[1563]: 2025-05-27 17:43:11.101 [INFO][4414] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6ab463930bf20f5ce272ea7e7a2376b0bfc6c47c6a6305b6a0ef65d9e8a6be4d" Namespace="calico-system" Pod="csi-node-driver-4vmcb" WorkloadEndpoint="ci--4344--0--0--0d28c620010305ab17c8.c.flatcar--212911.internal-k8s-csi--node--driver--4vmcb-eth0" May 27 17:43:11.110616 kubelet[2767]: I0527 17:43:11.110576 2767 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 27 17:43:11.191318 containerd[1563]: time="2025-05-27T17:43:11.191088149Z" level=info msg="connecting to shim 6ab463930bf20f5ce272ea7e7a2376b0bfc6c47c6a6305b6a0ef65d9e8a6be4d" address="unix:///run/containerd/s/65b8641bb3d1167643f7889ef7e333e7dc35d1c6100e532788f0ca7847b2d04d" namespace=k8s.io protocol=ttrpc version=3 May 27 17:43:11.270829 systemd[1]: Started cri-containerd-6ab463930bf20f5ce272ea7e7a2376b0bfc6c47c6a6305b6a0ef65d9e8a6be4d.scope - libcontainer container 6ab463930bf20f5ce272ea7e7a2376b0bfc6c47c6a6305b6a0ef65d9e8a6be4d. 
May 27 17:43:11.354082 containerd[1563]: time="2025-05-27T17:43:11.353885494Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4vmcb,Uid:bae642d2-ac66-4c94-b401-a475f27bd04d,Namespace:calico-system,Attempt:0,} returns sandbox id \"6ab463930bf20f5ce272ea7e7a2376b0bfc6c47c6a6305b6a0ef65d9e8a6be4d\"" May 27 17:43:11.365168 systemd-networkd[1436]: calif12cf5827a3: Gained IPv6LL May 27 17:43:11.706233 containerd[1563]: time="2025-05-27T17:43:11.705977716Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-5t5kd,Uid:6221b71f-4031-4dc0-9788-8de3f6d71ea2,Namespace:kube-system,Attempt:0,}" May 27 17:43:11.710303 containerd[1563]: time="2025-05-27T17:43:11.706378124Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-kj59m,Uid:f557bed3-e1e9-40fb-854c-2987c8546de4,Namespace:kube-system,Attempt:0,}" May 27 17:43:12.258087 systemd-networkd[1436]: cali734c62289bb: Link UP May 27 17:43:12.259552 systemd-networkd[1436]: cali734c62289bb: Gained carrier May 27 17:43:12.318174 containerd[1563]: 2025-05-27 17:43:11.949 [INFO][4527] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 27 17:43:12.318174 containerd[1563]: 2025-05-27 17:43:11.998 [INFO][4527] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344--0--0--0d28c620010305ab17c8.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--5t5kd-eth0 coredns-668d6bf9bc- kube-system 6221b71f-4031-4dc0-9788-8de3f6d71ea2 830 0 2025-05-27 17:42:30 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal coredns-668d6bf9bc-5t5kd eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali734c62289bb [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} 
ContainerID="07b085176bedb8dff368d45f71d4328ddd6597ead2ae92a4ce855594480021d2" Namespace="kube-system" Pod="coredns-668d6bf9bc-5t5kd" WorkloadEndpoint="ci--4344--0--0--0d28c620010305ab17c8.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--5t5kd-" May 27 17:43:12.318174 containerd[1563]: 2025-05-27 17:43:12.000 [INFO][4527] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="07b085176bedb8dff368d45f71d4328ddd6597ead2ae92a4ce855594480021d2" Namespace="kube-system" Pod="coredns-668d6bf9bc-5t5kd" WorkloadEndpoint="ci--4344--0--0--0d28c620010305ab17c8.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--5t5kd-eth0" May 27 17:43:12.318174 containerd[1563]: 2025-05-27 17:43:12.144 [INFO][4560] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="07b085176bedb8dff368d45f71d4328ddd6597ead2ae92a4ce855594480021d2" HandleID="k8s-pod-network.07b085176bedb8dff368d45f71d4328ddd6597ead2ae92a4ce855594480021d2" Workload="ci--4344--0--0--0d28c620010305ab17c8.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--5t5kd-eth0" May 27 17:43:12.318174 containerd[1563]: 2025-05-27 17:43:12.144 [INFO][4560] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="07b085176bedb8dff368d45f71d4328ddd6597ead2ae92a4ce855594480021d2" HandleID="k8s-pod-network.07b085176bedb8dff368d45f71d4328ddd6597ead2ae92a4ce855594480021d2" Workload="ci--4344--0--0--0d28c620010305ab17c8.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--5t5kd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f280), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal", "pod":"coredns-668d6bf9bc-5t5kd", "timestamp":"2025-05-27 17:43:12.144452804 +0000 UTC"}, Hostname:"ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 17:43:12.318174 containerd[1563]: 2025-05-27 17:43:12.144 [INFO][4560] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 17:43:12.318174 containerd[1563]: 2025-05-27 17:43:12.144 [INFO][4560] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 27 17:43:12.318174 containerd[1563]: 2025-05-27 17:43:12.144 [INFO][4560] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal' May 27 17:43:12.318174 containerd[1563]: 2025-05-27 17:43:12.172 [INFO][4560] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.07b085176bedb8dff368d45f71d4328ddd6597ead2ae92a4ce855594480021d2" host="ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal" May 27 17:43:12.318174 containerd[1563]: 2025-05-27 17:43:12.181 [INFO][4560] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal" May 27 17:43:12.318174 containerd[1563]: 2025-05-27 17:43:12.193 [INFO][4560] ipam/ipam.go 511: Trying affinity for 192.168.28.64/26 host="ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal" May 27 17:43:12.318174 containerd[1563]: 2025-05-27 17:43:12.202 [INFO][4560] ipam/ipam.go 158: Attempting to load block cidr=192.168.28.64/26 host="ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal" May 27 17:43:12.318174 containerd[1563]: 2025-05-27 17:43:12.208 [INFO][4560] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.28.64/26 host="ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal" May 27 17:43:12.318174 containerd[1563]: 2025-05-27 17:43:12.208 [INFO][4560] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.28.64/26 handle="k8s-pod-network.07b085176bedb8dff368d45f71d4328ddd6597ead2ae92a4ce855594480021d2" 
host="ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal" May 27 17:43:12.318174 containerd[1563]: 2025-05-27 17:43:12.212 [INFO][4560] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.07b085176bedb8dff368d45f71d4328ddd6597ead2ae92a4ce855594480021d2 May 27 17:43:12.318174 containerd[1563]: 2025-05-27 17:43:12.220 [INFO][4560] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.28.64/26 handle="k8s-pod-network.07b085176bedb8dff368d45f71d4328ddd6597ead2ae92a4ce855594480021d2" host="ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal" May 27 17:43:12.318174 containerd[1563]: 2025-05-27 17:43:12.234 [INFO][4560] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.28.71/26] block=192.168.28.64/26 handle="k8s-pod-network.07b085176bedb8dff368d45f71d4328ddd6597ead2ae92a4ce855594480021d2" host="ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal" May 27 17:43:12.318174 containerd[1563]: 2025-05-27 17:43:12.236 [INFO][4560] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.28.71/26] handle="k8s-pod-network.07b085176bedb8dff368d45f71d4328ddd6597ead2ae92a4ce855594480021d2" host="ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal" May 27 17:43:12.318174 containerd[1563]: 2025-05-27 17:43:12.236 [INFO][4560] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 27 17:43:12.318174 containerd[1563]: 2025-05-27 17:43:12.236 [INFO][4560] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.28.71/26] IPv6=[] ContainerID="07b085176bedb8dff368d45f71d4328ddd6597ead2ae92a4ce855594480021d2" HandleID="k8s-pod-network.07b085176bedb8dff368d45f71d4328ddd6597ead2ae92a4ce855594480021d2" Workload="ci--4344--0--0--0d28c620010305ab17c8.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--5t5kd-eth0" May 27 17:43:12.322963 containerd[1563]: 2025-05-27 17:43:12.247 [INFO][4527] cni-plugin/k8s.go 418: Populated endpoint ContainerID="07b085176bedb8dff368d45f71d4328ddd6597ead2ae92a4ce855594480021d2" Namespace="kube-system" Pod="coredns-668d6bf9bc-5t5kd" WorkloadEndpoint="ci--4344--0--0--0d28c620010305ab17c8.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--5t5kd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344--0--0--0d28c620010305ab17c8.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--5t5kd-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"6221b71f-4031-4dc0-9788-8de3f6d71ea2", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 17, 42, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal", ContainerID:"", Pod:"coredns-668d6bf9bc-5t5kd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.28.71/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali734c62289bb", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 17:43:12.322963 containerd[1563]: 2025-05-27 17:43:12.249 [INFO][4527] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.28.71/32] ContainerID="07b085176bedb8dff368d45f71d4328ddd6597ead2ae92a4ce855594480021d2" Namespace="kube-system" Pod="coredns-668d6bf9bc-5t5kd" WorkloadEndpoint="ci--4344--0--0--0d28c620010305ab17c8.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--5t5kd-eth0" May 27 17:43:12.322963 containerd[1563]: 2025-05-27 17:43:12.249 [INFO][4527] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali734c62289bb ContainerID="07b085176bedb8dff368d45f71d4328ddd6597ead2ae92a4ce855594480021d2" Namespace="kube-system" Pod="coredns-668d6bf9bc-5t5kd" WorkloadEndpoint="ci--4344--0--0--0d28c620010305ab17c8.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--5t5kd-eth0" May 27 17:43:12.322963 containerd[1563]: 2025-05-27 17:43:12.261 [INFO][4527] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="07b085176bedb8dff368d45f71d4328ddd6597ead2ae92a4ce855594480021d2" Namespace="kube-system" Pod="coredns-668d6bf9bc-5t5kd" WorkloadEndpoint="ci--4344--0--0--0d28c620010305ab17c8.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--5t5kd-eth0" May 27 17:43:12.322963 containerd[1563]: 2025-05-27 17:43:12.262 [INFO][4527] cni-plugin/k8s.go 
446: Added Mac, interface name, and active container ID to endpoint ContainerID="07b085176bedb8dff368d45f71d4328ddd6597ead2ae92a4ce855594480021d2" Namespace="kube-system" Pod="coredns-668d6bf9bc-5t5kd" WorkloadEndpoint="ci--4344--0--0--0d28c620010305ab17c8.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--5t5kd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344--0--0--0d28c620010305ab17c8.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--5t5kd-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"6221b71f-4031-4dc0-9788-8de3f6d71ea2", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 17, 42, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal", ContainerID:"07b085176bedb8dff368d45f71d4328ddd6597ead2ae92a4ce855594480021d2", Pod:"coredns-668d6bf9bc-5t5kd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.28.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali734c62289bb", MAC:"4e:c1:40:2b:28:95", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 17:43:12.322963 containerd[1563]: 2025-05-27 17:43:12.305 [INFO][4527] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="07b085176bedb8dff368d45f71d4328ddd6597ead2ae92a4ce855594480021d2" Namespace="kube-system" Pod="coredns-668d6bf9bc-5t5kd" WorkloadEndpoint="ci--4344--0--0--0d28c620010305ab17c8.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--5t5kd-eth0" May 27 17:43:12.403955 containerd[1563]: time="2025-05-27T17:43:12.403074284Z" level=info msg="connecting to shim 07b085176bedb8dff368d45f71d4328ddd6597ead2ae92a4ce855594480021d2" address="unix:///run/containerd/s/9889fc4eabe945fb13102739fb6c5f746392d880ebcf0092da11f5627eecce3c" namespace=k8s.io protocol=ttrpc version=3 May 27 17:43:12.441263 systemd-networkd[1436]: cali40e4c2d0049: Link UP May 27 17:43:12.462249 systemd-networkd[1436]: cali40e4c2d0049: Gained carrier May 27 17:43:12.485614 systemd[1]: Started cri-containerd-07b085176bedb8dff368d45f71d4328ddd6597ead2ae92a4ce855594480021d2.scope - libcontainer container 07b085176bedb8dff368d45f71d4328ddd6597ead2ae92a4ce855594480021d2. 
May 27 17:43:12.518052 systemd-networkd[1436]: cali128f8b31a9f: Gained IPv6LL May 27 17:43:12.520763 containerd[1563]: 2025-05-27 17:43:11.953 [INFO][4536] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 27 17:43:12.520763 containerd[1563]: 2025-05-27 17:43:12.008 [INFO][4536] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344--0--0--0d28c620010305ab17c8.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--kj59m-eth0 coredns-668d6bf9bc- kube-system f557bed3-e1e9-40fb-854c-2987c8546de4 831 0 2025-05-27 17:42:30 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal coredns-668d6bf9bc-kj59m eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali40e4c2d0049 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="62e88303951548de42faa431a84fcb7f01aed9f34e841929ae1f42f6029428e6" Namespace="kube-system" Pod="coredns-668d6bf9bc-kj59m" WorkloadEndpoint="ci--4344--0--0--0d28c620010305ab17c8.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--kj59m-" May 27 17:43:12.520763 containerd[1563]: 2025-05-27 17:43:12.009 [INFO][4536] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="62e88303951548de42faa431a84fcb7f01aed9f34e841929ae1f42f6029428e6" Namespace="kube-system" Pod="coredns-668d6bf9bc-kj59m" WorkloadEndpoint="ci--4344--0--0--0d28c620010305ab17c8.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--kj59m-eth0" May 27 17:43:12.520763 containerd[1563]: 2025-05-27 17:43:12.144 [INFO][4562] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="62e88303951548de42faa431a84fcb7f01aed9f34e841929ae1f42f6029428e6" HandleID="k8s-pod-network.62e88303951548de42faa431a84fcb7f01aed9f34e841929ae1f42f6029428e6" 
Workload="ci--4344--0--0--0d28c620010305ab17c8.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--kj59m-eth0" May 27 17:43:12.520763 containerd[1563]: 2025-05-27 17:43:12.146 [INFO][4562] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="62e88303951548de42faa431a84fcb7f01aed9f34e841929ae1f42f6029428e6" HandleID="k8s-pod-network.62e88303951548de42faa431a84fcb7f01aed9f34e841929ae1f42f6029428e6" Workload="ci--4344--0--0--0d28c620010305ab17c8.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--kj59m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e390), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal", "pod":"coredns-668d6bf9bc-kj59m", "timestamp":"2025-05-27 17:43:12.144332909 +0000 UTC"}, Hostname:"ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 17:43:12.520763 containerd[1563]: 2025-05-27 17:43:12.147 [INFO][4562] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 17:43:12.520763 containerd[1563]: 2025-05-27 17:43:12.236 [INFO][4562] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 27 17:43:12.520763 containerd[1563]: 2025-05-27 17:43:12.236 [INFO][4562] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal' May 27 17:43:12.520763 containerd[1563]: 2025-05-27 17:43:12.287 [INFO][4562] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.62e88303951548de42faa431a84fcb7f01aed9f34e841929ae1f42f6029428e6" host="ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal" May 27 17:43:12.520763 containerd[1563]: 2025-05-27 17:43:12.311 [INFO][4562] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal" May 27 17:43:12.520763 containerd[1563]: 2025-05-27 17:43:12.333 [INFO][4562] ipam/ipam.go 511: Trying affinity for 192.168.28.64/26 host="ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal" May 27 17:43:12.520763 containerd[1563]: 2025-05-27 17:43:12.339 [INFO][4562] ipam/ipam.go 158: Attempting to load block cidr=192.168.28.64/26 host="ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal" May 27 17:43:12.520763 containerd[1563]: 2025-05-27 17:43:12.348 [INFO][4562] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.28.64/26 host="ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal" May 27 17:43:12.520763 containerd[1563]: 2025-05-27 17:43:12.348 [INFO][4562] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.28.64/26 handle="k8s-pod-network.62e88303951548de42faa431a84fcb7f01aed9f34e841929ae1f42f6029428e6" host="ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal" May 27 17:43:12.520763 containerd[1563]: 2025-05-27 17:43:12.353 [INFO][4562] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.62e88303951548de42faa431a84fcb7f01aed9f34e841929ae1f42f6029428e6 May 27 17:43:12.520763 containerd[1563]: 2025-05-27 17:43:12.362 [INFO][4562] ipam/ipam.go 1243: Writing block in order to claim IPs 
block=192.168.28.64/26 handle="k8s-pod-network.62e88303951548de42faa431a84fcb7f01aed9f34e841929ae1f42f6029428e6" host="ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal" May 27 17:43:12.520763 containerd[1563]: 2025-05-27 17:43:12.377 [INFO][4562] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.28.72/26] block=192.168.28.64/26 handle="k8s-pod-network.62e88303951548de42faa431a84fcb7f01aed9f34e841929ae1f42f6029428e6" host="ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal" May 27 17:43:12.520763 containerd[1563]: 2025-05-27 17:43:12.377 [INFO][4562] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.28.72/26] handle="k8s-pod-network.62e88303951548de42faa431a84fcb7f01aed9f34e841929ae1f42f6029428e6" host="ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal" May 27 17:43:12.520763 containerd[1563]: 2025-05-27 17:43:12.378 [INFO][4562] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 27 17:43:12.520763 containerd[1563]: 2025-05-27 17:43:12.379 [INFO][4562] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.28.72/26] IPv6=[] ContainerID="62e88303951548de42faa431a84fcb7f01aed9f34e841929ae1f42f6029428e6" HandleID="k8s-pod-network.62e88303951548de42faa431a84fcb7f01aed9f34e841929ae1f42f6029428e6" Workload="ci--4344--0--0--0d28c620010305ab17c8.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--kj59m-eth0" May 27 17:43:12.522101 containerd[1563]: 2025-05-27 17:43:12.400 [INFO][4536] cni-plugin/k8s.go 418: Populated endpoint ContainerID="62e88303951548de42faa431a84fcb7f01aed9f34e841929ae1f42f6029428e6" Namespace="kube-system" Pod="coredns-668d6bf9bc-kj59m" WorkloadEndpoint="ci--4344--0--0--0d28c620010305ab17c8.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--kj59m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ci--4344--0--0--0d28c620010305ab17c8.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--kj59m-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"f557bed3-e1e9-40fb-854c-2987c8546de4", ResourceVersion:"831", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 17, 42, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal", ContainerID:"", Pod:"coredns-668d6bf9bc-kj59m", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.28.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali40e4c2d0049", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 17:43:12.522101 containerd[1563]: 2025-05-27 17:43:12.412 [INFO][4536] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.28.72/32] ContainerID="62e88303951548de42faa431a84fcb7f01aed9f34e841929ae1f42f6029428e6" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-kj59m" WorkloadEndpoint="ci--4344--0--0--0d28c620010305ab17c8.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--kj59m-eth0" May 27 17:43:12.522101 containerd[1563]: 2025-05-27 17:43:12.412 [INFO][4536] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali40e4c2d0049 ContainerID="62e88303951548de42faa431a84fcb7f01aed9f34e841929ae1f42f6029428e6" Namespace="kube-system" Pod="coredns-668d6bf9bc-kj59m" WorkloadEndpoint="ci--4344--0--0--0d28c620010305ab17c8.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--kj59m-eth0" May 27 17:43:12.522101 containerd[1563]: 2025-05-27 17:43:12.470 [INFO][4536] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="62e88303951548de42faa431a84fcb7f01aed9f34e841929ae1f42f6029428e6" Namespace="kube-system" Pod="coredns-668d6bf9bc-kj59m" WorkloadEndpoint="ci--4344--0--0--0d28c620010305ab17c8.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--kj59m-eth0" May 27 17:43:12.522101 containerd[1563]: 2025-05-27 17:43:12.473 [INFO][4536] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="62e88303951548de42faa431a84fcb7f01aed9f34e841929ae1f42f6029428e6" Namespace="kube-system" Pod="coredns-668d6bf9bc-kj59m" WorkloadEndpoint="ci--4344--0--0--0d28c620010305ab17c8.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--kj59m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344--0--0--0d28c620010305ab17c8.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--kj59m-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"f557bed3-e1e9-40fb-854c-2987c8546de4", ResourceVersion:"831", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 17, 42, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", 
"projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344-0-0-0d28c620010305ab17c8.c.flatcar-212911.internal", ContainerID:"62e88303951548de42faa431a84fcb7f01aed9f34e841929ae1f42f6029428e6", Pod:"coredns-668d6bf9bc-kj59m", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.28.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali40e4c2d0049", MAC:"62:f4:7b:22:24:74", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 17:43:12.522101 containerd[1563]: 2025-05-27 17:43:12.508 [INFO][4536] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="62e88303951548de42faa431a84fcb7f01aed9f34e841929ae1f42f6029428e6" Namespace="kube-system" Pod="coredns-668d6bf9bc-kj59m" WorkloadEndpoint="ci--4344--0--0--0d28c620010305ab17c8.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--kj59m-eth0" May 27 17:43:12.608330 containerd[1563]: time="2025-05-27T17:43:12.607851607Z" level=info msg="connecting to shim 62e88303951548de42faa431a84fcb7f01aed9f34e841929ae1f42f6029428e6" address="unix:///run/containerd/s/0e622a5322935e1838fa2e00757061e653289a8b0d84c807a26e49f820c37054" 
namespace=k8s.io protocol=ttrpc version=3 May 27 17:43:12.701546 systemd[1]: Started cri-containerd-62e88303951548de42faa431a84fcb7f01aed9f34e841929ae1f42f6029428e6.scope - libcontainer container 62e88303951548de42faa431a84fcb7f01aed9f34e841929ae1f42f6029428e6. May 27 17:43:12.790396 containerd[1563]: time="2025-05-27T17:43:12.789847710Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-5t5kd,Uid:6221b71f-4031-4dc0-9788-8de3f6d71ea2,Namespace:kube-system,Attempt:0,} returns sandbox id \"07b085176bedb8dff368d45f71d4328ddd6597ead2ae92a4ce855594480021d2\"" May 27 17:43:12.801310 containerd[1563]: time="2025-05-27T17:43:12.801011229Z" level=info msg="CreateContainer within sandbox \"07b085176bedb8dff368d45f71d4328ddd6597ead2ae92a4ce855594480021d2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 27 17:43:12.833486 containerd[1563]: time="2025-05-27T17:43:12.832451224Z" level=info msg="Container cceaa280ee574af121724fe66993f7f9420dbc7514431d2c51faaa81ba7e46c8: CDI devices from CRI Config.CDIDevices: []" May 27 17:43:12.864467 containerd[1563]: time="2025-05-27T17:43:12.864415407Z" level=info msg="CreateContainer within sandbox \"07b085176bedb8dff368d45f71d4328ddd6597ead2ae92a4ce855594480021d2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"cceaa280ee574af121724fe66993f7f9420dbc7514431d2c51faaa81ba7e46c8\"" May 27 17:43:12.869356 containerd[1563]: time="2025-05-27T17:43:12.868919958Z" level=info msg="StartContainer for \"cceaa280ee574af121724fe66993f7f9420dbc7514431d2c51faaa81ba7e46c8\"" May 27 17:43:12.874812 containerd[1563]: time="2025-05-27T17:43:12.874730875Z" level=info msg="connecting to shim cceaa280ee574af121724fe66993f7f9420dbc7514431d2c51faaa81ba7e46c8" address="unix:///run/containerd/s/9889fc4eabe945fb13102739fb6c5f746392d880ebcf0092da11f5627eecce3c" protocol=ttrpc version=3 May 27 17:43:12.915811 containerd[1563]: time="2025-05-27T17:43:12.915439934Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-kj59m,Uid:f557bed3-e1e9-40fb-854c-2987c8546de4,Namespace:kube-system,Attempt:0,} returns sandbox id \"62e88303951548de42faa431a84fcb7f01aed9f34e841929ae1f42f6029428e6\"" May 27 17:43:12.923714 containerd[1563]: time="2025-05-27T17:43:12.923664121Z" level=info msg="CreateContainer within sandbox \"62e88303951548de42faa431a84fcb7f01aed9f34e841929ae1f42f6029428e6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 27 17:43:12.961972 systemd[1]: Started cri-containerd-cceaa280ee574af121724fe66993f7f9420dbc7514431d2c51faaa81ba7e46c8.scope - libcontainer container cceaa280ee574af121724fe66993f7f9420dbc7514431d2c51faaa81ba7e46c8. May 27 17:43:12.968140 containerd[1563]: time="2025-05-27T17:43:12.968074923Z" level=info msg="Container 71ab84acf415f23b5fd69d1a9b2e1c152675cd1a1819f0e4b5c7ab0ae392b714: CDI devices from CRI Config.CDIDevices: []" May 27 17:43:12.991483 containerd[1563]: time="2025-05-27T17:43:12.991429607Z" level=info msg="CreateContainer within sandbox \"62e88303951548de42faa431a84fcb7f01aed9f34e841929ae1f42f6029428e6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"71ab84acf415f23b5fd69d1a9b2e1c152675cd1a1819f0e4b5c7ab0ae392b714\"" May 27 17:43:12.995248 containerd[1563]: time="2025-05-27T17:43:12.995212282Z" level=info msg="StartContainer for \"71ab84acf415f23b5fd69d1a9b2e1c152675cd1a1819f0e4b5c7ab0ae392b714\"" May 27 17:43:13.002841 containerd[1563]: time="2025-05-27T17:43:13.002780687Z" level=info msg="connecting to shim 71ab84acf415f23b5fd69d1a9b2e1c152675cd1a1819f0e4b5c7ab0ae392b714" address="unix:///run/containerd/s/0e622a5322935e1838fa2e00757061e653289a8b0d84c807a26e49f820c37054" protocol=ttrpc version=3 May 27 17:43:13.068685 systemd[1]: Started cri-containerd-71ab84acf415f23b5fd69d1a9b2e1c152675cd1a1819f0e4b5c7ab0ae392b714.scope - libcontainer container 71ab84acf415f23b5fd69d1a9b2e1c152675cd1a1819f0e4b5c7ab0ae392b714. 
May 27 17:43:13.138831 containerd[1563]: time="2025-05-27T17:43:13.138772061Z" level=info msg="StartContainer for \"cceaa280ee574af121724fe66993f7f9420dbc7514431d2c51faaa81ba7e46c8\" returns successfully" May 27 17:43:13.195618 containerd[1563]: time="2025-05-27T17:43:13.195568133Z" level=info msg="StartContainer for \"71ab84acf415f23b5fd69d1a9b2e1c152675cd1a1819f0e4b5c7ab0ae392b714\" returns successfully" May 27 17:43:13.346960 containerd[1563]: time="2025-05-27T17:43:13.346597260Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:43:13.349806 containerd[1563]: time="2025-05-27T17:43:13.349719854Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.0: active requests=0, bytes read=47252431" May 27 17:43:13.352059 containerd[1563]: time="2025-05-27T17:43:13.351906284Z" level=info msg="ImageCreate event name:\"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:43:13.357113 containerd[1563]: time="2025-05-27T17:43:13.357053230Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:43:13.359384 containerd[1563]: time="2025-05-27T17:43:13.359259412Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" with image id \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\", size \"48745150\" in 5.018782763s" May 27 17:43:13.359493 containerd[1563]: time="2025-05-27T17:43:13.359412390Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" returns image reference 
\"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\"" May 27 17:43:13.364294 containerd[1563]: time="2025-05-27T17:43:13.364052759Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\"" May 27 17:43:13.366957 containerd[1563]: time="2025-05-27T17:43:13.366846141Z" level=info msg="CreateContainer within sandbox \"9926f125d439e2073e54e71dc0fc1da206b31e3310e9949ae0e01f327623737f\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 27 17:43:13.375899 containerd[1563]: time="2025-05-27T17:43:13.375845396Z" level=info msg="Container 566df6106353045074d84d8c28b07f1f902420dafbee4cf75ff4884391b5dfdf: CDI devices from CRI Config.CDIDevices: []" May 27 17:43:13.401871 containerd[1563]: time="2025-05-27T17:43:13.401812721Z" level=info msg="CreateContainer within sandbox \"9926f125d439e2073e54e71dc0fc1da206b31e3310e9949ae0e01f327623737f\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"566df6106353045074d84d8c28b07f1f902420dafbee4cf75ff4884391b5dfdf\"" May 27 17:43:13.403219 containerd[1563]: time="2025-05-27T17:43:13.403173516Z" level=info msg="StartContainer for \"566df6106353045074d84d8c28b07f1f902420dafbee4cf75ff4884391b5dfdf\"" May 27 17:43:13.407116 containerd[1563]: time="2025-05-27T17:43:13.407043765Z" level=info msg="connecting to shim 566df6106353045074d84d8c28b07f1f902420dafbee4cf75ff4884391b5dfdf" address="unix:///run/containerd/s/43a124965c6472e2968287445bfb8e0a58547a6f982616f33c89b7eccc073216" protocol=ttrpc version=3 May 27 17:43:13.466796 systemd[1]: Started cri-containerd-566df6106353045074d84d8c28b07f1f902420dafbee4cf75ff4884391b5dfdf.scope - libcontainer container 566df6106353045074d84d8c28b07f1f902420dafbee4cf75ff4884391b5dfdf. 
May 27 17:43:13.606692 containerd[1563]: time="2025-05-27T17:43:13.606546015Z" level=info msg="StartContainer for \"566df6106353045074d84d8c28b07f1f902420dafbee4cf75ff4884391b5dfdf\" returns successfully" May 27 17:43:13.680349 containerd[1563]: time="2025-05-27T17:43:13.679231054Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:43:13.682973 containerd[1563]: time="2025-05-27T17:43:13.682708131Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.0: active requests=0, bytes read=77" May 27 17:43:13.688608 containerd[1563]: time="2025-05-27T17:43:13.688543157Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" with image id \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\", size \"48745150\" in 324.060487ms" May 27 17:43:13.688735 containerd[1563]: time="2025-05-27T17:43:13.688611596Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" returns image reference \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\"" May 27 17:43:13.691092 containerd[1563]: time="2025-05-27T17:43:13.690116394Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 27 17:43:13.693731 containerd[1563]: time="2025-05-27T17:43:13.693678698Z" level=info msg="CreateContainer within sandbox \"57fd234fe359fe0ade5bf6257a5d86430bfb51caf2b1e4c8fbee69e5c531f3fa\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 27 17:43:13.733556 containerd[1563]: time="2025-05-27T17:43:13.731783135Z" level=info msg="Container 92bf940571fbe425cb32bf01ab414997140d9eda6a1f97aee29a4126edd04b93: CDI devices from CRI Config.CDIDevices: []" May 27 17:43:13.755574 containerd[1563]: 
time="2025-05-27T17:43:13.755345268Z" level=info msg="CreateContainer within sandbox \"57fd234fe359fe0ade5bf6257a5d86430bfb51caf2b1e4c8fbee69e5c531f3fa\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"92bf940571fbe425cb32bf01ab414997140d9eda6a1f97aee29a4126edd04b93\"" May 27 17:43:13.757708 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2069015427.mount: Deactivated successfully. May 27 17:43:13.764323 containerd[1563]: time="2025-05-27T17:43:13.761874163Z" level=info msg="StartContainer for \"92bf940571fbe425cb32bf01ab414997140d9eda6a1f97aee29a4126edd04b93\"" May 27 17:43:13.768674 containerd[1563]: time="2025-05-27T17:43:13.768636200Z" level=info msg="connecting to shim 92bf940571fbe425cb32bf01ab414997140d9eda6a1f97aee29a4126edd04b93" address="unix:///run/containerd/s/d263665d4010427e920dfbc5765c97a3688b4094a7faf67c31fae4d5de5e9952" protocol=ttrpc version=3 May 27 17:43:13.836111 systemd[1]: Started cri-containerd-92bf940571fbe425cb32bf01ab414997140d9eda6a1f97aee29a4126edd04b93.scope - libcontainer container 92bf940571fbe425cb32bf01ab414997140d9eda6a1f97aee29a4126edd04b93. 
May 27 17:43:13.937755 systemd-networkd[1436]: vxlan.calico: Link UP May 27 17:43:13.937826 systemd-networkd[1436]: vxlan.calico: Gained carrier May 27 17:43:13.988536 systemd-networkd[1436]: cali734c62289bb: Gained IPv6LL May 27 17:43:14.063320 kubelet[2767]: I0527 17:43:14.061445 2767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7897c49469-b54nx" podStartSLOduration=28.039253816 podStartE2EDuration="33.061417025s" podCreationTimestamp="2025-05-27 17:42:41 +0000 UTC" firstStartedPulling="2025-05-27 17:43:08.339666186 +0000 UTC m=+44.789130624" lastFinishedPulling="2025-05-27 17:43:13.361829414 +0000 UTC m=+49.811293833" observedRunningTime="2025-05-27 17:43:14.058007846 +0000 UTC m=+50.507472288" watchObservedRunningTime="2025-05-27 17:43:14.061417025 +0000 UTC m=+50.510881464" May 27 17:43:14.094478 kubelet[2767]: I0527 17:43:14.094412 2767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-kj59m" podStartSLOduration=44.09438605 podStartE2EDuration="44.09438605s" podCreationTimestamp="2025-05-27 17:42:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 17:43:14.090344032 +0000 UTC m=+50.539808473" watchObservedRunningTime="2025-05-27 17:43:14.09438605 +0000 UTC m=+50.543850482" May 27 17:43:14.117734 systemd-networkd[1436]: cali40e4c2d0049: Gained IPv6LL May 27 17:43:14.144969 containerd[1563]: time="2025-05-27T17:43:14.144916901Z" level=info msg="StartContainer for \"92bf940571fbe425cb32bf01ab414997140d9eda6a1f97aee29a4126edd04b93\" returns successfully" May 27 17:43:14.145557 containerd[1563]: time="2025-05-27T17:43:14.145483445Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 
Forbidden" host=ghcr.io May 27 17:43:14.148015 containerd[1563]: time="2025-05-27T17:43:14.147923384Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 27 17:43:14.148504 containerd[1563]: time="2025-05-27T17:43:14.148464662Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 27 17:43:14.148883 kubelet[2767]: E0527 17:43:14.148802 2767 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 27 17:43:14.151351 kubelet[2767]: E0527 17:43:14.148950 2767 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 27 17:43:14.151460 containerd[1563]: time="2025-05-27T17:43:14.150657254Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\"" May 27 
17:43:14.151581 kubelet[2767]: E0527 17:43:14.149240 2767 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-828bq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-78d55f7ddc-hnfkh_calico-system(17a31152-9656-4e7b-b59d-6595424c4d7e): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 17:43:14.153688 kubelet[2767]: E0527 17:43:14.153611 2767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-hnfkh" podUID="17a31152-9656-4e7b-b59d-6595424c4d7e" May 27 17:43:15.061815 kubelet[2767]: I0527 
17:43:15.060545 2767 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 27 17:43:15.066755 kubelet[2767]: E0527 17:43:15.066702 2767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-hnfkh" podUID="17a31152-9656-4e7b-b59d-6595424c4d7e" May 27 17:43:15.091312 kubelet[2767]: I0527 17:43:15.090103 2767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-5t5kd" podStartSLOduration=45.090080136 podStartE2EDuration="45.090080136s" podCreationTimestamp="2025-05-27 17:42:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 17:43:14.129018769 +0000 UTC m=+50.578483209" watchObservedRunningTime="2025-05-27 17:43:15.090080136 +0000 UTC m=+51.539544567" May 27 17:43:15.119309 kubelet[2767]: I0527 17:43:15.116457 2767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7897c49469-5lbz5" podStartSLOduration=28.810284056 podStartE2EDuration="34.116430632s" podCreationTimestamp="2025-05-27 17:42:41 +0000 UTC" firstStartedPulling="2025-05-27 17:43:08.384006619 +0000 UTC m=+44.833471040" lastFinishedPulling="2025-05-27 17:43:13.690153185 +0000 UTC m=+50.139617616" observedRunningTime="2025-05-27 17:43:15.115232489 +0000 UTC m=+51.564696945" watchObservedRunningTime="2025-05-27 17:43:15.116430632 +0000 UTC 
m=+51.565895073" May 27 17:43:15.460546 systemd-networkd[1436]: vxlan.calico: Gained IPv6LL May 27 17:43:16.070793 kubelet[2767]: I0527 17:43:16.070760 2767 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 27 17:43:18.150330 containerd[1563]: time="2025-05-27T17:43:18.149296975Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:43:18.150330 containerd[1563]: time="2025-05-27T17:43:18.150263044Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.0: active requests=0, bytes read=51178512" May 27 17:43:18.152176 containerd[1563]: time="2025-05-27T17:43:18.152132558Z" level=info msg="ImageCreate event name:\"sha256:094053209304a3d20e6561c18d37ac2dc4c7fbb68c1579d9864c303edebffa50\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:43:18.155765 containerd[1563]: time="2025-05-27T17:43:18.155729192Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:eb5bc5c9e7a71f1d8ea69bbcc8e54b84fb7ec1e32d919c8b148f80b770f20182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:43:18.157848 containerd[1563]: time="2025-05-27T17:43:18.157526093Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" with image id \"sha256:094053209304a3d20e6561c18d37ac2dc4c7fbb68c1579d9864c303edebffa50\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:eb5bc5c9e7a71f1d8ea69bbcc8e54b84fb7ec1e32d919c8b148f80b770f20182\", size \"52671183\" in 4.006832634s" May 27 17:43:18.157848 containerd[1563]: time="2025-05-27T17:43:18.157574488Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" returns image reference \"sha256:094053209304a3d20e6561c18d37ac2dc4c7fbb68c1579d9864c303edebffa50\"" May 27 17:43:18.162469 containerd[1563]: 
time="2025-05-27T17:43:18.161722953Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.0\"" May 27 17:43:18.211354 containerd[1563]: time="2025-05-27T17:43:18.210234837Z" level=info msg="CreateContainer within sandbox \"abf575551316cfd29049147c6cd172a57973b88f780e5d0a0d3deb331063c7e2\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 27 17:43:18.253307 containerd[1563]: time="2025-05-27T17:43:18.252496018Z" level=info msg="Container 1f059589bfbc162d5b1bc78917d9e9cde4d9afd48c27917638ac7126c9167e9b: CDI devices from CRI Config.CDIDevices: []" May 27 17:43:18.271064 containerd[1563]: time="2025-05-27T17:43:18.271007638Z" level=info msg="CreateContainer within sandbox \"abf575551316cfd29049147c6cd172a57973b88f780e5d0a0d3deb331063c7e2\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"1f059589bfbc162d5b1bc78917d9e9cde4d9afd48c27917638ac7126c9167e9b\"" May 27 17:43:18.272697 containerd[1563]: time="2025-05-27T17:43:18.272563159Z" level=info msg="StartContainer for \"1f059589bfbc162d5b1bc78917d9e9cde4d9afd48c27917638ac7126c9167e9b\"" May 27 17:43:18.276820 containerd[1563]: time="2025-05-27T17:43:18.276746422Z" level=info msg="connecting to shim 1f059589bfbc162d5b1bc78917d9e9cde4d9afd48c27917638ac7126c9167e9b" address="unix:///run/containerd/s/6e90c95936d3524ad070ade01ce27aa4424561787cb4081a9ad5b211eca50f8b" protocol=ttrpc version=3 May 27 17:43:18.343556 systemd[1]: Started cri-containerd-1f059589bfbc162d5b1bc78917d9e9cde4d9afd48c27917638ac7126c9167e9b.scope - libcontainer container 1f059589bfbc162d5b1bc78917d9e9cde4d9afd48c27917638ac7126c9167e9b. 
May 27 17:43:18.381451 ntpd[1537]: Listen normally on 7 vxlan.calico 192.168.28.64:123 May 27 17:43:18.381591 ntpd[1537]: Listen normally on 8 caliae188c70562 [fe80::ecee:eeff:feee:eeee%4]:123 May 27 17:43:18.382095 ntpd[1537]: 27 May 17:43:18 ntpd[1537]: Listen normally on 7 vxlan.calico 192.168.28.64:123 May 27 17:43:18.382095 ntpd[1537]: 27 May 17:43:18 ntpd[1537]: Listen normally on 8 caliae188c70562 [fe80::ecee:eeff:feee:eeee%4]:123 May 27 17:43:18.382095 ntpd[1537]: 27 May 17:43:18 ntpd[1537]: Listen normally on 9 calid715f0a3252 [fe80::ecee:eeff:feee:eeee%5]:123 May 27 17:43:18.382095 ntpd[1537]: 27 May 17:43:18 ntpd[1537]: Listen normally on 10 cali4e0e1b7a481 [fe80::ecee:eeff:feee:eeee%6]:123 May 27 17:43:18.382095 ntpd[1537]: 27 May 17:43:18 ntpd[1537]: Listen normally on 11 cali62bdeab4ae8 [fe80::ecee:eeff:feee:eeee%7]:123 May 27 17:43:18.382095 ntpd[1537]: 27 May 17:43:18 ntpd[1537]: Listen normally on 12 calif12cf5827a3 [fe80::ecee:eeff:feee:eeee%8]:123 May 27 17:43:18.382095 ntpd[1537]: 27 May 17:43:18 ntpd[1537]: Listen normally on 13 cali128f8b31a9f [fe80::ecee:eeff:feee:eeee%9]:123 May 27 17:43:18.382095 ntpd[1537]: 27 May 17:43:18 ntpd[1537]: Listen normally on 14 cali734c62289bb [fe80::ecee:eeff:feee:eeee%10]:123 May 27 17:43:18.382095 ntpd[1537]: 27 May 17:43:18 ntpd[1537]: Listen normally on 15 cali40e4c2d0049 [fe80::ecee:eeff:feee:eeee%11]:123 May 27 17:43:18.382095 ntpd[1537]: 27 May 17:43:18 ntpd[1537]: Listen normally on 16 vxlan.calico [fe80::64ec:38ff:fe63:c022%12]:123 May 27 17:43:18.381676 ntpd[1537]: Listen normally on 9 calid715f0a3252 [fe80::ecee:eeff:feee:eeee%5]:123 May 27 17:43:18.381736 ntpd[1537]: Listen normally on 10 cali4e0e1b7a481 [fe80::ecee:eeff:feee:eeee%6]:123 May 27 17:43:18.381803 ntpd[1537]: Listen normally on 11 cali62bdeab4ae8 [fe80::ecee:eeff:feee:eeee%7]:123 May 27 17:43:18.381857 ntpd[1537]: Listen normally on 12 calif12cf5827a3 [fe80::ecee:eeff:feee:eeee%8]:123 May 27 17:43:18.381911 ntpd[1537]: Listen normally 
on 13 cali128f8b31a9f [fe80::ecee:eeff:feee:eeee%9]:123 May 27 17:43:18.381963 ntpd[1537]: Listen normally on 14 cali734c62289bb [fe80::ecee:eeff:feee:eeee%10]:123 May 27 17:43:18.382014 ntpd[1537]: Listen normally on 15 cali40e4c2d0049 [fe80::ecee:eeff:feee:eeee%11]:123 May 27 17:43:18.382084 ntpd[1537]: Listen normally on 16 vxlan.calico [fe80::64ec:38ff:fe63:c022%12]:123 May 27 17:43:18.548307 containerd[1563]: time="2025-05-27T17:43:18.546907545Z" level=info msg="StartContainer for \"1f059589bfbc162d5b1bc78917d9e9cde4d9afd48c27917638ac7126c9167e9b\" returns successfully" May 27 17:43:19.334354 containerd[1563]: time="2025-05-27T17:43:19.334294080Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1f059589bfbc162d5b1bc78917d9e9cde4d9afd48c27917638ac7126c9167e9b\" id:\"93b190e064d58e14ef9017668ae10b665723a13d3cfaae1a1ace4aa14ffe1755\" pid:4988 exited_at:{seconds:1748367799 nanos:329046818}" May 27 17:43:19.372398 kubelet[2767]: I0527 17:43:19.371723 2767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-58b97d7dbc-z4gdm" podStartSLOduration=25.345017742 podStartE2EDuration="33.37169735s" podCreationTimestamp="2025-05-27 17:42:46 +0000 UTC" firstStartedPulling="2025-05-27 17:43:10.134498364 +0000 UTC m=+46.583962784" lastFinishedPulling="2025-05-27 17:43:18.161177961 +0000 UTC m=+54.610642392" observedRunningTime="2025-05-27 17:43:19.11448379 +0000 UTC m=+55.563948246" watchObservedRunningTime="2025-05-27 17:43:19.37169735 +0000 UTC m=+55.821161792" May 27 17:43:19.516013 containerd[1563]: time="2025-05-27T17:43:19.515952065Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:43:19.519372 containerd[1563]: time="2025-05-27T17:43:19.519314659Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.0: active requests=0, bytes read=8758390" May 27 17:43:19.522303 
containerd[1563]: time="2025-05-27T17:43:19.521461966Z" level=info msg="ImageCreate event name:\"sha256:d5b08093b7928c0ac1122e59edf69b2e58c6d10ecc8b9e5cffeb809a956dc48e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:43:19.526804 containerd[1563]: time="2025-05-27T17:43:19.526746173Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.0\" with image id \"sha256:d5b08093b7928c0ac1122e59edf69b2e58c6d10ecc8b9e5cffeb809a956dc48e\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:27883a4104876fe239311dd93ce6efd0c4a87de7163d57a4c8d96bd65a287ffd\", size \"10251093\" in 1.363904093s" May 27 17:43:19.526939 containerd[1563]: time="2025-05-27T17:43:19.526806723Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.0\" returns image reference \"sha256:d5b08093b7928c0ac1122e59edf69b2e58c6d10ecc8b9e5cffeb809a956dc48e\"" May 27 17:43:19.527014 containerd[1563]: time="2025-05-27T17:43:19.526967697Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:27883a4104876fe239311dd93ce6efd0c4a87de7163d57a4c8d96bd65a287ffd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:43:19.532985 containerd[1563]: time="2025-05-27T17:43:19.532647137Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 27 17:43:19.533310 containerd[1563]: time="2025-05-27T17:43:19.533267777Z" level=info msg="CreateContainer within sandbox \"6ab463930bf20f5ce272ea7e7a2376b0bfc6c47c6a6305b6a0ef65d9e8a6be4d\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 27 17:43:19.549822 containerd[1563]: time="2025-05-27T17:43:19.549750253Z" level=info msg="Container a2ef679b6ab7f7540ecf99bb10d1a7801fba4fc3bcbfef414a74d26d2410edd1: CDI devices from CRI Config.CDIDevices: []" May 27 17:43:19.569929 containerd[1563]: time="2025-05-27T17:43:19.569872772Z" level=info msg="CreateContainer within sandbox 
\"6ab463930bf20f5ce272ea7e7a2376b0bfc6c47c6a6305b6a0ef65d9e8a6be4d\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"a2ef679b6ab7f7540ecf99bb10d1a7801fba4fc3bcbfef414a74d26d2410edd1\"" May 27 17:43:19.571084 containerd[1563]: time="2025-05-27T17:43:19.571046095Z" level=info msg="StartContainer for \"a2ef679b6ab7f7540ecf99bb10d1a7801fba4fc3bcbfef414a74d26d2410edd1\"" May 27 17:43:19.575541 containerd[1563]: time="2025-05-27T17:43:19.575495186Z" level=info msg="connecting to shim a2ef679b6ab7f7540ecf99bb10d1a7801fba4fc3bcbfef414a74d26d2410edd1" address="unix:///run/containerd/s/65b8641bb3d1167643f7889ef7e333e7dc35d1c6100e532788f0ca7847b2d04d" protocol=ttrpc version=3 May 27 17:43:19.632894 systemd[1]: Started cri-containerd-a2ef679b6ab7f7540ecf99bb10d1a7801fba4fc3bcbfef414a74d26d2410edd1.scope - libcontainer container a2ef679b6ab7f7540ecf99bb10d1a7801fba4fc3bcbfef414a74d26d2410edd1. May 27 17:43:19.667729 containerd[1563]: time="2025-05-27T17:43:19.667655712Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 17:43:19.669498 containerd[1563]: time="2025-05-27T17:43:19.669362463Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 27 17:43:19.669966 containerd[1563]: time="2025-05-27T17:43:19.669409183Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 27 
17:43:19.670453 kubelet[2767]: E0527 17:43:19.670401 2767 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 27 17:43:19.670971 kubelet[2767]: E0527 17:43:19.670681 2767 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 27 17:43:19.670971 kubelet[2767]: E0527 17:43:19.670867 2767 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:a7966d063d7546a698b9174a0808bfab,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2vzcd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-775674897b-2d69w_calico-system(c331eeb0-0298-4bf1-b9ac-6574f2928103): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 17:43:19.674441 containerd[1563]: 
time="2025-05-27T17:43:19.674352723Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 27 17:43:19.735540 containerd[1563]: time="2025-05-27T17:43:19.735491211Z" level=info msg="StartContainer for \"a2ef679b6ab7f7540ecf99bb10d1a7801fba4fc3bcbfef414a74d26d2410edd1\" returns successfully" May 27 17:43:19.804069 containerd[1563]: time="2025-05-27T17:43:19.803976732Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 17:43:19.805967 containerd[1563]: time="2025-05-27T17:43:19.805902909Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 27 17:43:19.806554 containerd[1563]: time="2025-05-27T17:43:19.806033150Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 27 17:43:19.806899 kubelet[2767]: E0527 17:43:19.806663 2767 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" 
image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 27 17:43:19.807323 kubelet[2767]: E0527 17:43:19.806970 2767 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 27 17:43:19.807323 kubelet[2767]: E0527 17:43:19.807210 2767 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2vzcd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext
{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-775674897b-2d69w_calico-system(c331eeb0-0298-4bf1-b9ac-6574f2928103): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 17:43:19.808577 containerd[1563]: time="2025-05-27T17:43:19.808391604Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\"" May 27 17:43:19.810313 kubelet[2767]: E0527 17:43:19.809435 2767 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to 
authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-775674897b-2d69w" podUID="c331eeb0-0298-4bf1-b9ac-6574f2928103" May 27 17:43:21.240409 containerd[1563]: time="2025-05-27T17:43:21.240342532Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:43:21.241796 containerd[1563]: time="2025-05-27T17:43:21.241739625Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0: active requests=0, bytes read=14705639" May 27 17:43:21.243329 containerd[1563]: time="2025-05-27T17:43:21.243243056Z" level=info msg="ImageCreate event name:\"sha256:45c8692ffc029387ee93ba83da8ad26da9749cf2ba6ed03981f8f9933ed5a5b0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:43:21.245948 containerd[1563]: time="2025-05-27T17:43:21.245881432Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:dca5c16181edde2e860463615523ce457cd9dcfca85b7cfdcd6f3ea7de6f2ac8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:43:21.246921 containerd[1563]: time="2025-05-27T17:43:21.246748142Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" with image id \"sha256:45c8692ffc029387ee93ba83da8ad26da9749cf2ba6ed03981f8f9933ed5a5b0\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:dca5c16181edde2e860463615523ce457cd9dcfca85b7cfdcd6f3ea7de6f2ac8\", size \"16198294\" in 1.438038219s" May 27 17:43:21.246921 containerd[1563]: time="2025-05-27T17:43:21.246794232Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" returns image reference 
\"sha256:45c8692ffc029387ee93ba83da8ad26da9749cf2ba6ed03981f8f9933ed5a5b0\"" May 27 17:43:21.251865 containerd[1563]: time="2025-05-27T17:43:21.251823997Z" level=info msg="CreateContainer within sandbox \"6ab463930bf20f5ce272ea7e7a2376b0bfc6c47c6a6305b6a0ef65d9e8a6be4d\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" May 27 17:43:21.264751 containerd[1563]: time="2025-05-27T17:43:21.264704311Z" level=info msg="Container 28302ab3374174ace3a62b7902d858f1d50d0b072c6d306a6a6413f4b405dc40: CDI devices from CRI Config.CDIDevices: []" May 27 17:43:21.278981 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2931611986.mount: Deactivated successfully. May 27 17:43:21.284923 containerd[1563]: time="2025-05-27T17:43:21.284861607Z" level=info msg="CreateContainer within sandbox \"6ab463930bf20f5ce272ea7e7a2376b0bfc6c47c6a6305b6a0ef65d9e8a6be4d\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"28302ab3374174ace3a62b7902d858f1d50d0b072c6d306a6a6413f4b405dc40\"" May 27 17:43:21.286062 containerd[1563]: time="2025-05-27T17:43:21.286023779Z" level=info msg="StartContainer for \"28302ab3374174ace3a62b7902d858f1d50d0b072c6d306a6a6413f4b405dc40\"" May 27 17:43:21.288686 containerd[1563]: time="2025-05-27T17:43:21.288632878Z" level=info msg="connecting to shim 28302ab3374174ace3a62b7902d858f1d50d0b072c6d306a6a6413f4b405dc40" address="unix:///run/containerd/s/65b8641bb3d1167643f7889ef7e333e7dc35d1c6100e532788f0ca7847b2d04d" protocol=ttrpc version=3 May 27 17:43:21.333527 systemd[1]: Started cri-containerd-28302ab3374174ace3a62b7902d858f1d50d0b072c6d306a6a6413f4b405dc40.scope - libcontainer container 28302ab3374174ace3a62b7902d858f1d50d0b072c6d306a6a6413f4b405dc40. 
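Every failed pull in this log reports the same 403 Forbidden from ghcr.io's anonymous-token endpoint. As a minimal sketch (the helper name here is hypothetical, not containerd's API), the token URL shown in those error messages can be reconstructed from the registry host and repository path — the `scope` value is `repository:<name>:pull` with `:` and `/` percent-encoded:

```python
from urllib.parse import urlencode

def anonymous_token_url(registry: str, repository: str, action: str = "pull") -> str:
    """Build the token-endpoint URL a registry client requests before an
    anonymous pull -- the same URL containerd logs when the request fails."""
    query = urlencode({
        # ':' and '/' inside the scope are percent-encoded by urlencode
        "scope": f"repository:{repository}:{action}",
        "service": registry,
    })
    return f"https://{registry}/token?{query}"

# The URL the log reports a 403 Forbidden for:
url = anonymous_token_url("ghcr.io", "flatcar/calico/whisker-backend")
print(url)
# https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io
```

Since the 403 occurs at token fetch, before any blob is transferred (containerd reports `bytes read=86`), the failure is on the registry's authorization side rather than a local image or network problem.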
May 27 17:43:21.400953 containerd[1563]: time="2025-05-27T17:43:21.400805292Z" level=info msg="StartContainer for \"28302ab3374174ace3a62b7902d858f1d50d0b072c6d306a6a6413f4b405dc40\" returns successfully" May 27 17:43:21.858894 kubelet[2767]: I0527 17:43:21.858738 2767 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 May 27 17:43:21.858894 kubelet[2767]: I0527 17:43:21.858901 2767 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock May 27 17:43:22.120507 kubelet[2767]: I0527 17:43:22.120265 2767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-4vmcb" podStartSLOduration=26.22913379 podStartE2EDuration="36.120237605s" podCreationTimestamp="2025-05-27 17:42:46 +0000 UTC" firstStartedPulling="2025-05-27 17:43:11.356998136 +0000 UTC m=+47.806462563" lastFinishedPulling="2025-05-27 17:43:21.248101961 +0000 UTC m=+57.697566378" observedRunningTime="2025-05-27 17:43:22.11925799 +0000 UTC m=+58.568722440" watchObservedRunningTime="2025-05-27 17:43:22.120237605 +0000 UTC m=+58.569702045" May 27 17:43:27.377525 kubelet[2767]: I0527 17:43:27.377007 2767 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 27 17:43:29.706311 containerd[1563]: time="2025-05-27T17:43:29.705975118Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 27 17:43:29.831428 containerd[1563]: time="2025-05-27T17:43:29.831358734Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 17:43:29.833111 containerd[1563]: time="2025-05-27T17:43:29.833007489Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 27 17:43:29.833331 containerd[1563]: time="2025-05-27T17:43:29.833023936Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 27 17:43:29.833444 kubelet[2767]: E0527 17:43:29.833383 2767 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 27 17:43:29.834349 kubelet[2767]: E0527 17:43:29.833459 2767 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 27 17:43:29.834349 kubelet[2767]: E0527 17:43:29.833653 2767 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-828bq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-78d55f7ddc-hnfkh_calico-system(17a31152-9656-4e7b-b59d-6595424c4d7e): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 17:43:29.835362 kubelet[2767]: E0527 17:43:29.835302 2767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-hnfkh" podUID="17a31152-9656-4e7b-b59d-6595424c4d7e" May 27 17:43:31.218596 kubelet[2767]: I0527 
17:43:31.218408 2767 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 27 17:43:34.707318 kubelet[2767]: E0527 17:43:34.707220 2767 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-775674897b-2d69w" podUID="c331eeb0-0298-4bf1-b9ac-6574f2928103" May 27 17:43:36.634586 containerd[1563]: time="2025-05-27T17:43:36.634520316Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bb8861621cfbdd6d0bb1e34f98427dbf66a1bf73e1f6d6fd375e140dc07d81c7\" id:\"6c3b7f211c6211ab720466714e306466202538a1b63bcc38b9698ef842506554\" pid:5109 exited_at:{seconds:1748367816 nanos:633981290}" May 27 17:43:38.211858 containerd[1563]: time="2025-05-27T17:43:38.211575115Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1f059589bfbc162d5b1bc78917d9e9cde4d9afd48c27917638ac7126c9167e9b\" 
id:\"ef28148cc35fe8502eef19f56cb67ebeb331cd5fcea18d842318367facf20385\" pid:5133 exited_at:{seconds:1748367818 nanos:208138945}" May 27 17:43:42.705340 kubelet[2767]: E0527 17:43:42.704984 2767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-hnfkh" podUID="17a31152-9656-4e7b-b59d-6595424c4d7e" May 27 17:43:45.709145 containerd[1563]: time="2025-05-27T17:43:45.708991413Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 27 17:43:45.854293 containerd[1563]: time="2025-05-27T17:43:45.854031548Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 17:43:45.855866 containerd[1563]: time="2025-05-27T17:43:45.855779377Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 27 17:43:45.857473 containerd[1563]: time="2025-05-27T17:43:45.855831386Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 27 17:43:45.857736 kubelet[2767]: E0527 17:43:45.857679 2767 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 27 17:43:45.858447 kubelet[2767]: E0527 17:43:45.857755 2767 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 27 17:43:45.858447 kubelet[2767]: E0527 17:43:45.857946 2767 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:a7966d063d7546a698b9174a0808bfab,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2vzcd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-775674897b-2d69w_calico-system(c331eeb0-0298-4bf1-b9ac-6574f2928103): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 17:43:45.860828 containerd[1563]: 
time="2025-05-27T17:43:45.860794122Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 27 17:43:45.983063 containerd[1563]: time="2025-05-27T17:43:45.982893963Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 17:43:45.984799 containerd[1563]: time="2025-05-27T17:43:45.984699949Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 27 17:43:45.985045 containerd[1563]: time="2025-05-27T17:43:45.984748038Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 27 17:43:45.985292 kubelet[2767]: E0527 17:43:45.985204 2767 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 27 17:43:45.985437 kubelet[2767]: E0527 17:43:45.985298 2767 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 27 17:43:45.986053 kubelet[2767]: E0527 17:43:45.985962 2767 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2vzcd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMou
nt:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-775674897b-2d69w_calico-system(c331eeb0-0298-4bf1-b9ac-6574f2928103): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 17:43:45.987296 kubelet[2767]: E0527 17:43:45.987215 2767 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-775674897b-2d69w" podUID="c331eeb0-0298-4bf1-b9ac-6574f2928103" May 27 17:43:49.174852 containerd[1563]: 
time="2025-05-27T17:43:49.174776885Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1f059589bfbc162d5b1bc78917d9e9cde4d9afd48c27917638ac7126c9167e9b\" id:\"e536cd7ac4e2a8989a0ba2323b14626db5939e6b7238c856faa4b66441fa1eff\" pid:5161 exited_at:{seconds:1748367829 nanos:172328315}" May 27 17:43:49.366412 systemd[1]: Started sshd@7-10.128.0.9:22-139.178.68.195:39144.service - OpenSSH per-connection server daemon (139.178.68.195:39144). May 27 17:43:49.746326 sshd[5174]: Accepted publickey for core from 139.178.68.195 port 39144 ssh2: RSA SHA256:7uv6hSrfUOFE2GLtdKQWjXT5BnA3UXN539X7K8JeYEc May 27 17:43:49.751715 sshd-session[5174]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:43:49.765167 systemd-logind[1546]: New session 8 of user core. May 27 17:43:49.772077 systemd[1]: Started session-8.scope - Session 8 of User core. May 27 17:43:50.176864 sshd[5176]: Connection closed by 139.178.68.195 port 39144 May 27 17:43:50.178582 sshd-session[5174]: pam_unix(sshd:session): session closed for user core May 27 17:43:50.186349 systemd-logind[1546]: Session 8 logged out. Waiting for processes to exit. May 27 17:43:50.187774 systemd[1]: sshd@7-10.128.0.9:22-139.178.68.195:39144.service: Deactivated successfully. May 27 17:43:50.193984 systemd[1]: session-8.scope: Deactivated successfully. May 27 17:43:50.200773 systemd-logind[1546]: Removed session 8. May 27 17:43:55.247680 systemd[1]: Started sshd@8-10.128.0.9:22-139.178.68.195:44894.service - OpenSSH per-connection server daemon (139.178.68.195:44894). May 27 17:43:55.624446 sshd[5189]: Accepted publickey for core from 139.178.68.195 port 44894 ssh2: RSA SHA256:7uv6hSrfUOFE2GLtdKQWjXT5BnA3UXN539X7K8JeYEc May 27 17:43:55.626685 sshd-session[5189]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:43:55.636942 systemd-logind[1546]: New session 9 of user core. 
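The `pod_startup_latency_tracker` entry earlier (17:43:22, for `csi-node-driver-4vmcb`) reports `podStartE2EDuration="36.120237605s"` and `podStartSLOduration=26.22913379`. Those figures follow directly from the timestamps in the same entry: E2E is observedRunningTime minus podCreationTimestamp, and the SLO duration excludes the image-pull window (lastFinishedPulling minus firstStartedPulling). A quick check with the logged timestamps (kubelet prints nanoseconds; `datetime` carries microseconds, so the last three digits are truncated):

```python
from datetime import datetime

FMT = "%Y-%m-%d %H:%M:%S.%f %z"

created  = datetime.strptime("2025-05-27 17:42:46.000000 +0000", FMT)
observed = datetime.strptime("2025-05-27 17:43:22.120237 +0000", FMT)
first_pull = datetime.strptime("2025-05-27 17:43:11.356998 +0000", FMT)
last_pull  = datetime.strptime("2025-05-27 17:43:21.248101 +0000", FMT)

e2e = (observed - created).total_seconds()          # podStartE2EDuration
pulling = (last_pull - first_pull).total_seconds()  # time spent pulling images
slo = e2e - pulling                                 # podStartSLOduration

print(e2e)  # ~36.120237, matching the log up to nanosecond truncation
print(slo)  # ~26.229134, matching podStartSLOduration
```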
May 27 17:43:55.643999 systemd[1]: Started session-9.scope - Session 9 of User core. May 27 17:43:56.006422 sshd[5199]: Connection closed by 139.178.68.195 port 44894 May 27 17:43:56.004501 sshd-session[5189]: pam_unix(sshd:session): session closed for user core May 27 17:43:56.012562 systemd[1]: sshd@8-10.128.0.9:22-139.178.68.195:44894.service: Deactivated successfully. May 27 17:43:56.018296 systemd[1]: session-9.scope: Deactivated successfully. May 27 17:43:56.021877 systemd-logind[1546]: Session 9 logged out. Waiting for processes to exit. May 27 17:43:56.026406 systemd-logind[1546]: Removed session 9. May 27 17:43:57.707007 containerd[1563]: time="2025-05-27T17:43:57.706187353Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 27 17:43:57.916657 containerd[1563]: time="2025-05-27T17:43:57.916582169Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 17:43:57.970854 containerd[1563]: time="2025-05-27T17:43:57.970169545Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 27 17:43:57.970854 containerd[1563]: time="2025-05-27T17:43:57.970181468Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 27 17:43:57.971863 kubelet[2767]: E0527 17:43:57.971465 2767 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull 
and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 27 17:43:57.971863 kubelet[2767]: E0527 17:43:57.971539 2767 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 27 17:43:57.971863 kubelet[2767]: E0527 17:43:57.971741 2767 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-828bq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-78d55f7ddc-hnfkh_calico-system(17a31152-9656-4e7b-b59d-6595424c4d7e): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 17:43:57.973405 kubelet[2767]: E0527 17:43:57.973268 2767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-hnfkh" podUID="17a31152-9656-4e7b-b59d-6595424c4d7e" May 27 17:43:59.706262 kubelet[2767]: E0527 
17:43:59.706032 2767 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-775674897b-2d69w" podUID="c331eeb0-0298-4bf1-b9ac-6574f2928103" May 27 17:44:01.075413 systemd[1]: Started sshd@9-10.128.0.9:22-139.178.68.195:44908.service - OpenSSH per-connection server daemon (139.178.68.195:44908). May 27 17:44:01.458559 sshd[5215]: Accepted publickey for core from 139.178.68.195 port 44908 ssh2: RSA SHA256:7uv6hSrfUOFE2GLtdKQWjXT5BnA3UXN539X7K8JeYEc May 27 17:44:01.461435 sshd-session[5215]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:44:01.476407 systemd-logind[1546]: New session 10 of user core. May 27 17:44:01.479696 systemd[1]: Started session-10.scope - Session 10 of User core. 
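The goldmane pull is attempted at 17:43:29 and again at 17:43:57, with `ImagePullBackOff` ("Back-off pulling image") reports in between — the kubelet delaying retries with a doubling backoff. A sketch of that schedule, assuming the kubelet's default image-pull backoff (10 s base, 300 s cap; the actual values are configuration-dependent):

```python
def image_pull_backoff(base: float = 10.0, cap: float = 300.0, attempts: int = 7):
    """Successive back-off delays: each retry doubles the previous delay,
    clamped at the cap, so repeated failures settle at the cap."""
    delay, delays = base, []
    for _ in range(attempts):
        delays.append(delay)
        delay = min(delay * 2, cap)
    return delays

print(image_pull_backoff())
# [10.0, 20.0, 40.0, 80.0, 160.0, 300.0, 300.0]
```

The observed gaps here (roughly 26–28 s between pull attempts, growing over the session) are consistent with the early part of such a schedule plus sync-loop timing, though this log alone does not confirm the exact base and cap in effect.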
May 27 17:44:01.841266 sshd[5217]: Connection closed by 139.178.68.195 port 44908 May 27 17:44:01.843426 sshd-session[5215]: pam_unix(sshd:session): session closed for user core May 27 17:44:01.852782 systemd-logind[1546]: Session 10 logged out. Waiting for processes to exit. May 27 17:44:01.854562 systemd[1]: sshd@9-10.128.0.9:22-139.178.68.195:44908.service: Deactivated successfully. May 27 17:44:01.859517 systemd[1]: session-10.scope: Deactivated successfully. May 27 17:44:01.863873 systemd-logind[1546]: Removed session 10. May 27 17:44:01.909977 systemd[1]: Started sshd@10-10.128.0.9:22-139.178.68.195:44912.service - OpenSSH per-connection server daemon (139.178.68.195:44912). May 27 17:44:02.289057 sshd[5230]: Accepted publickey for core from 139.178.68.195 port 44912 ssh2: RSA SHA256:7uv6hSrfUOFE2GLtdKQWjXT5BnA3UXN539X7K8JeYEc May 27 17:44:02.291118 sshd-session[5230]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:44:02.300499 systemd-logind[1546]: New session 11 of user core. May 27 17:44:02.307735 systemd[1]: Started session-11.scope - Session 11 of User core. May 27 17:44:02.788541 sshd[5232]: Connection closed by 139.178.68.195 port 44912 May 27 17:44:02.791605 sshd-session[5230]: pam_unix(sshd:session): session closed for user core May 27 17:44:02.800162 systemd-logind[1546]: Session 11 logged out. Waiting for processes to exit. May 27 17:44:02.800958 systemd[1]: sshd@10-10.128.0.9:22-139.178.68.195:44912.service: Deactivated successfully. May 27 17:44:02.806127 systemd[1]: session-11.scope: Deactivated successfully. May 27 17:44:02.813331 systemd-logind[1546]: Removed session 11. May 27 17:44:02.857775 systemd[1]: Started sshd@11-10.128.0.9:22-139.178.68.195:44918.service - OpenSSH per-connection server daemon (139.178.68.195:44918). 
May 27 17:44:03.233752 sshd[5242]: Accepted publickey for core from 139.178.68.195 port 44918 ssh2: RSA SHA256:7uv6hSrfUOFE2GLtdKQWjXT5BnA3UXN539X7K8JeYEc May 27 17:44:03.237939 sshd-session[5242]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:44:03.251386 systemd-logind[1546]: New session 12 of user core. May 27 17:44:03.256850 systemd[1]: Started session-12.scope - Session 12 of User core. May 27 17:44:03.609800 sshd[5244]: Connection closed by 139.178.68.195 port 44918 May 27 17:44:03.612525 sshd-session[5242]: pam_unix(sshd:session): session closed for user core May 27 17:44:03.620026 systemd[1]: sshd@11-10.128.0.9:22-139.178.68.195:44918.service: Deactivated successfully. May 27 17:44:03.621327 systemd-logind[1546]: Session 12 logged out. Waiting for processes to exit. May 27 17:44:03.625816 systemd[1]: session-12.scope: Deactivated successfully. May 27 17:44:03.631947 systemd-logind[1546]: Removed session 12. May 27 17:44:06.682039 containerd[1563]: time="2025-05-27T17:44:06.681974134Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bb8861621cfbdd6d0bb1e34f98427dbf66a1bf73e1f6d6fd375e140dc07d81c7\" id:\"ae6f200d0f09b9ab82261566b0a14969589862fe7fb3c67ea7d5e7da1404dce2\" pid:5274 exit_status:1 exited_at:{seconds:1748367846 nanos:681202270}" May 27 17:44:08.686181 systemd[1]: Started sshd@12-10.128.0.9:22-139.178.68.195:55402.service - OpenSSH per-connection server daemon (139.178.68.195:55402). May 27 17:44:09.067526 sshd[5286]: Accepted publickey for core from 139.178.68.195 port 55402 ssh2: RSA SHA256:7uv6hSrfUOFE2GLtdKQWjXT5BnA3UXN539X7K8JeYEc May 27 17:44:09.070153 sshd-session[5286]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:44:09.083431 systemd-logind[1546]: New session 13 of user core. May 27 17:44:09.088766 systemd[1]: Started session-13.scope - Session 13 of User core. 
May 27 17:44:09.464470 sshd[5288]: Connection closed by 139.178.68.195 port 55402 May 27 17:44:09.466583 sshd-session[5286]: pam_unix(sshd:session): session closed for user core May 27 17:44:09.475766 systemd[1]: sshd@12-10.128.0.9:22-139.178.68.195:55402.service: Deactivated successfully. May 27 17:44:09.482198 systemd[1]: session-13.scope: Deactivated successfully. May 27 17:44:09.486227 systemd-logind[1546]: Session 13 logged out. Waiting for processes to exit. May 27 17:44:09.490340 systemd-logind[1546]: Removed session 13. May 27 17:44:09.704905 kubelet[2767]: E0527 17:44:09.703969 2767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-hnfkh" podUID="17a31152-9656-4e7b-b59d-6595424c4d7e" May 27 17:44:13.709100 kubelet[2767]: E0527 17:44:13.709036 2767 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-775674897b-2d69w" podUID="c331eeb0-0298-4bf1-b9ac-6574f2928103" May 27 17:44:14.531886 systemd[1]: Started sshd@13-10.128.0.9:22-139.178.68.195:35770.service - OpenSSH per-connection server daemon (139.178.68.195:35770). May 27 17:44:14.919863 sshd[5300]: Accepted publickey for core from 139.178.68.195 port 35770 ssh2: RSA SHA256:7uv6hSrfUOFE2GLtdKQWjXT5BnA3UXN539X7K8JeYEc May 27 17:44:14.923570 sshd-session[5300]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:44:14.933223 systemd-logind[1546]: New session 14 of user core. May 27 17:44:14.943903 systemd[1]: Started session-14.scope - Session 14 of User core. May 27 17:44:15.312724 sshd[5302]: Connection closed by 139.178.68.195 port 35770 May 27 17:44:15.313578 sshd-session[5300]: pam_unix(sshd:session): session closed for user core May 27 17:44:15.323004 systemd-logind[1546]: Session 14 logged out. Waiting for processes to exit. May 27 17:44:15.325103 systemd[1]: sshd@13-10.128.0.9:22-139.178.68.195:35770.service: Deactivated successfully. May 27 17:44:15.330931 systemd[1]: session-14.scope: Deactivated successfully. May 27 17:44:15.337894 systemd-logind[1546]: Removed session 14. 
May 27 17:44:19.160187 containerd[1563]: time="2025-05-27T17:44:19.160120419Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1f059589bfbc162d5b1bc78917d9e9cde4d9afd48c27917638ac7126c9167e9b\" id:\"f6d8b61717376bbfce73c143359cd40ef9db0cd96fd132d2facfd0a80a82971f\" pid:5324 exited_at:{seconds:1748367859 nanos:159623458}" May 27 17:44:20.377097 systemd[1]: Started sshd@14-10.128.0.9:22-139.178.68.195:35778.service - OpenSSH per-connection server daemon (139.178.68.195:35778). May 27 17:44:20.741564 sshd[5334]: Accepted publickey for core from 139.178.68.195 port 35778 ssh2: RSA SHA256:7uv6hSrfUOFE2GLtdKQWjXT5BnA3UXN539X7K8JeYEc May 27 17:44:20.745440 sshd-session[5334]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:44:20.753473 systemd-logind[1546]: New session 15 of user core. May 27 17:44:20.759521 systemd[1]: Started session-15.scope - Session 15 of User core. May 27 17:44:21.072905 sshd[5336]: Connection closed by 139.178.68.195 port 35778 May 27 17:44:21.074217 sshd-session[5334]: pam_unix(sshd:session): session closed for user core May 27 17:44:21.080429 systemd[1]: sshd@14-10.128.0.9:22-139.178.68.195:35778.service: Deactivated successfully. May 27 17:44:21.083804 systemd[1]: session-15.scope: Deactivated successfully. May 27 17:44:21.085259 systemd-logind[1546]: Session 15 logged out. Waiting for processes to exit. May 27 17:44:21.087565 systemd-logind[1546]: Removed session 15. 
May 27 17:44:21.705326 kubelet[2767]: E0527 17:44:21.704769 2767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-hnfkh" podUID="17a31152-9656-4e7b-b59d-6595424c4d7e" May 27 17:44:25.710564 kubelet[2767]: E0527 17:44:25.710245 2767 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-775674897b-2d69w" 
podUID="c331eeb0-0298-4bf1-b9ac-6574f2928103" May 27 17:44:26.142851 systemd[1]: Started sshd@15-10.128.0.9:22-139.178.68.195:52500.service - OpenSSH per-connection server daemon (139.178.68.195:52500). May 27 17:44:26.524542 sshd[5351]: Accepted publickey for core from 139.178.68.195 port 52500 ssh2: RSA SHA256:7uv6hSrfUOFE2GLtdKQWjXT5BnA3UXN539X7K8JeYEc May 27 17:44:26.527120 sshd-session[5351]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:44:26.540597 systemd-logind[1546]: New session 16 of user core. May 27 17:44:26.545711 systemd[1]: Started session-16.scope - Session 16 of User core. May 27 17:44:26.913310 sshd[5353]: Connection closed by 139.178.68.195 port 52500 May 27 17:44:26.914106 sshd-session[5351]: pam_unix(sshd:session): session closed for user core May 27 17:44:26.922952 systemd[1]: sshd@15-10.128.0.9:22-139.178.68.195:52500.service: Deactivated successfully. May 27 17:44:26.927701 systemd[1]: session-16.scope: Deactivated successfully. May 27 17:44:26.929825 systemd-logind[1546]: Session 16 logged out. Waiting for processes to exit. May 27 17:44:26.933881 systemd-logind[1546]: Removed session 16. May 27 17:44:26.983692 systemd[1]: Started sshd@16-10.128.0.9:22-139.178.68.195:52508.service - OpenSSH per-connection server daemon (139.178.68.195:52508). May 27 17:44:27.371432 sshd[5365]: Accepted publickey for core from 139.178.68.195 port 52508 ssh2: RSA SHA256:7uv6hSrfUOFE2GLtdKQWjXT5BnA3UXN539X7K8JeYEc May 27 17:44:27.375731 sshd-session[5365]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:44:27.386738 systemd-logind[1546]: New session 17 of user core. May 27 17:44:27.395519 systemd[1]: Started session-17.scope - Session 17 of User core. 
May 27 17:44:27.840902 sshd[5367]: Connection closed by 139.178.68.195 port 52508 May 27 17:44:27.841720 sshd-session[5365]: pam_unix(sshd:session): session closed for user core May 27 17:44:27.852378 systemd[1]: sshd@16-10.128.0.9:22-139.178.68.195:52508.service: Deactivated successfully. May 27 17:44:27.854575 systemd-logind[1546]: Session 17 logged out. Waiting for processes to exit. May 27 17:44:27.859538 systemd[1]: session-17.scope: Deactivated successfully. May 27 17:44:27.867917 systemd-logind[1546]: Removed session 17. May 27 17:44:27.911292 systemd[1]: Started sshd@17-10.128.0.9:22-139.178.68.195:52512.service - OpenSSH per-connection server daemon (139.178.68.195:52512). May 27 17:44:28.289060 sshd[5377]: Accepted publickey for core from 139.178.68.195 port 52512 ssh2: RSA SHA256:7uv6hSrfUOFE2GLtdKQWjXT5BnA3UXN539X7K8JeYEc May 27 17:44:28.291117 sshd-session[5377]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:44:28.301257 systemd-logind[1546]: New session 18 of user core. May 27 17:44:28.304735 systemd[1]: Started session-18.scope - Session 18 of User core. May 27 17:44:29.620719 sshd[5379]: Connection closed by 139.178.68.195 port 52512 May 27 17:44:29.622957 sshd-session[5377]: pam_unix(sshd:session): session closed for user core May 27 17:44:29.630596 systemd[1]: sshd@17-10.128.0.9:22-139.178.68.195:52512.service: Deactivated successfully. May 27 17:44:29.636241 systemd[1]: session-18.scope: Deactivated successfully. May 27 17:44:29.641818 systemd-logind[1546]: Session 18 logged out. Waiting for processes to exit. May 27 17:44:29.644889 systemd-logind[1546]: Removed session 18. May 27 17:44:29.689678 systemd[1]: Started sshd@18-10.128.0.9:22-139.178.68.195:52516.service - OpenSSH per-connection server daemon (139.178.68.195:52516). 
May 27 17:44:30.074517 sshd[5397]: Accepted publickey for core from 139.178.68.195 port 52516 ssh2: RSA SHA256:7uv6hSrfUOFE2GLtdKQWjXT5BnA3UXN539X7K8JeYEc May 27 17:44:30.077618 sshd-session[5397]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:44:30.087541 systemd-logind[1546]: New session 19 of user core. May 27 17:44:30.094388 systemd[1]: Started session-19.scope - Session 19 of User core. May 27 17:44:30.678303 sshd[5399]: Connection closed by 139.178.68.195 port 52516 May 27 17:44:30.680197 sshd-session[5397]: pam_unix(sshd:session): session closed for user core May 27 17:44:30.689444 systemd-logind[1546]: Session 19 logged out. Waiting for processes to exit. May 27 17:44:30.690104 systemd[1]: sshd@18-10.128.0.9:22-139.178.68.195:52516.service: Deactivated successfully. May 27 17:44:30.696961 systemd[1]: session-19.scope: Deactivated successfully. May 27 17:44:30.704233 systemd-logind[1546]: Removed session 19. May 27 17:44:30.748385 systemd[1]: Started sshd@19-10.128.0.9:22-139.178.68.195:52524.service - OpenSSH per-connection server daemon (139.178.68.195:52524). May 27 17:44:31.143346 sshd[5412]: Accepted publickey for core from 139.178.68.195 port 52524 ssh2: RSA SHA256:7uv6hSrfUOFE2GLtdKQWjXT5BnA3UXN539X7K8JeYEc May 27 17:44:31.146901 sshd-session[5412]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:44:31.158666 systemd-logind[1546]: New session 20 of user core. May 27 17:44:31.165536 systemd[1]: Started session-20.scope - Session 20 of User core. May 27 17:44:31.519371 sshd[5414]: Connection closed by 139.178.68.195 port 52524 May 27 17:44:31.520708 sshd-session[5412]: pam_unix(sshd:session): session closed for user core May 27 17:44:31.530967 systemd[1]: sshd@19-10.128.0.9:22-139.178.68.195:52524.service: Deactivated successfully. May 27 17:44:31.536143 systemd[1]: session-20.scope: Deactivated successfully. 
May 27 17:44:31.538336 systemd-logind[1546]: Session 20 logged out. Waiting for processes to exit. May 27 17:44:31.541509 systemd-logind[1546]: Removed session 20. May 27 17:44:35.707574 kubelet[2767]: E0527 17:44:35.707509 2767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-hnfkh" podUID="17a31152-9656-4e7b-b59d-6595424c4d7e" May 27 17:44:36.588805 systemd[1]: Started sshd@20-10.128.0.9:22-139.178.68.195:42506.service - OpenSSH per-connection server daemon (139.178.68.195:42506). May 27 17:44:36.841410 containerd[1563]: time="2025-05-27T17:44:36.840656928Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bb8861621cfbdd6d0bb1e34f98427dbf66a1bf73e1f6d6fd375e140dc07d81c7\" id:\"bb81993edb64b7046ce6041c4bf30fa475c74c6bd72d1acdd784604c679bb3f9\" pid:5446 exited_at:{seconds:1748367876 nanos:839762585}" May 27 17:44:36.989109 sshd[5447]: Accepted publickey for core from 139.178.68.195 port 42506 ssh2: RSA SHA256:7uv6hSrfUOFE2GLtdKQWjXT5BnA3UXN539X7K8JeYEc May 27 17:44:36.991697 sshd-session[5447]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:44:37.000344 systemd-logind[1546]: New session 21 of user core. May 27 17:44:37.009638 systemd[1]: Started session-21.scope - Session 21 of User core. 
May 27 17:44:37.378222 sshd[5459]: Connection closed by 139.178.68.195 port 42506 May 27 17:44:37.379722 sshd-session[5447]: pam_unix(sshd:session): session closed for user core May 27 17:44:37.392988 systemd[1]: sshd@20-10.128.0.9:22-139.178.68.195:42506.service: Deactivated successfully. May 27 17:44:37.393309 systemd-logind[1546]: Session 21 logged out. Waiting for processes to exit. May 27 17:44:37.398585 systemd[1]: session-21.scope: Deactivated successfully. May 27 17:44:37.404780 systemd-logind[1546]: Removed session 21. May 27 17:44:38.395316 containerd[1563]: time="2025-05-27T17:44:38.395199417Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1f059589bfbc162d5b1bc78917d9e9cde4d9afd48c27917638ac7126c9167e9b\" id:\"d05172ffa5aed1feb3db2b850837548ccb2a694140bac732f2dba3505edeb3c9\" pid:5485 exited_at:{seconds:1748367878 nanos:394239582}" May 27 17:44:40.706768 containerd[1563]: time="2025-05-27T17:44:40.705937215Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 27 17:44:40.844420 containerd[1563]: time="2025-05-27T17:44:40.844178309Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 17:44:40.845922 containerd[1563]: time="2025-05-27T17:44:40.845876808Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 27 17:44:40.846298 containerd[1563]: time="2025-05-27T17:44:40.846105926Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 27 17:44:40.847205 kubelet[2767]: E0527 17:44:40.846415 2767 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 27 17:44:40.847205 kubelet[2767]: E0527 17:44:40.846495 2767 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 27 17:44:40.847205 kubelet[2767]: E0527 17:44:40.847117 2767 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:a7966d063d7546a698b9174a0808bfab,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2vzcd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-775674897b-2d69w_calico-system(c331eeb0-0298-4bf1-b9ac-6574f2928103): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 17:44:40.850877 containerd[1563]: 
time="2025-05-27T17:44:40.850839187Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 27 17:44:41.008940 containerd[1563]: time="2025-05-27T17:44:41.008762783Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 17:44:41.010827 containerd[1563]: time="2025-05-27T17:44:41.010638376Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 27 17:44:41.010827 containerd[1563]: time="2025-05-27T17:44:41.010777801Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 27 17:44:41.011101 kubelet[2767]: E0527 17:44:41.010992 2767 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 27 17:44:41.011101 kubelet[2767]: E0527 17:44:41.011058 2767 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 27 17:44:41.011458 kubelet[2767]: E0527 17:44:41.011222 2767 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2vzcd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMou
nt:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-775674897b-2d69w_calico-system(c331eeb0-0298-4bf1-b9ac-6574f2928103): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 17:44:41.013419 kubelet[2767]: E0527 17:44:41.013346 2767 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-775674897b-2d69w" podUID="c331eeb0-0298-4bf1-b9ac-6574f2928103" May 27 17:44:42.444013 systemd[1]: Started 
sshd@21-10.128.0.9:22-139.178.68.195:42522.service - OpenSSH per-connection server daemon (139.178.68.195:42522).
May 27 17:44:42.827940 sshd[5495]: Accepted publickey for core from 139.178.68.195 port 42522 ssh2: RSA SHA256:7uv6hSrfUOFE2GLtdKQWjXT5BnA3UXN539X7K8JeYEc
May 27 17:44:42.831945 sshd-session[5495]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:44:42.844923 systemd-logind[1546]: New session 22 of user core.
May 27 17:44:42.851172 systemd[1]: Started session-22.scope - Session 22 of User core.
May 27 17:44:43.198075 sshd[5497]: Connection closed by 139.178.68.195 port 42522
May 27 17:44:43.198960 sshd-session[5495]: pam_unix(sshd:session): session closed for user core
May 27 17:44:43.207615 systemd-logind[1546]: Session 22 logged out. Waiting for processes to exit.
May 27 17:44:43.209266 systemd[1]: sshd@21-10.128.0.9:22-139.178.68.195:42522.service: Deactivated successfully.
May 27 17:44:43.214252 systemd[1]: session-22.scope: Deactivated successfully.
May 27 17:44:43.217958 systemd-logind[1546]: Removed session 22.
May 27 17:44:48.270544 systemd[1]: Started sshd@22-10.128.0.9:22-139.178.68.195:55272.service - OpenSSH per-connection server daemon (139.178.68.195:55272).
May 27 17:44:48.651303 sshd[5521]: Accepted publickey for core from 139.178.68.195 port 55272 ssh2: RSA SHA256:7uv6hSrfUOFE2GLtdKQWjXT5BnA3UXN539X7K8JeYEc
May 27 17:44:48.654739 sshd-session[5521]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:44:48.668972 systemd-logind[1546]: New session 23 of user core.
May 27 17:44:48.675498 systemd[1]: Started session-23.scope - Session 23 of User core.
May 27 17:44:48.706642 containerd[1563]: time="2025-05-27T17:44:48.706583356Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\""
May 27 17:44:48.836305 containerd[1563]: time="2025-05-27T17:44:48.836142836Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io
May 27 17:44:48.839437 containerd[1563]: time="2025-05-27T17:44:48.839336294Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden"
May 27 17:44:48.839945 containerd[1563]: time="2025-05-27T17:44:48.839380059Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86"
May 27 17:44:48.840291 kubelet[2767]: E0527 17:44:48.840183 2767 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0"
May 27 17:44:48.841519 kubelet[2767]: E0527 17:44:48.840257 2767 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0"
May 27 17:44:48.841519 kubelet[2767]: E0527 17:44:48.841058 2767 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-828bq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-78d55f7ddc-hnfkh_calico-system(17a31152-9656-4e7b-b59d-6595424c4d7e): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError"
May 27 17:44:48.842505 kubelet[2767]: E0527 17:44:48.842449 2767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-hnfkh" podUID="17a31152-9656-4e7b-b59d-6595424c4d7e"
May 27 17:44:49.049362 sshd[5523]: Connection closed by 139.178.68.195 port 55272
May 27 17:44:49.052651 sshd-session[5521]: pam_unix(sshd:session): session closed for user core
May 27 17:44:49.062436 systemd-logind[1546]: Session 23 logged out. Waiting for processes to exit.
May 27 17:44:49.063069 systemd[1]: sshd@22-10.128.0.9:22-139.178.68.195:55272.service: Deactivated successfully.
May 27 17:44:49.067575 systemd[1]: session-23.scope: Deactivated successfully.
May 27 17:44:49.073839 systemd-logind[1546]: Removed session 23.
May 27 17:44:49.156315 containerd[1563]: time="2025-05-27T17:44:49.155966266Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1f059589bfbc162d5b1bc78917d9e9cde4d9afd48c27917638ac7126c9167e9b\" id:\"6f8b543d8859ec51dd6241a522c9fc64f9a06b2d0374f7af22777e0499b67bcb\" pid:5547 exited_at:{seconds:1748367889 nanos:155672782}"